Article

Subgroup Identification and Regression Analysis of Clustered and Heterogeneous Interval-Censored Data

School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(6), 862; https://doi.org/10.3390/math10060862
Submission received: 19 January 2022 / Revised: 28 February 2022 / Accepted: 4 March 2022 / Published: 8 March 2022
(This article belongs to the Special Issue Advances in Computational Statistics and Applications)

Abstract

Clustered and heterogeneous interval-censored data occur in many fields, such as medical studies. For example, in a migraine study with the Netherlands Twin Registry, information including the time to diagnosis of migraine and gender was collected for 3975 monozygotic and dizygotic twins. Since each study subject is observed only at discrete and periodic follow-up times, the failure times of interest (i.e., the time when the individual first had a migraine) are known only to belong to certain intervals and hence are interval-censored. Furthermore, these twins come from different genetic backgrounds and may have differential risks of developing migraines. For simultaneous subgroup identification and regression analysis of such data, we propose a latent Cox model in which the number of subgroups is not assumed a priori but rather estimated in a data-driven manner. A nonparametric maximum likelihood method and an EM algorithm with the monotone ascent property are also developed for estimating the model parameters. Simulation studies are conducted to assess the finite sample performance of the proposed estimation procedure. We further illustrate the proposed methodologies with an empirical analysis of the migraine data.

1. Introduction

Subgroup identification for heterogeneous data has become a ubiquitous problem in a broad range of applications including social science, marketing, and clinical trials. For instance, in clinical trials, heterogeneity may arise due to underlying differences among groups of patients. For patients with similar attributes, disease progression or treatment effects often exhibit close patterns. Therefore, it is valuable to classify the patients into a few homogeneous groups and tailor a disease treatment specifically for each subgroup to optimize the treatment effect. Conceptually, analyzing data from a heterogeneous population consisting of a few homogeneous subgroups amounts to viewing the data as generated from a mixture of subgroups, which leads to a finite mixture model. In unsupervised learning, parametric mixture models have been widely used in many fields. The books [1,2,3,4] and the review paper [5] provide a thorough introduction to finite mixture models and their applications. In addition, mixture models have been applied in reliability analysis ([6,7,8]).
Interval-censored data are common in real applications. In many clinical studies, observations are recorded periodically, so the failure times of interest are known only to lie between successive examination times, which complicates the analysis of this type of data. Refs. [9,10,11] reviewed existing parametric models and nonparametric estimators of survival curves for interval-censored data. In particular, Ref. [12] proposed a nonparametric approach to estimating the survival distribution, and Ref. [13] provided score statistics for parameter estimation with interval-censored data. For heterogeneous interval-censored data, however, mixture models should be considered for subgroup classification, and only limited research has targeted this area. Ref. [14] proposed estimation methods for Gaussian mixtures with interval-censored data using MCMC methodology, and Ref. [15] proposed a semi-parametric mixture model for interval-censored observations in the field of antimicrobial resistance. However, these methods perform density estimation via a mixture model without conducting regression analysis on other observed covariates. Computational difficulties remain in conducting subgroup identification and regression analysis based on mixture models in survival analysis for interval-censored data.
In this paper, motivated by the Netherlands twin study on migraines, we propose a new latent Cox model for analyzing clustered and heterogeneous interval-censored data. The population is separated into a few subgroups according to the covariate effects. The baseline hazard functions for the subgroups as well as the number of subgroups are left unspecified to avoid restrictive distributional assumptions and allow for flexibility.
Compared with existing mixture survival models for right-censored data ([16,17,18]), the proposed model aims to accomplish simultaneous subgroup identification and regression analysis. It is important to note that, relative to right-censored data, the incomplete information and computational complexity of interval-censored data bring greater challenges for these tasks. Moreover, we investigate the heterogeneity driven by unknown covariate effects without specifying the baseline hazard functions or the number of subgroups, which makes the model estimation more challenging and the computation more intensive. Our proposed nonparametric maximum likelihood estimation approach separates the parameters during estimation, which greatly reduces the computational complexity. In addition, the proposed EM algorithm has the monotone ascent property for estimating the model parameters, and numerical studies demonstrate its good performance. A modified Bayesian information criterion is also proposed to select the number of mixing components [19].
The rest of the paper is organized as follows. In Section 2, we present the latent Cox model for clustered interval-censored data. In Section 3, we develop an estimation procedure for the proposed model using the EM algorithm. Selecting the number of subgroups and assessing the finite-sample performance of the proposed methods are presented in Section 4. We further provide an application to migraine data to illustrate the practical utilities of the proposed methods in Section 5.

2. Data and Model

Let $T_{ij}$ denote the response of interest (i.e., the failure time) for the $j$th subject in the $i$th cluster, where $j = 1, \ldots, n_i$ and $i = 1, \ldots, n$; here $n_i$ is the number of subjects in the $i$th cluster and $n$ is the number of clusters in the dataset. Furthermore, $T_{ij}$ is interval-censored and known only to belong to the interval $(L_{ij}, R_{ij}]$. The $q$-dimensional vector of covariates is denoted by $X_{ij} = (X_{ij1}, \ldots, X_{ijq})^{\top}$. The observations are summarized as $Y_{obs} = \{(L_{ij}, R_{ij}], X_{ij};\ i = 1, \ldots, n,\ j = 1, \ldots, n_i\}$. To accommodate heterogeneous covariate effects that may exist among subgroups, we propose a latent Cox model for simultaneous subgroup identification and regression analysis. Specifically, the instantaneous hazard function for the $j$th subject in the $i$th cluster is
$$\lambda_{ij}(t) = \lambda_{0i}(t) \exp(X_{ij}^{\top}\beta_i), \quad i = 1, \ldots, n. \qquad (1)$$
As in [10,20], we make the following two assumptions: (A1) $L_{ij}$ and $R_{ij}$ are random, and (A2) $T_{ij}$ is independent of $(L_{ij}, R_{ij}]$. It is important to note that the baseline hazard functions and the covariate effects are allowed to vary across the clusters and thus accommodate the heterogeneity. In the spirit of mixture modeling, and for extrapolation and interpretation purposes, we further assume that the $n$ clusters come from $M$ subgroups with $M \geq 1$ and that clusters in the same subgroup share the same baseline hazard and covariate effects. In other words, let $G = (G_1, \ldots, G_M)$ be a partition of $\{1, \ldots, n\}$, and let the mixing probabilities be $\pi_m$, $m = 1, \ldots, M$, with $\pi_1 + \cdots + \pi_M = 1$. For each cluster $i = 1, \ldots, n$, with probability $\pi_m$ we have $i \in G_m$, $\lambda_{0i}(\cdot) = \lambda_{0m}(\cdot)$, and $\beta_i = \beta_m$. In practice, the number of subgroups $M$ is unknown and will be estimated in a data-driven way; it is usually reasonable to assume that $M$ is much smaller than $n$. Our goal is to estimate $M$ and the model parameters $\Lambda_0 = (\Lambda_{01}, \ldots, \Lambda_{0M})$, $\beta = (\beta_1, \ldots, \beta_M)$ and $\Pi = (\pi_1, \ldots, \pi_M)$. The observed likelihood function of the $i$-th cluster $\{L_{ij}, R_{ij}, X_{ij}\}_{j=1}^{n_i}$ can be written as
$$f_i(\Lambda_0, \beta, \Pi \mid Y_{obs}) = \sum_{m=1}^{M} \pi_m \cdot f_i^{(m)}(\Lambda_{0m}, \beta_m), \qquad (2)$$
where $f_i^{(m)}(\Lambda_{0m}, \beta_m)$ denotes the likelihood function of the $i$-th cluster when it belongs to the $m$-th subgroup. When the $i$-th cluster comes from the $m$-th subgroup, its hazard function is $\lambda_{ij}^{(m)}(t) = \lambda_{0m}(t)\exp(X_{ij}^{\top}\beta_m)$ for $j = 1, \ldots, n_i$, where $\lambda_{0m}(\cdot)$ is the unspecified baseline hazard function of the $m$-th subgroup, $\Lambda_{0m}(\cdot)$ is the corresponding cumulative baseline hazard, and $\beta_m$ is the corresponding effect of $X_{ij}$ in the $m$-th subgroup. Furthermore, we suppose that $T_{ij}$ is monitored at a sequence of positive time points $U_{ij1} < \cdots < U_{ijK_{ij}}$ and that $\{U_{ijk}: k = 1, \ldots, K_{ij},\ i = 1, \ldots, n,\ j = 1, \ldots, n_i\}$ is independent of $\{T_{ij}: i = 1, \ldots, n,\ j = 1, \ldots, n_i\}$, a conventional assumption for interval-censored data. Let $(L_{ij}, R_{ij}]$ be the shortest time interval that brackets $T_{ij}$, i.e., $L_{ij} = \max\{U_{ijk}: U_{ijk} < T_{ij},\ k = 0, \ldots, K_{ij}\}$ and $R_{ij} = \min\{U_{ijk}: U_{ijk} \geq T_{ij},\ k = 1, \ldots, K_{ij}+1\}$, where $U_{ij0} = 0$ and $U_{ij,K_{ij}+1} = \infty$. Then, we have
$$f_i^{(m)}(\Lambda_{0m}, \beta_m) = \prod_{j=1}^{n_i} \left\{ \exp\left[ -\Lambda_{0m}(L_{ij}) \exp(X_{ij}^{\top}\beta_m) \right] - \exp\left[ -\Lambda_{0m}(R_{ij}) \exp(X_{ij}^{\top}\beta_m) \right] \right\},$$
and the log-likelihood $\ell(\Lambda_0, \beta, \Pi \mid Y_{obs})$ based on the observed data $\{(L_{ij}, R_{ij}], X_{ij};\ i = 1, \ldots, n,\ j = 1, \ldots, n_i\}$ is
$$\ell(\Lambda_0, \beta, \Pi \mid Y_{obs}) = \sum_{i=1}^{n} \log\left[ \sum_{m=1}^{M} \pi_m \cdot f_i^{(m)}(\Lambda_{0m}, \beta_m) \right]. \qquad (3)$$
To estimate $\Lambda_0$, $\beta$ and $\Pi$, we adopt the nonparametric maximum likelihood estimation approach. Let $0 = t_0 < t_1 < \cdots < t_K < \infty$ be the ordered sequence of all $L_{ij}$ and $R_{ij}$ with $R_{ij} < \infty$. The estimator for $\Lambda_{0m}$ is a step function that jumps only at these time points, with respective jump sizes $\lambda_{0m}(t_1), \ldots, \lambda_{0m}(t_K)$. It follows that (3) can be rewritten as
$$\sum_{i=1}^{n} \log\left( \sum_{m=1}^{M} \pi_m \prod_{j=1}^{n_i} \left\{ \exp\left[ -\sum_{t_k \leq L_{ij}} \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m) \right] - \exp\left[ -\sum_{t_k \leq R_{ij}} \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m) \right] \right\} \right). \qquad (4)$$
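As a concrete illustration, once the jump points and jump sizes are fixed, the discretized observed log-likelihood (4) can be evaluated directly. The Python sketch below is our own illustration, not the paper's code: it treats each cluster as a single subject ($n_i = 1$) for brevity, and all function and variable names are hypothetical.

```python
import numpy as np

def log_likelihood(L, R, X, lam0, t, beta, pi):
    """Discretized observed log-likelihood (4), hypothetical sketch for
    singleton clusters (n_i = 1).
      L, R : (n,) interval endpoints (R may be np.inf)
      X    : (n, q) covariates
      lam0 : (M, K) jump sizes lambda_{0m}(t_k);  t : (K,) jump points
      beta : (M, q) subgroup coefficients;  pi : (M,) mixing probabilities
    """
    total = 0.0
    for i in range(len(L)):
        eta = np.exp(X[i] @ beta.T)              # (M,) exp(X_i' beta_m)
        cum_L = lam0[:, t <= L[i]].sum(axis=1)   # sum_{t_k <= L_i} lambda_{0m}(t_k)
        cum_R = np.where(np.isinf(R[i]), np.inf, lam0[:, t <= R[i]].sum(axis=1))
        f = np.exp(-cum_L * eta) - np.exp(-cum_R * eta)  # S_m(L_i) - S_m(R_i)
        total += np.log(np.sum(pi * f))          # mixture over the M subgroups
    return total
```

When $R_{ij} = \infty$ (right censoring), the second exponential vanishes and the contribution reduces to the survival probability at $L_{ij}$.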

3. Estimation and Algorithm

The observed data with unknown subgroup memberships can be formulated as an incomplete-data problem in the EM framework. We view the observed data $\{(L_{ij}, R_{ij}], X_{ij};\ i = 1, \ldots, n,\ j = 1, \ldots, n_i\}$ as incomplete and introduce the unobserved Bernoulli random variables $Z_{im} \sim \mathrm{Bernoulli}(\pi_m)$ for $m = 1, \ldots, M$,
$$Z_{im} = \begin{cases} 1, & \text{if the } i\text{-th cluster } \{(L_{ij}, R_{ij}], X_{ij}\}_{j=1}^{n_i} \text{ belongs to the } m\text{-th subgroup}, \\ 0, & \text{otherwise}, \end{cases} \qquad (5)$$
and Poisson random variables $W_{mijk}$ $(k = 1, \ldots, K)$ with means $\lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m)$. Define $A_{mij} = \sum_{t_k \leq L_{ij}} W_{mijk}$ and $B_{mij} = I(R_{ij} < \infty) \sum_{L_{ij} < t_k \leq R_{ij}} W_{mijk}$. Since the probability of observing $A_{mij} = 0$ and $B_{mij} > 0$ is $\exp[-\sum_{t_k \leq L_{ij}} \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m)] - I(R_{ij} < \infty)\exp[-\sum_{t_k \leq R_{ij}} \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m)]$, the likelihood from the observations $\{(L_{ij}, R_{ij}], X_{ij}, A_{mij} = 0, B_{mij} > 0:\ i = 1, \ldots, n;\ j = 1, \ldots, n_i;\ m = 1, \ldots, M\}$ is the same as (4). Therefore, we develop an EM algorithm to maximize (4) by treating $W_{mijk}$ $(t_k \leq R_{ij}^{*})$ and $Z_{im}$ as missing data, where $R_{ij}^{*} = L_{ij} I(R_{ij} = \infty) + R_{ij} I(R_{ij} < \infty)$. Then, the complete-data log-likelihood is proportional to
$$\ell_{com}(\Lambda_0, \beta, \Pi) \propto \sum_{i=1}^{n} \sum_{m=1}^{M} Z_{im} \left\{ \log(\pi_m) + \sum_{j=1}^{n_i} \sum_{k=1}^{K} I(t_k \leq R_{ij}^{*}) \left[ W_{mijk} \log\lambda_{0m}(t_k) + W_{mijk} X_{ij}^{\top}\beta_m - \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m) \right] \right\}. \qquad (6)$$
In the M-step, maximizing (6) with respect to $\Pi$ and $\Lambda_0$ for any given $\beta_m$ yields
$$\hat{\pi}_m = \sum_{i=1}^{n} Z_{im} / n, \qquad (7)$$
$$\hat{\lambda}_{0m}(t_k) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n_i} I(t_k \leq R_{ij}^{*})\, Z_{im} W_{mijk}}{\sum_{i=1}^{n} \sum_{j=1}^{n_i} I(t_k \leq R_{ij}^{*})\, Z_{im} \exp(X_{ij}^{\top}\beta_m)}, \qquad (8)$$
where $m = 1, \ldots, M$ and $k = 1, \ldots, K$. Substituting (8) into (6), we obtain
$$\ell_{com}(\beta) \propto \sum_{m=1}^{M} \sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{k=1}^{K} I(t_k \leq R_{ij}^{*})\, Z_{im} W_{mijk} \left\{ X_{ij}^{\top}\beta_m - \log\left[ \sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m) \right] \right\}.$$
To update β m , we employ the following Newton–Raphson algorithm
$$\beta_m^{(t+1)} = \beta_m^{(t)} + I^{-1}(\beta_m^{(t)})\, \ell'_{com}(\beta_m^{(t)}), \qquad (9)$$
where
$$\ell'_{com}(\beta_m^{(t)}) = \sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{k=1}^{K} I(t_k \leq R_{ij}^{*})\, Z_{im} W_{mijk} \left[ X_{ij} - \frac{\sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)})\, X_{i'j'}}{\sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)})} \right],$$
$$I(\beta_m^{(t)}) = \sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{k=1}^{K} I(t_k \leq R_{ij}^{*})\, Z_{im} W_{mijk} \left[ \frac{\sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)})\, X_{i'j'} X_{i'j'}^{\top}}{\sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)})} - \frac{\left( \sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)})\, X_{i'j'} \right) \left( \sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)})\, X_{i'j'} \right)^{\top}}{\left( \sum_{i'=1}^{n} \sum_{j'=1}^{n_{i'}} I(t_k \leq R_{i'j'}^{*})\, Z_{i'm} \exp(X_{i'j'}^{\top}\beta_m^{(t)}) \right)^{2}} \right].$$
In the E-step, we evaluate the conditional expectations of $Z_{im}$ and $W_{mijk}$ involved in the M-step. The posterior mean of $Z_{im}$ is
$$\hat{E}(Z_{im}) = \frac{\pi_m \cdot f_i^{(m)}(\Lambda_{0m}, \beta_m)}{\sum_{m'=1}^{M} \pi_{m'} \cdot f_i^{(m')}(\Lambda_{0m'}, \beta_{m'})}, \qquad (10)$$
where
$$f_i^{(m)}(\Lambda_{0m}, \beta_m) = \prod_{j=1}^{n_i} \left\{ \exp\left[ -\sum_{t_k \leq L_{ij}} \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m) \right] - \exp\left[ -\sum_{t_k \leq R_{ij}} \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m) \right] \right\}.$$
In addition, the conditional expectation of $W_{mijk}$ for $t_k \leq R_{ij}^{*}$ is
$$\hat{E}(W_{mijk}) = \frac{I(L_{ij} < t_k \leq R_{ij} < \infty)\, \lambda_{0m}(t_k)\exp(X_{ij}^{\top}\beta_m)}{1 - \exp\left[ -\sum_{L_{ij} < t_{k'} \leq R_{ij}} \lambda_{0m}(t_{k'})\exp(X_{ij}^{\top}\beta_m) \right]}. \qquad (11)$$
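The two conditional expectations (10) and (11) can be sketched in code as follows. This is a hypothetical Python helper of our own (not the paper's implementation), again restricted to singleton clusters ($n_i = 1$) for brevity.

```python
import numpy as np

def e_step(L, R, X, lam0, t, beta, pi):
    """E-step quantities (10) and (11), hypothetical sketch for singleton
    clusters (n_i = 1); all names are ours, not the paper's.

    Returns E(Z_im) with shape (n, M) and E(W_mijk) with shape (n, M, K).
    """
    n, M, K = len(L), len(pi), len(t)
    Z = np.empty((n, M))
    W = np.zeros((n, M, K))
    for i in range(n):
        eta = np.exp(X[i] @ beta.T)                      # (M,)
        in_LR = (t > L[i]) & (t <= R[i]) & np.isfinite(R[i])
        cum_L = lam0[:, t <= L[i]].sum(axis=1)
        cum_LR = lam0[:, in_LR].sum(axis=1)              # sum over (L_i, R_i]
        # (10): posterior subgroup probabilities, proportional to pi_m * f_i^(m)
        if np.isfinite(R[i]):
            f = np.exp(-cum_L * eta) * (1.0 - np.exp(-cum_LR * eta))
        else:                                            # right-censored: S_m(L_i)
            f = np.exp(-cum_L * eta)
        Z[i] = pi * f / np.sum(pi * f)
        # (11): truncated-Poisson mean for t_k in (L_i, R_i], zero elsewhere
        denom = 1.0 - np.exp(-cum_LR * eta)
        for m in range(M):
            if denom[m] > 0:
                W[i, m, in_LR] = lam0[m, in_LR] * eta[m] / denom[m]
    return Z, W
```

Note that $\hat{E}(W_{mijk})$ vanishes for $t_k \leq L_{ij}$, since $A_{mij} = 0$ is observed there.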
Now, we summarize the iterations between the E-step and M-step of the proposed algorithm as follows.
Step 1. Give initial values of $\beta$, $\Pi$, and $\Lambda_0$.
Step 2. Calculate the conditional expectations of $Z_{im}$ and $W_{mijk}$ via (10) and (11).
Step 3. Replace $Z_{im}$ in (7) by $\hat{E}(Z_{im})$ and update the estimate of $\Pi$ via (7).
Step 4. Replace $Z_{im}$ and $W_{mijk}$ in (8) by $\hat{E}(Z_{im})$ and $\hat{E}(W_{mijk})$, then update the estimate of $\Lambda_0$ via (8).
Step 5. Replace $Z_{im}$ and $W_{mijk}$ in (9) by $\hat{E}(Z_{im})$ and $\hat{E}(W_{mijk})$, then update the estimate of $\beta$ via (9).
Step 6. Iterate Steps 2 to 5 until convergence.
We iterate between the E-step and M-step until the sum of the absolute differences of the estimates at two successive iterations is less than $\epsilon$, i.e., the stopping criterion is
$$\|\beta^{(t+1)} - \beta^{(t)}\|_1 + \|\Pi^{(t+1)} - \Pi^{(t)}\|_1 + \|\Lambda_0^{(t+1)} - \Lambda_0^{(t)}\|_1 < \epsilon,$$
where $\|\alpha\|_1$ denotes the $L_1$ norm of $\alpha$, i.e., $\|\alpha\|_1 = \sum_{i=1}^{q} |\alpha_i|$ for $\alpha = (\alpha_1, \ldots, \alpha_q)^{\top}$.
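The iteration pattern of Steps 1–6 and the $L_1$ stopping rule can be sketched as a generic EM driver. The skeleton below is our own hypothetical illustration: `e_step` and `m_step` stand for the updates described above and are left as placeholder callables.

```python
import numpy as np

def run_em(init, e_step, m_step, eps=1e-3, max_iter=500):
    """Generic EM driver for Steps 1-6 with the L1 stopping rule.

    Hypothetical skeleton: `e_step(params)` returns the expected missing-data
    quantities and `m_step(stats)` returns the updated parameter blocks
    (e.g., beta, Pi, Lambda0) as a tuple of arrays; only the iteration
    pattern and the convergence test are specified here.
    """
    params = init                          # Step 1: initial values
    for it in range(max_iter):
        stats = e_step(params)             # Step 2: E(Z_im), E(W_mijk)
        new = m_step(stats)                # Steps 3-5: update Pi, Lambda0, beta
        # Stopping rule: sum of L1 distances over all parameter blocks
        diff = sum(np.abs(np.asarray(a) - np.asarray(b)).sum()
                   for a, b in zip(new, params))
        params = new
        if diff < eps:                     # Step 6: stop when change < eps
            break
    return params, it + 1
```

For instance, plugging in a toy update that halves the distance of a single parameter to its fixed point converges in a handful of iterations under the default tolerance.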
In the following section, we set $\epsilon = 10^{-3}$ and conduct simulation studies to assess the finite sample performance of the proposed method; in particular, we propose a modified BIC criterion to select the number of subgroups $M$.

4. Simulation Study

As in mixture models generally [21], the number of subgroups $M$ in the proposed model is unknown and will be estimated in a data-driven manner. Here, we use the modified Bayesian information criterion (BIC, [19]) to choose the number of components $M$ by minimizing the criterion function
$$\mathrm{BIC}(M) = -2\,\ell(\hat{\Lambda}_0, \hat{\beta}, \hat{\Pi}) + Mq \log(N), \qquad (12)$$
where $\hat{\beta} = (\hat{\beta}_1, \ldots, \hat{\beta}_M)$, $N = \sum_{i=1}^{n} n_i$ is the total sample size, and $q$ is the dimension of $\beta_i$.
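Applying (12) over a set of candidate values of $M$ is straightforward once the maximized log-likelihoods are available; the helper below is a hypothetical sketch of ours (the paper specifies only the criterion itself).

```python
import numpy as np

def select_M(loglik_by_M, q, N):
    """Choose the number of subgroups by minimizing the modified BIC (12).

    loglik_by_M : dict mapping candidate M -> maximized log-likelihood
    q : dimension of beta_m;  N : total sample size sum_i n_i
    (Hypothetical helper names; toy inputs must be supplied by the caller.)
    """
    bic = {M: -2.0 * ll + M * q * np.log(N) for M, ll in loglik_by_M.items()}
    return min(bic, key=bic.get), bic
```

With toy log-likelihoods, the criterion trades off fit against the $Mq\log(N)$ penalty and returns the minimizing $M$.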
In the following, we conduct a set of simulation studies to assess the finite sample performance of our proposed method.
Example 1.
We generate clustered interval-censored data from a latent Cox model with two covariates and three subgroups,
$$\lambda_{ij}(t) = \lambda_{0i}(t) \exp(X_{ij}^{\top}\beta_i), \quad i = 1, \ldots, n,$$
where the covariates $X_{ij1}$ and $X_{ij2}$ are independent and both follow the standard normal distribution. The $n$ clusters are randomly assigned to three subgroups with equal probabilities, i.e., $P(i \in G_1) = P(i \in G_2) = P(i \in G_3) = 1/3$, with $\beta_i = (0.5, 3)^{\top}$ and $\Lambda_{0i}(t) = (t/4)^2$ for $i \in G_1$, $\beta_i = (2, 1)^{\top}$ and $\Lambda_{0i}(t) = \log(1 + t/8)$ for $i \in G_2$, and $\beta_i = (2, 3)^{\top}$ and $\Lambda_{0i}(t) = 2t$ for $i \in G_3$. The cluster size is set to $m$ for each cluster. We consider different combinations of the number of clusters ($n$) and the cluster size ($m$) to assess the performance of the proposed estimation procedure.
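The data-generating mechanism of Example 1 can be sketched as follows. The monitoring scheme is our assumption (the paper does not report its inspection times): we use a unit grid $1, 2, \ldots, 10$ per subject, and draw failure times by inverting $S(t) = \exp\{-\Lambda_{0i}(t)\, e^{X_{ij}^{\top}\beta_i}\}$ with an Exp(1) variate. All names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_example1(n, m):
    """Generate clustered interval-censored data as in Example 1 (a sketch).

    Assumptions of ours, not stated in the paper: monitoring times are a
    unit grid 1, 2, ..., 10 per subject, and (L, R] is the shortest grid
    interval bracketing the failure time T.
    """
    groups = rng.integers(0, 3, size=n)              # equal-probability subgroups
    betas = np.array([[0.5, 3.0], [2.0, 1.0], [2.0, 3.0]])
    # inverses of the three baseline cumulative hazards
    inv_cum = [lambda u: 4.0 * np.sqrt(u),           # Lambda0(t) = (t/4)^2
               lambda u: 8.0 * (np.exp(u) - 1.0),    # Lambda0(t) = log(1 + t/8)
               lambda u: u / 2.0]                    # Lambda0(t) = 2t
    L, R, X = [], [], []
    for g in groups:
        for _ in range(m):
            x = rng.standard_normal(2)
            # invert S(t) = exp(-Lambda0(t) e^{x'beta}) with an Exp(1) draw
            T = inv_cum[g](rng.exponential() / np.exp(x @ betas[g]))
            grid = np.arange(1.0, 11.0)
            L.append(max([0.0] + [u for u in grid if u < T]))
            R.append(min([np.inf] + [u for u in grid if u >= T]))
            X.append(x)
    return np.array(L), np.array(R), np.array(X), groups
```

Subjects whose failure time exceeds the last inspection time receive $R_{ij} = \infty$, i.e., they are right-censored at the final visit.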
We identify the number of subgroups $M$ by minimizing the modified BIC given in (12). Table 1 presents the mean, median, and standard error (s.d.) of the estimated number of subgroups, denoted by $\hat{M}$, and the empirical percentage of $\hat{M}$ equal to the true number of subgroups based on 100 replications. It can be seen from Table 1 that for $(n, m) = (400, 4)$ and $(n, m) = (800, 2)$, the BIC identifies the true number of subgroups in all 100 replications, indicating its favorable performance. Table 2 reports the estimation results for the regression coefficients $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$ and the mixing probabilities $\hat{\pi}_1$ and $\hat{\pi}_2$ based on 100 replications. The proposed method performs well and yields estimators with small biases for the case of $(n, m) = (400, 4)$. However, as we increase the number of clusters to 800 and decrease the number of subjects within each cluster to 2, the biases of some estimators become larger even though the total sample sizes are the same for the two cases.
Example 2.
We simulate data from a latent Cox model with three covariates and two subgroups,
$$\lambda_{ij}(t) = \lambda_{0i}(t) \exp(X_{ij}^{\top}\beta_i), \quad i = 1, \ldots, n,$$
where the covariates $X_{ij} = (X_{ij1}, X_{ij2}, X_{ij3})^{\top}$ are generated from a multivariate normal distribution with mean zero and a first-order autoregressive covariance structure $\Sigma = (\sigma_{st})$ with $\sigma_{st} = 0.5^{|s-t|}$ for $s, t = 1, 2, 3$. The clusters are randomly assigned to two subgroups with equal probabilities, i.e., $P(i \in G_1) = P(i \in G_2) = 1/2$, with $\beta_i = (0.5, 1, 2)^{\top}$ and $\Lambda_{0i}(t) = 4t^2$ for $i \in G_1$, and $\beta_i = (0.5, 1, 2)^{\top}$ and $\Lambda_{0i}(t) = \log(1 + t/8)$ for $i \in G_2$. The cluster size is set to $m$ for all $n$ clusters. We consider the cases of $(n, m) = (400, 3)$ and $(600, 2)$. As in Example 1, we estimate the number of subgroups $M$ by minimizing the modified BIC given in (12). Table 3 reports the mean, median, and standard error (s.d.) of the estimator $\hat{M}$ and the empirical percentage of $\hat{M}$ equal to the true number of subgroups based on 100 replications. We observe that the median of $\hat{M}$ equals the true number of subgroups, 2, and that the mean gets closer to 2 as the number of clusters increases. Moreover, the empirical percentage of correctly identifying the true number of subgroups is close to 1 once the number of clusters becomes moderately large. The estimation results for the regression coefficients $\beta$ and mixing probabilities $\Pi$ are summarized in Table 4. In terms of estimation accuracy, the proposed procedure performs quite well and yields estimators with small biases for $(n, m) = (400, 3)$. As in Example 1, when we increase the number of clusters to 600 and decrease the number of subjects within each cluster to 2, the biases of some estimators become larger even though the total sample sizes are the same for the two cases.
Example 3.
We next generate data from a Cox model with two covariates,
$$\lambda_{ij}(t) = \lambda_0(t) \exp(X_{ij}^{\top}\beta), \quad i = 1, \ldots, n,$$
where the covariates $X_{ij} = (X_{ij1}, X_{ij2})^{\top}$ are generated from a multivariate normal distribution with mean zero and a first-order autoregressive covariance structure $\Sigma = (\sigma_{st})$ with $\sigma_{st} = 0.5^{|s-t|}$ for $s, t = 1, 2$. We set $\beta = (1, 3)^{\top}$ and $\Lambda_0(t) = t^2/16$, and consider $(n, m) = (200, 4)$ or $(400, 2)$. Note that this model corresponds to the latent Cox model with the true number of subgroups $M$ equal to 1.
Based on the BIC criterion in (12), we estimate the number of subgroups $M$ and report the sample mean, median, and standard error (s.d.) of the estimated number of subgroups $\hat{M}$ and the empirical percentage of $\hat{M}$ equal to the true number of subgroups $M$ based on 100 replications. We consider $(n, m) = (200, 4)$ and $(400, 2)$. The results are given in Table 5. We observe that the number of subgroups is correctly identified to be 1 in every replication. The estimation results are summarized in Table 6. The regression coefficients are estimated accurately, with small biases, for $(n, m) = (200, 4)$. As in Examples 1 and 2, when we increase the number of clusters to 400 and decrease the number of subjects within each cluster to 2, the biases of some estimators become larger even though the total sample sizes are the same for the two cases. To assess the estimation accuracy of the cumulative baseline hazard function, Figure 1 plots the true cumulative hazard function $\Lambda_0(t)$ together with the estimated baseline cumulative hazard curve $\hat{\Lambda}_0(t)$. From Figure 1, it can be seen that the two curves are quite close to each other over the time periods $(0, 2)$ and $(6, 12)$. However, because no sample points fall in the time period $(2, 6)$, the two curves exhibit a noticeable difference there.

5. An Application to the Netherlands Twin Study on Migraine

We now apply the proposed model to analyze the Netherlands twin migraine data. The participants were volunteer members of the Netherlands Twin Registry, which is maintained by the Department of Biological Psychology at the Vrije Universiteit in Amsterdam [22]. The data were collected between 1991 and 2002 as part of an ongoing study of health, lifestyle, and genetics involving a large cohort of Dutch twins and their relatives. The primary response of interest in the migraine study is the time when the individual first had a migraine. Since the individuals were followed up on a periodic basis, the time to event may be known only to belong to an interval and hence be interval-censored. The twins form clusters of size 2 and come from different genetic backgrounds, so they can naturally be classified into heterogeneous subgroups based on the genetic profiles of the twin families, which are not directly observed. Our analysis is based on 3975 monozygotic and dizygotic twin pairs. The left and right endpoints of the interval in which the individual first had a migraine (in years) are denoted by $L_{ij}$ and $R_{ij}$, respectively, for the $j$th individual in the $i$th cluster. In this dataset, $L_{ij}$ and $R_{ij}$ are random and are independent of the event time. Furthermore, two covariates are included in the model: gender (1 = male, 0 = female) and the type of twins (1 = monozygotic, 0 = dizygotic).
To explore the heterogeneity across the twins as indicated by their initial health status, household lifestyle, disease progression, and genetic profiles, we assume that the twins can be classified into a few homogeneous subgroups, for each of which the conditional hazard function is postulated by a Cox model. We fit the migraine data by the proposed model with varying $M$, estimating the number of subgroups by minimizing the BIC criterion function in (12); the optimal $M$ is 3. In Table 7, for $M = 1, 2, 3, 4$, we report the maximum log-likelihood values (LL), the BIC values (BIC), and the estimated parameters. The model with three subgroups yields the best fit: the twins can be classified into three homogeneous subgroups with mixing probabilities of 77%, 19%, and 4%, respectively. The estimated regression coefficients for the three subgroups are also detailed in Table 7, and the baseline cumulative hazard functions for the three subgroups are plotted in Figure 2.
For the optimal model selected by the BIC criterion, we calculate the empirical standard errors and 95% confidence intervals of the parameters by the bootstrap method. We repeatedly generated bootstrap samples $G = 500$ times and obtained bootstrap estimates $(\hat{\Pi}_g, \hat{\beta}_g)$, $g = 1, \ldots, G$. The normal-based $100(1-\alpha)\%$ bootstrap interval for $\pi_1$ is
$$\left[ \bar{\pi}_1 - z_{\alpha/2}\, \widehat{se}(\pi_1),\ \bar{\pi}_1 + z_{\alpha/2}\, \widehat{se}(\pi_1) \right],$$
where $\bar{\pi}_1 = (\sum_{g=1}^{G} \hat{\pi}_{1g})/G$ and $\widehat{se}(\pi_1) = \{[\sum_{g=1}^{G} (\hat{\pi}_{1g} - \bar{\pi}_1)^2]/(G-1)\}^{1/2}$. The $100(1-\alpha)\%$ bootstrap percentile interval for $\pi_1$ is $[\hat{\pi}_{1L}, \hat{\pi}_{1U}]$, where $\hat{\pi}_{1L}$ and $\hat{\pi}_{1U}$ are the $(\alpha/2)G$-th and $(1-\alpha/2)G$-th order statistics of $\{\hat{\pi}_{1g}\}_{g=1}^{G}$. The confidence intervals and empirical standard errors for the other parameters can be calculated in a similar way, and the results are reported in Table 8.
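Given the $G$ bootstrap estimates, both intervals can be computed as sketched below. This is a hypothetical helper of ours: in the paper each $\hat{\pi}_{1g}$ comes from refitting a resampled dataset, whereas here the estimates are supplied by the caller, and `np.quantile` interpolates rather than taking the exact $(\alpha/2)G$-th order statistic.

```python
import numpy as np
from statistics import NormalDist

def bootstrap_intervals(pi1_boot, alpha=0.05):
    """Normal-based and percentile bootstrap intervals for pi_1.

    pi1_boot : array of G bootstrap estimates of pi_1 (supplied by caller).
    """
    G = len(pi1_boot)
    mean = pi1_boot.mean()
    se = np.sqrt(np.sum((pi1_boot - mean) ** 2) / (G - 1))   # bootstrap s.e.
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)              # z_{alpha/2}
    normal_ci = (mean - z * se, mean + z * se)
    # percentile interval (interpolated, approximating the order statistics)
    pct_ci = tuple(np.quantile(pi1_boot, [alpha / 2.0, 1.0 - alpha / 2.0]))
    return normal_ci, pct_ci
```

The same helper applies unchanged to any scalar parameter, e.g., each component of $\hat{\beta}_m$.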

Author Contributions

Data curation, X.H. and J.X.; Formal analysis, X.H.; Funding acquisition, J.X.; Investigation, J.X.; Methodology, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very grateful to Marianne Jonker, D. I. Boomsma, and Aad van der Vaart for sharing the migraine data from the Netherlands Twin Registry.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McLachlan, G.J.; Lee, S.X.; Rathnayake, S.I. Finite mixture models. Annu. Rev. Stat. Its Appl. 2019, 6, 355–378. [Google Scholar] [CrossRef]
  2. Everitt, B. Finite Mixture Distributions; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  3. Lindsay, B.G. Mixture Models: Theory, Geometry and Applications; NSF-CBMS Regional Conference Series in Probability and Statistics; JSTOR: New York, NY, USA, 1995; pp. 1–163. [Google Scholar]
  4. Titterington, D.M.; Smith, A.F.M.; Makov, U.E. Statistical Analysis of Finite Mixture Distributions; John Wiley & Sons: Chichester, UK; New York, NY, USA, 1985. [Google Scholar]
  5. McLachlan, G.; Chang, S. Mixture modelling for cluster analysis. Stat. Methods Med. Res. 2004, 13, 347–361. [Google Scholar] [CrossRef] [PubMed]
  6. Aslam, M.; Yousaf, R.; Ali, S. Two-Component Mixture of Transmuted Fréchet Distribution: Bayesian Estimation and Application in Reliability. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2021, 91, 309–336. [Google Scholar] [CrossRef]
  7. Rachid, A.; Naima, B. The Weibull log-logistic mixture distributions: Model, theory and application to lifetime data. Qual. Reliab. Eng. Int. 2021, 37, 1599–1627. [Google Scholar] [CrossRef]
  8. Sindhu, T.N.; Hussain, Z.; Aslam, M. Parameter and reliability estimation of inverted Maxwell mixture model. J. Stat. Manag. Syst. 2019, 22, 459–493. [Google Scholar] [CrossRef]
  9. Lindsey, J.C.; Ryan, L.M. Methods for interval-censored data. Stat. Med. 1998, 17, 219–238. [Google Scholar] [CrossRef]
  10. Zhang, Z.; Sun, J. Interval censoring. Stat. Methods Med. Res. 2010, 19, 53–70. [Google Scholar] [CrossRef]
  11. Sun, J. The Statistical Analysis of Interval-Censored Failure Time Data; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3. [Google Scholar]
  12. Turnbull, B.W. Nonparametric estimation of a survivorship function with doubly censored data. J. Am. Stat. Assoc. 1974, 69, 169–173. [Google Scholar] [CrossRef]
  13. Rabinowitz, D.; Tsiatis, A.; Aragon, J. Regression with interval-censored data. Biometrika 1995, 82, 501–513. [Google Scholar] [CrossRef]
  14. Komárek, A. A new R package for Bayesian estimation of multivariate normal mixtures allowing for selection of the number of components and interval-censored data. Comput. Stat. Data Anal. 2009, 53, 3932–3947. [Google Scholar] [CrossRef]
  15. Jaspers, S.; Aerts, M.; Verbeke, G.; Beloeil, P.A. A new semi-parametric mixture model for interval censored data, with applications in the field of antimicrobial resistance. Comput. Stat. Data Anal. 2014, 71, 30–42. [Google Scholar] [CrossRef]
  16. Peng, Y.; Dear, K.B. A nonparametric mixture model for cure rate estimation. Biometrics 2000, 56, 237–243. [Google Scholar] [CrossRef] [PubMed]
  17. Altstein, L.; Li, G. Latent subgroup analysis of a randomized clinical trial through a semiparametric accelerated failure time mixture model. Biometrics 2013, 69, 52–61. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Wu, R.F.; Zheng, M.; Yu, W. Subgroup analysis with time-to-event data under a logistic-Cox mixture model. Scand. J. Stat. 2016, 43, 863–878. [Google Scholar] [CrossRef]
  19. McLachlan, G.J.; Peel, D. Finite Mixture Models; Wiley: New York, NY, USA, 2000. [Google Scholar]
  20. Ma, L.; Hu, T.; Sun, J. Cox regression analysis of dependent interval-censored failure time data. Comput. Stat. Data Anal. 2016, 103, 79–90. [Google Scholar] [CrossRef] [Green Version]
  21. McLachlan, G.J.; Krishnan, T. The EM Algorithm and Extensions; Wiley: New York, NY, USA, 1997. [Google Scholar]
  22. Boomsma, D.I.; De Geus, E.J.; Vink, J.M.; Stubbe, J.H.; Distel, M.A.; Hottenga, J.J.; Posthuma, D.; Van Beijsterveldt, T.C.; Hudziak, J.J.; Bartels, M.; et al. Netherlands Twin Register: From twins to twin families. Twin Res. Hum. Genet. 2006, 9, 849–857. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The dotted and solid lines plot the true and estimated baseline cumulative hazard functions, respectively. The estimated baseline cumulative hazard function is the empirical average of the estimated baseline cumulative hazard functions based on 50 replications with $(n, m) = (200, 4)$ or $(400, 2)$ in Example 3.
Figure 2. The estimated baseline cumulative hazard functions for the migraine data in the optimal model with three subgroups.
Table 1. The sample mean, median, and standard error (s.d.) of $\hat{M}$ and the empirical percentage (Per) of $\hat{M}$ equal to the true number of subgroups based on 100 replications in Example 1.
(n, m)      Mean   Median   s.d.   Per
(400, 4)    3      3        0      1
(800, 2)    3      3        0      1
Table 2. The empirical bias and sample standard error (s.d.) of the estimators $\hat{\pi}_1$, $\hat{\pi}_2$, $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$ based on 100 replications in Example 1.
(n, m)            $\pi_1$   $\pi_2$   $\beta_{11}$  $\beta_{12}$  $\beta_{21}$  $\beta_{22}$  $\beta_{31}$  $\beta_{32}$
(400, 4)  True    1/3       1/3       0.5           3             2             1             2             3
          Bias    0.0210    0.0123    0.0194        0.1204        0.2007        0.1023        0.1131        0.0868
          s.d.    0.0425    0.0278    0.1823        0.3687        0.2379        0.1926        0.2407        0.2273
(800, 2)  True    1/3       1/3       0.5           3             2             1             2             3
          Bias    0.0192    0.0154    0.1039        0.1790        0.3054        0.1789        0.2167        0.1255
          s.d.    0.0487    0.0302    0.2097        0.4058        0.3331        0.2723        0.2745        0.2339
Table 3. The sample mean, median, and standard error (s.d.) of $\hat{M}$ and the empirical percentage (Per) of $\hat{M}$ equal to the true number of subgroups based on 100 replications in Example 2.
( n , m ) MeanMedians.d.Per
n = ( 400 , 3 ) 2.0220.140.98
n = ( 600 , 2 ) 2201
Table 4. The empirical bias and standard error (s.d.) of the estimators π ^ 1 , β ^ 1 , and β ^ 2 based on 100 replications in Example 2.
(n, m)                π1        β11       β12       β13       β21       β22       β23
(400, 3)   True       0.5       -0.5      -1        -2        0.5       1         2
           Bias       0.0068    -0.0173   -0.0395   -0.0663   -0.0124   -0.0431   -0.1016
           s.d.       0.0313    0.2585    0.2483    0.2980    0.1174    0.1603    0.2289
(600, 2)   True       0.5       -0.5      -1        -2        0.5       1         2
           Bias       -0.0048   -0.0253   -0.0467   -0.0983   -0.0299   -0.0528   -0.1121
           s.d.       0.0311    0.2415    0.2617    0.2921    0.1525    0.1840    0.2219
Table 5. The sample mean, median, and standard error (s.d.) of M ^ and the empirical percentage (per) of M ^ equal to the true number of subgroups M based on 100 replications in Example 3.
(n, m)     Mean   Median   s.d.   Per
(200, 4)   1      1        0      1
(400, 2)   1      1        0      1
Table 6. The empirical bias and sample standard error (s.d.) of the estimators β̂11, β̂12, and Λ̂0(8) based on 100 replications in Example 3.
(n, m)                β11      β12       Λ0(8)
(200, 4)   True       1        3         4
           Bias       0.0021   -0.0074   -0.0710
           s.d.       0.1180   0.2327    0.6383
(400, 2)   True       1        3         4
           Bias       0.0099   0.0166    -0.2143
           s.d.       0.1447   0.2348    0.6849
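The Bias and s.d. rows reported in Tables 2, 4, and 6 are the usual Monte Carlo summaries over the 100 replications. A minimal sketch, with synthetic replication estimates standing in for the actual simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)

true_beta = 1.0                       # e.g., the true beta_11 in Example 3
# synthetic stand-in for the 100 replication estimates of beta_11
estimates = true_beta + rng.normal(0.0, 0.12, size=100)

bias = estimates.mean() - true_beta   # empirical bias
sd = estimates.std(ddof=1)            # sample standard error (s.d.)
```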
Table 7. Estimation results for migraine data with M = 1 , 2 , 3 , 4 : the number of subgroups (M), the maximum log-likelihood values (LL), the BIC values (BIC), and the estimated parameters.
M   LL          BIC         Estimated Parameters
1   -6600.132   13,218.23   β̂ = (0.5325, 0.1151)
2   -6579.397   13,194.72   π̂1 = 0.2003, β̂1 = (0.3968, 0.1302)
                            π̂2 = 0.7997, β̂2 = (0.6769, 0.2541)
3   -6559.832   13,173.55   π̂1 = 0.1881, β̂1 = (-0.9552, 0.1364)
                            π̂2 = 0.0383, β̂2 = (0.3508, 0.5495)
                            π̂3 = 0.7736, β̂3 = (-0.4493, -0.2817)
4   -6597.269   13,266.39   π̂1 = 0.0211, β̂1 = (0.3509, 0.5877)
                            π̂2 = 0.0193, β̂2 = (1.1539, 0.3783)
                            π̂3 = 0.7893, β̂3 = (0.2949, 0.2022)
                            π̂4 = 0.1703, β̂4 = (0.5758, 0.1578)
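The BIC column can be checked against the LL column: the reported values agree with BIC = -2·LL + d·log(N), taking d = 2M regression parameters and N = 7950 individuals (3975 twin pairs with two members each). This decomposition is inferred from the reported numbers, not stated in the table:

```python
import math

# log-likelihood values for M = 1, 2, 3, 4 as reported in Table 7
ll = {1: -6600.132, 2: -6579.397, 3: -6559.832, 4: -6597.269}
N = 7950                # assumed: 3975 twin pairs, two members each
bic = {m: -2.0 * v + 2 * m * math.log(N) for m, v in ll.items()}

best_m = min(bic, key=bic.get)   # BIC selects the three-subgroup model
```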
Table 8. The estimated parameters for the optimal model.
Parameter   SE       95% Bootstrap CI (normal)   95% Bootstrap CI (percentile)
π1          0.0307   [0.1174, 0.2379]            [0.1116, 0.2374]
π2          0.0044   [0.0231, 0.0404]            [0.0232, 0.0406]
π3          0.0301   [0.7315, 0.8495]            [0.7058, 0.8308]
β11         0.2428   [-1.3256, -0.3735]          [-1.3017, -0.5147]
β12         0.1384   [-0.1210, 0.4218]           [-0.1386, -0.1386]
β21         0.0613   [0.2426, 0.4829]            [0.2412, 0.4811]
β22         0.0571   [0.4279, 0.6518]            [0.4266, 0.6503]
β31         0.0584   [-0.5606, -0.3314]          [-0.5538, -0.3549]
β32         0.0622   [-0.3997, -0.1557]          [-0.4247, -0.1747]
Notes: SE, the empirical standard error based on the bootstrap samples; the normal-based and percentile bootstrap CIs are given in the third and fourth columns, respectively.
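The two interval types in Table 8 differ only in how the bootstrap replicates are summarized: the normal-based CI is θ̂ ± 1.96·SE, with SE the bootstrap standard error, while the percentile CI takes the 2.5% and 97.5% empirical quantiles. A sketch with synthetic replicates (the point estimate and spread are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_hat = 0.35                  # illustrative point estimate (not from the paper)
# synthetic stand-in for 1000 bootstrap replicates of theta_hat
boot = theta_hat + rng.normal(0.0, 0.06, size=1000)

se = boot.std(ddof=1)                                        # bootstrap SE
ci_normal = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)   # normal-based CI
ci_pct = tuple(np.quantile(boot, [0.025, 0.975]))            # percentile CI
```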
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Huang, X.; Xu, J. Subgroup Identification and Regression Analysis of Clustered and Heterogeneous Interval-Censored Data. Mathematics 2022, 10, 862. https://doi.org/10.3390/math10060862