Article

Model Selection Path and Construction of Model Confidence Set under High-Dimensional Variables

1
Frontiers Science Center for Nonlinear Expectations (Ministry of Education), Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao 266237, China
2
School of Mathematics, Shandong University, Jinan 250100, China
3
Department of Statistics, University of California, Davis, CA 95616, USA
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 664; https://doi.org/10.3390/math12050664
Submission received: 28 December 2023 / Revised: 17 February 2024 / Accepted: 22 February 2024 / Published: 24 February 2024
(This article belongs to the Section Probability and Statistics)

Abstract
Model selection uncertainty has drawn considerable attention from academics recently because it significantly affects parameter estimation and prediction. Scholars currently address and quantify this uncertainty by focusing on model combining and model confidence sets. In this paper, we present a new approach for building model confidence sets, which we call AMac, and we establish a theoretical lower bound on the confidence level of the model confidence sets that AMac constructs. Furthermore, we discuss why existing model confidence set construction methods become difficult to implement when dealing with high-dimensional variables, and we propose building model selection paths (MSP) to address this problem. We develop an algorithm for building MSP and demonstrate its effectiveness by utilizing the theories of the adaptive lasso and lars. We perform an extensive set of simulation experiments to compare the performances of the Mac and AMac methods. According to the results, AMac is more stable when there are fluctuations in noise levels. In particular, the model confidence sets built by AMac achieve coverage rates closer to the desired confidence level, especially in the presence of high noise. To further confirm that MSP can successfully generate model confidence sets that maintain the given confidence level as the sample size increases, we conduct extensive simulation tests with high-dimensional variables. Ultimately, we hope that the strategies and concepts discussed in this work will benefit subsequent research on the uncertainty of model selection.

1. Introduction

When conducting modeling and analysis on a dataset in real-world applications, data analysts frequently run into several candidate models. In such situations, the conventional method is to use an appropriate model selection procedure to choose the best model among the options based on specific criteria [1]. Even though the theory and methodology for model selection have advanced significantly [2], it still presents difficulties when data are sparse. Scholars have focused a great deal of attention on uncertainties that result from even minor perturbations or changes in the data, which can significantly influence the model selection results [3,4]. When modeling and analyzing real data, it is important to measure or improve the uncertainty associated with model selection to avoid overidealized estimation or prediction results resulting from ignoring the uncertainty in the model selection process [5,6].
To deal with the uncertainty in model selection, previous researchers mostly used the model combining (or averaging) strategy. By using the models’ posterior probabilities or bootstrap weights, this method quantifies the uncertainty in the model selection process and reduces the effect of data perturbations by combining candidate models in a weighted manner. Adaptive regression by mixing (ARM) [7,8] and conventional Bayesian model averaging (BMA) [9,10] are two examples of this technique. Model combining techniques improve estimation reliability, but they are frequently computationally expensive and complex, and they produce results that are difficult to interpret [11]. The model confidence set (MCS) was first proposed by Hansen et al. in 2011 [12]. It is a set of candidate models that contains the “best model” with a specified degree of confidence. Like confidence intervals in parameter estimation, MCS offers a measure of uncertainty in model selection. An advantage of MCS is that it recognizes the limitations of the data: when data are abundant, MCS typically contains fewer models; conversely, when the data carry little information, MCS incorporates more models to offer more useful information, leading to a more trustworthy subsequent analysis [13]. Because model confidence sets are easy to interpret and make reliable use of the sample data, a growing number of researchers are developing them to quantify the uncertainty of model selection.
Following the introduction of the notion of MCS, Hansen et al. introduced a construction technique based on a sequence of equivalence tests and elimination rules [12]. Ferrari et al. (2015) extended the MCS concept to the linear regression model, yielding variable selection confidence sets (VSCSs) [14]. They offered a technique for choosing significant variables by defining lower boundary models (LBMs) using particular subsets of the VSCS and proposed an F-test-based VSCS construction method. Zheng et al. (2019a) presented a technique for building VSCSs using likelihood ratio tests (LRT) and applied it in the biomedical research domain [15]. In the same year, Zheng et al. (2019b) gave the first application of the LRT and LBM techniques to logistic regression models, analyzing the age-related macular degeneration (AMD) dataset [16]. Liu et al. (2021) presented an MCS construction approach based on a consistent model selection criterion and the bootstrap method. They called this procedure Mac because its crucial step is to Make A Cut after ordering the candidate models [17]. This study, motivated by model combining and Mac, refines the Mac method from the standpoint of model averaging. We call this approach average Mac (AMac). AMac has a greater model coverage rate and is less impacted by noise than Mac.
There is an increasing quantity of high-dimensional data in both everyday life and scientific study due to the rapid growth of information technology and advances in data gathering, storage, and processing [18]. Investigating how to apply VSCS and MCS construction techniques to high-dimensional data is therefore of great significance. To the best of our knowledge, however, scenarios with a large number of candidate models caused by an increase in variables are beyond the scope of the widely used approaches (MCS, LBM, LRT, Mac) for building model confidence sets. While Ferrari et al. suggested applying variable selection techniques to screen variables first, narrowing down the number of candidate models before building VSCSs [14], this method still runs into computational issues when the number of true variables is large. Li et al. (2019) [19] proposed the MCB concept, which quantifies the uncertainty of variable selection by providing lower and upper bound sets for the variables, and introduced a bootstrap-based method for constructing MCB within a linear model framework. Recently, Li et al. (2023) [20] enhanced the MCB algorithm by introducing NMCS. They investigated the performance of MCB and NMCS in high-dimensional variable scenarios and extended the NMCS algorithm to generalized linear models. It should be noted that, although MCB is less affected by increases in the number of variables, it differs from MCS in definition: MCS focuses on model selection, whereas MCB emphasizes variable selection. The key to constructing MCS lies in selecting good models rather than good variables. In light of this, this study presents the idea of a model selection path (MSP) to solve the problem of an excessive number of candidate models arising from high-dimensional variables. In essence, the MSP is a filtered set of candidate models.
By converting the exponential growth of the number of candidate models in the number of variables into a linear relationship, the MSP eliminates the problem of an excessive number of candidate models brought on by variable increases. Combining the MSP with a standard MCS construction approach makes it possible to effectively build MCS under high-dimensional variables. We present the construction of the MSP under high-dimensional variables in the context of the linear regression model. Through simulations, we found that combining the MSP method with traditional MCS construction techniques can greatly increase the average coverage rate of MCS.
This paper aims to achieve three goals. First, we propose the AMac method, which generates MCS by combining Mac with model averaging, and we provide a theoretical lower bound on the confidence level of the MCS established by AMac. Second, we propose the construction of the MSP as a solution to the computational challenges faced when building MCS with high-dimensional variables. By merging the lars [21] algorithm with the adaptive lasso [22] approach, we provide a specific methodology for building the MSP and theoretically demonstrate its efficacy. Finally, to evaluate the finite sample properties of the MSP technique and the AMac method presented in this work, we carry out comprehensive simulation tests. We also apply our techniques to the “diabetes” dataset.
The remaining sections are arranged as follows: Section 2 presents the AMac approach for building MCS in its first part and the concrete algorithm for building the MSP in its second part. Section 3 presents the lower bound for the confidence level of the MCS constructed by AMac in its first part and establishes the validity of the MSP in its second part. The weight selection process of the AMac algorithm is introduced in Section 4. In Section 5, numerous simulation experiments are carried out to confirm the finite sample properties of the proposed approaches. We apply the techniques discussed in this article to the “diabetes” dataset in Section 6. Section 7 concludes this paper with further discussion and an overview of the techniques and findings.

2. Methods

2.1. AMac: Constructing MCS

Given a set of samples $(X, Y)$ and a set of candidate models $\mathcal{M} = \{M_1, \ldots, M_N\}$, where $Y = (y_1, y_2, \ldots, y_n)$, $X = (x_1, x_2, \ldots, x_p)$, with $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$. We assume that the sample is generated by the true model $(M_{opt}, \psi_{opt})$, where $\psi_{opt}$ is the true model parameter and the probability of $M_{opt}$ being in the candidate set is $P(M_{opt} \in \mathcal{M}) = 1$.
The objective is to determine which model $M \in \mathcal{M}$ is most likely to have produced the data $(X, Y)$. To achieve this, we utilize a criterion function $f_M(X, Y)$ to differentiate between the models in $\mathcal{M} = \{M_1, \ldots, M_N\}$; the BIC criterion function is one example. In general, the smaller $f_M(X, Y)$ is, the more likely the model $M$ is to be $M_{opt}$ according to the criterion $f$. Typically, the model with the lowest value of $f$ is chosen as $M_{opt}$. However, due to incomplete data and the inherent uncertainty of the criterion $f$, this approach may not always produce satisfactory results [23]. To solve this problem, the idea is to identify several models from $\mathcal{M}$ that together make up a model set $\mathcal{M}^{*}$, such that $\mathcal{M}^{*}$ covers the true model $M_{opt}$ with the desired probability. The mathematical representation of this is as follows:
$$P(M_{opt} \in \mathcal{M}^{*}) \ge 1 - \alpha. \qquad (1)$$
Here, $\alpha$ is a predetermined significance level, and $\mathcal{M}^{*}$ represents the model confidence set (MCS) with a confidence level of $1 - \alpha$. To minimize the number of models in $\mathcal{M}^{*}$, we need to select those models with a higher probability of being $M_{opt}$ to form $\mathcal{M}^{*}$.
Assuming that a smaller value of $f$ denotes a better model, we can begin by computing the criterion values for each model in $\mathcal{M}$, denoted as $f_{M_i}(X, Y)$, $i = 1, \ldots, N$, and then arrange the models in $\mathcal{M}$ in ascending order of these values. Let the rearranged models be the following:

$$\mathcal{M}_f = \{\hat{M}_1^0, \hat{M}_2^0, \ldots, \hat{M}_N^0\}.$$

We have the following:

$$f_{\hat{M}_1^0}(X, Y) \le f_{\hat{M}_2^0}(X, Y) \le \cdots \le f_{\hat{M}_N^0}(X, Y).$$

According to our assumption, we expect to have the following:

$$P(M_{opt} = \hat{M}_1^0) \ge P(M_{opt} = \hat{M}_2^0) \ge \cdots \ge P(M_{opt} = \hat{M}_N^0). \qquad (2)$$
If Equation (2) holds under the criterion $f$, we can attempt to form a model confidence set $\mathcal{M}^{*}$ by choosing the first $k$ models, where $\mathcal{M}^{*} = \{\hat{M}_1^0, \hat{M}_2^0, \ldots, \hat{M}_k^0\}$ and $\mathcal{M}^{*}$ satisfies (1). This is the rationale for the Mac approach [17], which entails making a cut (Mac) after estimating the probability that each of the sorted models equals the true model.
In real-world applications, it is frequently difficult for the criterion $f$ to fully satisfy (2). However, if we can calculate the probabilities in Equation (2), we can sort the models in descending order of these probabilities to obtain the following:

$$\mathcal{M}_{ff} = \{\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_N^0\},$$

and we have the following:

$$P(M_{opt} = \tilde{M}_1^0) \ge P(M_{opt} = \tilde{M}_2^0) \ge \cdots \ge P(M_{opt} = \tilde{M}_N^0).$$
Then, we can select some models from $\mathcal{M}_{ff}$ to form a model confidence set $\mathcal{M}^{*}$ that satisfies Equation (1). Let $k$ be the smallest integer that satisfies Equation (3), which is as follows:

$$P(M_{opt} \in \{\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_k^0\}) = P(M_{opt} = \tilde{M}_1^0) + \cdots + P(M_{opt} = \tilde{M}_k^0) = \sum_{l=1}^{k} P(M_{opt} = \tilde{M}_l^0) \ge 1 - \alpha. \qquad (3)$$
We obtain a minimal $1 - \alpha$ model confidence set for $M_{opt}$ as follows:

$$\mathcal{M}_{1-\alpha}^{*} = \{\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_k^0\}.$$
It is clear that the crucial step in obtaining the minimal $1 - \alpha$ model confidence set for $M_{opt}$ is to acquire the probability set:

$$\{P(M_{opt} = \hat{M}_1^0), P(M_{opt} = \hat{M}_2^0), \ldots, P(M_{opt} = \hat{M}_N^0)\}. \qquad (4)$$
Referring to the Mac method, we can use the bootstrap method to simulate the probabilities in Equation (4) [24]. Assuming that $(M_{opt}, \psi_{opt})$ is known, we can generate $B$ sets of data $Y^{[b]}$ from $(M_{opt}, \psi_{opt}, X)$. By using the new data $(X, Y^{[b]})$ and the criterion $f$, we can rank the candidate models to obtain $\hat{M}_l^{[b]}$ for $l = 1, 2, \ldots, N$ and calculate $I(\hat{M}_l^{[b]} = M_{opt})$, where $I(\cdot)$ represents the indicator function. Then, for any $l \in \{1, \ldots, N\}$, we have the following:
$$P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt}) \approx P(M_{opt} = \hat{M}_l \mid M_{opt}, \psi_{opt}) = \frac{1}{B} \sum_{b=1}^{B} I(M_{opt} = \hat{M}_l^{[b]} \mid M_{opt}, \psi_{opt}) + o_p(1).$$
Here, $\hat{M}_l$ represents the $l$th-ordered model after reordering the models in $\mathcal{M}$ under the bootstrap data. However, in reality, we do not know $(M_{opt}, \psi_{opt})$; we only know that $M_{opt} \in \{\hat{M}_1^0, \hat{M}_2^0, \ldots, \hat{M}_N^0\}$. Therefore, according to the law of total probability, for any $l \in \{1, \ldots, N\}$, we have the following:
$$\begin{aligned} P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt}) &= P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt}) \times \sum_{i=1}^{N} I(\hat{M}_i^0 = M_{opt}) \\ &= P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt})\, I(\hat{M}_1^0 = M_{opt}) + \cdots + P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt})\, I(\hat{M}_N^0 = M_{opt}) \\ &\approx P(\hat{M}_1^0 = \hat{M}_l \mid \hat{M}_1^0, \hat{\psi}_1^0)\, I(\hat{M}_1^0 = M_{opt}) + \cdots + P(\hat{M}_N^0 = \hat{M}_l \mid \hat{M}_N^0, \hat{\psi}_N^0)\, I(\hat{M}_N^0 = M_{opt}), \end{aligned}$$

where $\hat{\psi}_i^0$, $i = 1, \ldots, N$, is the estimated parameter corresponding to the model $\hat{M}_i^0$.
It can be observed that the probability $P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt})$ can be approximated as a weighted average of the bootstrap probabilities corresponding to each model, $P(\hat{M}_1^0 = \hat{M}_l \mid \hat{M}_1^0, \hat{\psi}_1^0), \ldots, P(\hat{M}_N^0 = \hat{M}_l \mid \hat{M}_N^0, \hat{\psi}_N^0)$. However, in this case, all the weight is concentrated on a single model $\hat{M}_i^0$, while the weights for the other models $\hat{M}_j^0$, $j = 1, \ldots, N$, $j \ne i$, are all zero. Note that the Mac method proposed by Liu et al. [17] approximates this probability by $P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt}) \approx P(\hat{M}_1^0 = \hat{M}_l \mid \hat{M}_1^0, \hat{\psi}_1^0)$, which is equivalent to setting $I(\hat{M}_1^0 = M_{opt}) = 1$ and $I(\hat{M}_i^0 = M_{opt}) = 0$ for $i \ne 1$. This approximation yields good results only when the criterion $f$ is relatively accurate (i.e., when it ranks the true model first). Hence, the efficacy of the Mac method depends strongly on the accuracy of the criterion $f$: when data fluctuations increase or the accuracy of $f$ decreases, the effectiveness of the Mac method diminishes significantly. To mitigate the impact of data fluctuations and of the criterion $f$, we draw inspiration from model averaging and approximate the probability using Equation (6):
$$P(M_{opt} = \hat{M}_l^0 \mid M_{opt}, \psi_{opt}) \approx P(\hat{M}_1^0 = \hat{M}_l \mid \hat{M}_1^0, \hat{\psi}_1^0)\, a_1(\hat{M}_1^0) + \cdots + P(\hat{M}_N^0 = \hat{M}_l \mid \hat{M}_N^0, \hat{\psi}_N^0)\, a_N(\hat{M}_N^0), \qquad (6)$$
where $l \in \{1, \ldots, N\}$, $a_l(\hat{M}_l^0)$ is the weight associated with model $\hat{M}_l^0$, and $\sum_{i=1}^{N} a_i(\hat{M}_i^0) = 1$. When substantial data fluctuations cause the criterion $f$ to make incorrect judgments, utilizing information from multiple models in Equation (6) mitigates the impact of these misjudgments, leading to a more effective approximation.
It can be seen that, given $a_i(\hat{M}_i^0)$, $i = 1, \ldots, N$, the key to calculating the probabilities in Equation (4) is to simulate $\{P(\hat{M}_i^0 = \hat{M}_l \mid \hat{M}_i^0, \hat{\psi}_i^0),\ i, l = 1, \ldots, N\}$. Equation (6) may then be used to compute the probabilities in Equation (4). The calculated probabilities are then sorted in descending order to obtain $\{P(M_{opt} = \tilde{M}_1^0), P(M_{opt} = \tilde{M}_2^0), \ldots, P(M_{opt} = \tilde{M}_N^0)\}$. In the end, we find the required value of $k$ using Equation (3), and as a result, we obtain a minimal $1 - \alpha$ confidence set for $M_{opt}$ as $\{\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_k^0\}$. The specific algorithm for constructing MCS using AMac is shown in Algorithm 1.
The key step of Algorithm 1 is to make a cut on the model sequence $\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_N^0$ according to the bootstrap probabilities. Since our method combines the idea of model averaging, we name it average Mac (AMac).
Algorithm 1 Constructing MCS using AMac
1:
Using the data $(X, Y)$ and a criterion $f$, perform parameter estimation and ranking of the models to obtain the ordered models $\mathcal{M}_f = \{\hat{M}_1^0, \ldots, \hat{M}_N^0\}$ and their corresponding parameters $(\hat{M}_i^0, \hat{\psi}_i^0)$, $i = 1, \ldots, N$.
2:
Choose a set of probability weights $a_i$ for the models $\hat{M}_i^0$, $i = 1, \ldots, N$, satisfying $\sum_{i=1}^{N} a_i = 1$.
3:
Keep the data $X$ unchanged, and generate new data $Y_i^{[b]}$, $b = 1, \ldots, B$, under each of the models $(\hat{M}_i^0, \hat{\psi}_i^0)$, $i = 1, \ldots, N$, respectively.
4:
Calculate the values of $f_M(X, Y)$ for each candidate model in the candidate model set $\mathcal{M}$ using $(X, Y_i^{[b]})$ and sort them in ascending order:

$$f_{\hat{M}_{i,1}^{[b]}}(X, Y_i^{[b]}) \le f_{\hat{M}_{i,2}^{[b]}}(X, Y_i^{[b]}) \le \cdots \le f_{\hat{M}_{i,N}^{[b]}}(X, Y_i^{[b]}),$$

resulting in an ordered sequence of models: $\hat{M}_{i,1}^{[b]}, \hat{M}_{i,2}^{[b]}, \ldots, \hat{M}_{i,N}^{[b]}$.
5:
Repeat steps 3 and 4 for i = 1 , 2 , , N and b = 1 , 2 , , B .
6:
Calculate the empirical probabilities of $\hat{M}_{i,l} = \hat{M}_i^0$ for $l = 1, \ldots, N$:

$$P(\hat{M}_{i,l} = \hat{M}_i^0) = \frac{1}{B} \sum_{b=1}^{B} I(\hat{M}_{i,l}^{[b]} = \hat{M}_i^0), \quad l, i = 1, \ldots, N.$$
7:
According to Equation (6), calculate, for any $l \in \{1, \ldots, N\}$:

$$P(\hat{M}_l^0 = M_{opt}) \approx P(\hat{M}_l = M_{opt}) \approx P(\hat{M}_{1,l} = \hat{M}_1^0)\, a_1 + P(\hat{M}_{2,l} = \hat{M}_2^0)\, a_2 + \cdots + P(\hat{M}_{N,l} = \hat{M}_N^0)\, a_N.$$

Obtain

$$P(\mathcal{M}_f) = \{P(M_{opt} = \hat{M}_1^0), P(M_{opt} = \hat{M}_2^0), \ldots, P(M_{opt} = \hat{M}_N^0)\}.$$
8:
By resorting the candidate models in descending order of $P(\mathcal{M}_f)$, we obtain

$$\mathcal{M}_{ff} = \{\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_N^0\}.$$
9:
By calculating

$$\sum_{l=1}^{k} P(M_{opt} = \tilde{M}_l^0) \ge 1 - \alpha,$$

we determine the minimum value of $k$, and the $1 - \alpha$ confidence set for $M_{opt}$ is $\mathcal{M}^{*} = \{\tilde{M}_1^0, \tilde{M}_2^0, \ldots, \tilde{M}_k^0\}$.
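To make the steps of Algorithm 1 concrete, the following is a minimal Python sketch for a toy family of linear candidate models, using BIC as the criterion $f$, uniform weights $a_i$, and a Gaussian parametric bootstrap. The function names (`bic`, `amac`) and all implementation choices are illustrative assumptions, not the authors' code; in particular, AMac as described would use the weights of Section 4 rather than uniform ones.

```python
import numpy as np

def bic(X, Y, cols):
    """BIC of an OLS fit on predictor columns `cols` (plays the role of f)."""
    n = len(Y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
    rss = float(np.sum((Y - Xs @ beta) ** 2))
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

def amac(X, Y, models, alpha=0.1, B=100, seed=0):
    """Sketch of Algorithm 1: build a 1-alpha MCS for linear candidate models."""
    rng = np.random.default_rng(seed)
    n, N = len(Y), len(models)
    # Step 1: rank candidate models by the criterion f (here BIC, ascending).
    order = np.argsort([bic(X, Y, m) for m in models])
    ranked = [models[i] for i in order]
    # Step 2: probability weights a_i (uniform for simplicity; an assumption).
    a = np.full(N, 1.0 / N)
    # Steps 3-6: bootstrap under each fitted model and record how often
    # model i lands at rank l, estimating P(hat M_{i,l} = hat M_i^0).
    P = np.zeros((N, N))
    for i, m in enumerate(ranked):
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in m])
        beta, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
        resid = Y - Xs @ beta
        sigma = np.sqrt(np.sum(resid ** 2) / max(n - Xs.shape[1], 1))
        for _ in range(B):
            Yb = Xs @ beta + rng.normal(0.0, sigma, n)
            ob = np.argsort([bic(X, Yb, mm) for mm in ranked])
            rank_of_i = int(np.where(ob == i)[0][0])
            P[i, rank_of_i] += 1.0 / B
    # Step 7: weighted average across models (Equation (6)).
    p_opt = a @ P
    # Steps 8-9: sort descending and make the cut once mass reaches 1 - alpha.
    mcs, cum = [], 0.0
    for l in np.argsort(-p_opt):
        mcs.append(ranked[l])
        cum += p_opt[l]
        if cum >= 1 - alpha:
            break
    return mcs
```

For example, with eight candidate subsets of three predictors and a strong signal on the first predictor, the returned set will typically consist of a few models containing that predictor.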

2.2. MSP-{*}: Constructing MCS under High-Dimensional Variables in the Linear Regression Model

Many explanatory variables can result in an overly large candidate model set. In the case of the linear regression model, for instance, there are $N = 2^p$ candidate models if the explanatory variables $X$ have dimension $p$; if the squared and interaction terms of the explanatory variables are also considered, the number of candidate models $N$ becomes even larger. When $N$ is large, AMac encounters two problems: (1) the first step of AMac, which involves sorting based on the criterion $f_M(X, Y)$, becomes computationally infeasible; (2) estimating the probability that each model in $\mathcal{M}$ is the true model using the bootstrap is challenging. To the best of our knowledge, the current approaches for creating model confidence sets (MCS, LRT, LBM, BMS, Mac) all have unreasonably high computational costs when handling a large number of candidate models. Therefore, it is particularly important to propose an effective method for reducing the number of candidate models, especially when there are many explanatory variables.
Ferrari et al. [14] suggested using variable selection methods such as the lasso [25] and SCAD [26] to reduce the number of variables and then applying the LBM method to construct MCS. However, for two key reasons, this strategy does not fully resolve the issue of an excessive number of candidate models induced by high-dimensional variables. First, although the first step reduces the dimensionality $p$ through variable selection, the reduced number of variables $p_r$ may still be relatively large, especially when the number of true variables is itself large. Second, most existing variable selection techniques are unlikely to retain every true variable, so some true variables have likely been left out. Consequently, the model set created from the selected variables may no longer include the true model, which violates the assumption underlying the model confidence set that the true model is contained in the candidate model set.
We have observed that, when constructing MCS, the selection of well-performing models is more crucial than the selection of variables. Motivated by this, we aim to efficiently reduce the number of candidate models by directly selecting high-performing models from the candidate model set to establish a new set. Specifically, we aim to narrow down the initial candidate model set $\mathcal{M}$ to $\mathcal{M}_r$ while ensuring that $\mathcal{M}_r$ satisfies the following two properties:
Property 1.
The number of candidate models in $\mathcal{M}_r$ is relatively small and does not significantly increase as $p$ increases.
Property 2.
$\mathcal{M}_r$ should satisfy $P(M_{opt} \in \mathcal{M}_r) = 1$, or at least ensure this property when the sample size is sufficiently large, i.e., $\lim_{n \to +\infty} P(M_{opt} \in \mathcal{M}_r^n) = 1$.
We refer to a model set $\mathcal{M}_r$ satisfying these two properties as a model selection path (MSP). Property 1 ensures that, after constructing the MSP, the computational burden of constructing MCS through the MSP in the second step remains manageable. Property 2 requires that the true model is included in the MSP, serving as a prerequisite for constructing MCS through the MSP in the second step. Next, we present the specific method for constructing the MSP in the context of the linear regression model.
Assuming that the data follow a linear regression model,
Y = X β + ϵ ,
where $(Y, X)$ is defined as previously mentioned, $\beta = (\beta_1, \ldots, \beta_p)$ represents the coefficient vector of the independent variables, and $\epsilon = (\epsilon_1, \ldots, \epsilon_n)$ is the random error vector. We assume that the errors $\epsilon_i$ are independently and identically distributed random variables with mean $0$ and variance $\sigma^2$. Additionally, we assume that $\frac{1}{n} X^T X \to C$, where $C$ is a positive definite matrix. Our proposed solution is to construct the MSP by utilizing the “solution path” [27] obtained from the adaptive lasso (Alasso) method [22]. This approach allows us to significantly reduce the number of candidate models.
Let us start by briefly introducing the relevant background on the Alasso and lars [21]. Assume that $\hat{\beta}$ is a $\sqrt{n}$-consistent estimator of $\beta$ (in this paper, we use the least squares estimator $\hat{\beta}^{(ols)}$), choose $\gamma > 0$ (in this paper, $\gamma = 1$), and define the weight vector $\hat{w} = 1 / |\hat{\beta}|^{\gamma}$. Note that $\hat{\beta}^{(ols)}$ and $\gamma = 1$ are the parameter choices commonly used by scholars for the adaptive lasso; these values are also the default settings in the adaptive lasso code available on the web (http://www4.stat.ncsu.edu/~boos/var.select/lasso.adaptive.html) (accessed on 20 July 2023). Furthermore, the specific value of $\gamma$ within the range $(0, \infty)$ does not affect the large sample properties of the MSP constructed by Algorithm 2 (see the proof in Appendix A.2 for details). For the sake of simplicity, we set $\gamma = 1$ in Algorithm 2.
The Alasso estimator β ^ ( n ) is defined as follows:
$$\hat{\beta}^{(n)} = \arg\min_{\beta} \|Y - X\beta\|^2 + \lambda_n \sum_{j=1}^{p} \hat{w}_j |\beta_j|, \qquad (7)$$
where $\lambda_n$ is a tuning parameter. Let $\mathcal{A} = \{j : \beta_j \ne 0\}$ represent the set of variables with nonzero true coefficients and $\mathcal{A}_n = \{j : \hat{\beta}_j^{(n)} \ne 0\}$ represent the set of variables with nonzero coefficients in the Alasso estimate. According to Zou’s proof [22], as the sample size $n$ tends toward infinity, we can choose an appropriate $\lambda_n$ such that $\lim_{n \to +\infty} P(\mathcal{A}_n = \mathcal{A}) = 1$; in this case, the model constructed from the nonzero variables determined by this $\lambda_n$ is the true model. Therefore, if we consider all the models represented by the solutions of Equation (7) with $\lambda_n \in [0, \infty)$ and form a model set $\mathcal{M}_r$, then as $n$ tends toward infinity, $\mathcal{M}_r$ will contain the true model with probability tending to 1. It is known that the lars algorithm can provide all solutions of the Alasso with minimal computational effort. Therefore, we propose using the lars algorithm to generate the solution path of the Alasso, and we use the models $M_i$, $i = 1, \ldots, N$, formed by the nonzero variables in the solutions along the path to form $\mathcal{M}_r$, where $N$ represents the number of steps required by the lars–Alasso algorithm to obtain the Alasso solution path. The construction method of the proposed MSP is presented in Algorithm 2.
Algorithm 2 Alasso–lars (AL): the construction algorithm for MSP
1:
Compute the least squares estimate $\hat{\beta} = (\hat{\beta}_1, \ldots, \hat{\beta}_p)$ under the full variable set, and calculate $\hat{w}_j = 1 / |\hat{\beta}_j|$, $j = 1, \ldots, p$.
2:
Define the rescaled variables $x_j^{*} = x_j / \hat{w}_j$, $j = 1, 2, \ldots, p$.
3:
Solve the lasso problem shown in Equation (8) on the rescaled data $X^{*} = (x_1^{*}, \ldots, x_p^{*})$ using the lars–lasso algorithm:

$$\hat{\beta}^{*} = \arg\min_{\beta} \|Y - X^{*}\beta\|^2 + \lambda_n \sum_{j=1}^{p} |\beta_j|. \qquad (8)$$
4:
Assuming that the lars–lasso algorithm takes a total of $N$ steps to solve Equation (8), save the model $M_i$ consisting of the nonzero variables at each step, and let $\mathcal{M}_r = \{M_i, i = 1, \ldots, N\} = \{M_l', l = 1, \ldots, N'\}$ be the new set of candidate models, where $M_l'$, $l = 1, \ldots, N'$, are mutually distinct models.
Algorithm 2 essentially leverages the consistency of the Alasso in variable selection and the piecewise linearity of the Alasso solution with respect to $\lambda_n$ in the linear regression model [28], whose entire solution path can be efficiently computed using the lars algorithm. In Theorem 2, we prove the effectiveness of the MSP constructed by Algorithm 2.
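Algorithm 2 relies on an exact lars solution path. As a rough, self-contained stand-in, the sketch below approximates the Alasso path by solving the rescaled lasso problem with coordinate descent over a geometric grid of $\lambda$ values and collecting the distinct supports as the MSP. The names `lasso_cd` and `msp_alasso`, the grid size, and the numerical thresholds are illustrative assumptions; a faithful implementation would call a lars solver instead of a grid.

```python
import numpy as np

def lasso_cd(X, Y, lam, n_iter=300):
    """Coordinate-descent solver for 0.5*||Y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with variable j removed, then soft-threshold.
            r = Y - X @ b + X[:, j] * b[j]
            z = float(X[:, j] @ r)
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

def msp_alasso(X, Y, n_lambda=40):
    """Approximate MSP: distinct supports along an adaptive-lasso path."""
    beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)  # step 1: OLS for weights
    w = 1.0 / np.abs(beta_ols)                        # gamma = 1
    Xstar = X / w                                     # step 2: columns x_j / w_j
    lam_max = float(np.max(np.abs(Xstar.T @ Y)))      # all-zero solution above this
    path = []
    for lam in np.geomspace(lam_max, lam_max * 1e-4, n_lambda):  # step 3: grid
        b = lasso_cd(Xstar, Y, lam)
        support = tuple(np.flatnonzero(np.abs(b) > 1e-8))
        if support and support not in path:           # step 4: keep distinct models
            path.append(support)
    return path
```

As the grid sweeps from large to small $\lambda$, variables enter the support one after another, so the returned list typically grows like $O(p)$ rather than $2^p$, which is the point of the MSP.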

3. Theoretical Properties

3.1. Coverage Rate of MCS Constructed by AMac

Let $Y$ denote the original data and $Y_i^{[b]}$, $1 \le b \le B$, $1 \le i \le N$, be the data generated through the bootstrap under $(\hat{M}_i^0, \hat{\psi}_i^0, X)$, where $\hat{\psi}$ represents the estimate of $\psi$. The probability of an event occurring when the underlying distribution is determined by $(M, \psi)$ is denoted as $P(\cdot \mid M, \psi)$. We propose the following hypotheses:
Hypothesis 1.
The model space, $\mathcal{M}$, is finite.
Hypothesis 2.
The bootstrap samples $Y_i^{[b]}$, $1 \le b \le B$, $1 \le i \le N$, are generated independently under $(\hat{M}_i^0, \hat{\psi}_i^0)$.
Hypothesis 3.
There is a constant $c$ such that, for every $k \ge 0$ and fixed vector $\hat{\psi}_{opt}$, we have

$$\left| P(M_{opt} \in \{\hat{M}_1, \ldots, \hat{M}_k\} \mid M_{opt}, \hat{\psi}_{opt}) - P(M_{opt} \in \{\hat{M}_1, \ldots, \hat{M}_k\} \mid M_{opt}, \psi_{opt}) \right| \le c\, \|\hat{\psi}_{opt} - \psi_{opt}\|.$$
Hypothesis 4.
$P(\hat{M}_{i,l} = \hat{M}_i^0 \mid \hat{M}_i^0, \hat{\psi}_i^0) > 0$, $1 \le i, l \le N$.
Assumptions H1–H4 are similar to those proposed by Liu et al. for constructing the lower bound of Mac [17]. However, the key distinction is that Mac’s assumptions are specifically tailored to the model ranked first according to the criterion f. In contrast, the assumptions in this paper apply to the set of alternative models and are not dependent on the initial ranking based on the criterion f.
For any given set of non-negative weights $a_i \ge 0$, $i = 1, 2, \ldots, N$, satisfying $\sum_{i=1}^{N} a_i = 1$, let $k$ denote the smallest integer that satisfies Equation (9):
$$\sum_{l=1}^{k} \sum_{i=1}^{N} P_i(\tilde{M}_l = \hat{M}_i^0)\, a_i = \sum_{l \in \{l_1, l_2, \ldots, l_k\}} \sum_{i=1}^{N} P_i(\hat{M}_l = \hat{M}_i^0)\, a_i = \sum_{l \in \{l_1, l_2, \ldots, l_k\}} \sum_{i=1}^{N} \frac{1}{B} \sum_{b=1}^{B} I(\hat{M}_{i,l}^{[b]} = \hat{M}_i^0)\, a_i \ge 1 - \alpha, \qquad (9)$$

where $P_i(\cdot) = P(\cdot \mid \hat{M}_i^0, \hat{\psi}_i^0)$. Assuming that H1–H4 hold, we can deduce that the model confidence set constructed by Algorithm 1 has a lower confidence bound given by Equation (10).
Theorem 1.
Assuming that H1–H4 hold, let $k$ be determined by Equation (9). As $B \to \infty$, $k$ converges in probability to some integer $k_0$, where the convergence probability depends on the bootstrap distribution under $(\hat{M}_i^0, \hat{\psi}_i^0)$. Furthermore, for this integer $k$, assuming $\sum_{i=1}^{N} a_i P(\hat{M}_i^0 = M_{opt}) > 0$, we have

$$P(M_{opt} \in \{\tilde{M}_1^0, \ldots, \tilde{M}_k^0\}) \ge \frac{1 - \alpha - \sum_{i=1}^{N} a_i P(\hat{M}_i^0 \ne M_{opt}) - \sum_{i=1}^{N} c\, E\|\hat{\psi}_{opt} - \psi_{opt}\|\, a_i I(\hat{M}_i^0 = M_{opt}) - o(1)}{\sum_{i=1}^{N} a_i P(\hat{M}_i^0 = M_{opt})}, \qquad (10)$$

where $o(1)$ goes to $0$ as $B \to \infty$.
Equation (10) shows that the coverage probability of AMac is strongly influenced by the weights $a = (a_1, \ldots, a_N)$. In Section 4, we present a method for selecting these weights.

3.2. The Effectiveness of Constructing MSP Using the Alasso–Lars Algorithm

We will now illustrate that the candidate model set $\mathcal{M}_r$ constructed through Algorithm 2 indeed possesses Properties 1 and 2 of the MSP.
For Property 1, let $N$ represent the total number of steps in the Alasso–lars algorithm. Although determining the exact value of $N$ for any given dataset can be challenging, empirical evidence from Rosset et al. suggests that $N = O(p)$ is typically valid [28]. Let $N' = |\mathcal{M}_r|$ be the number of candidate models in the set $\mathcal{M}_r$. It follows that $N' \le N$, with $N' = N$ only when the lars–lasso algorithm generates a distinct $M_i$ at each step $i = 1, \ldots, N$. When $p$ is relatively large, $N' \le N = O(p)$ is typically much smaller than $2^p$, indicating that the number of candidate models in $\mathcal{M}_r$ is relatively small. Since $N'$ is at most of the same order as $p$, it does not significantly increase as $p$ grows. In conclusion, the set $\mathcal{M}_r$ constructed by Algorithm 2 satisfies Property 1.
Regarding Property 2, we have Theorem 2, which guarantees that, when the sample size is sufficiently large, the set $\mathcal{M}_r$ constructed by Algorithm 2 will contain the true model with probability tending to 1.
Theorem 2.
Let $\mathcal{M}$ be the initial set of candidate models, $M_{opt}$ be the true model, and $\mathcal{M}_r$ be the set of candidate models constructed by Algorithm 2. Then, we have $\lim_{n \to +\infty} P(M_{opt} \in \mathcal{M}_r) = 1$.
In conclusion, the set of candidate models $\mathcal{M}_r$ constructed by Algorithm 2 satisfies both Property 1 and Property 2 of the MSP. Consequently, it can be effectively integrated with existing MCS construction methods to tackle the problem of an overwhelming number of candidate models.

4. Weight Selection

The selection of weights in this paper follows the approach of model averaging, as suggested by Claeskens and Hjort [6]:

$$a_i(M_i) = \frac{\exp(-f_{M_i}(X, Y))}{\sum_{j=1}^{N} \exp(-f_{M_j}(X, Y))}.$$
In particular, when using the BIC criterion for f M i ( X , Y ) , the expression is given by the following:
$$f_{M_i}(X, Y) = -\log\log n \times \mathrm{BIC}_{M_i}.$$
Here, n represents the sample size and B I C M i denotes the BIC value of the model M i . The term log log n in this formula is an empirical value.
When there are a large number of candidate models, bootstrapping the distribution of all models in AMac can be time-consuming. Considering that the weights of most candidate models are close to zero, it is unnecessary to perform bootstrap simulations for models with extremely low probabilities. Instead, it is sufficient to perform bootstrap simulations only for the top K models with relatively higher weights. For the selection of K, assuming that the weights $(a_1, a_2, \dots, a_N)$ have already been sorted in descending order, we provide the empirical formula given by Equation (11),
$$\sum_{i=1}^{K} a_i \ge 1 - \alpha,$$ (11)
where α is a predetermined significance level. After calculating K using Equation (11), we normalize $(a_1, a_2, \dots, a_K)$ and set $(a_{K+1}, a_{K+2}, \dots, a_N)$ to 0. Through experimental comparison, we have observed that implementing bootstrap simulations on the top K models yields probability distributions that are highly similar to those obtained by bootstrapping all models. Therefore, to reduce simulation time, we adopt this approximation of the weights in the implementation of AMac.
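This weighting scheme can be sketched as follows (assuming the sign convention $f_{M_i} = -\log\log n \times \mathrm{BIC}_{M_i}$, so that smaller BIC receives larger weight; the helper name `amac_weights` is ours):

```python
import numpy as np

def amac_weights(bic_values, n, alpha=0.1):
    """Exponential model-averaging weights from BIC values, with top-K
    truncation per Equation (11): keep the smallest K models (by weight rank)
    whose cumulative weight reaches 1 - alpha, renormalize them, and zero out
    the rest."""
    f = -np.log(np.log(n)) * np.asarray(bic_values, dtype=float)
    f -= f.max()                      # stabilize exp() against overflow
    a = np.exp(f) / np.exp(f).sum()   # normalized weights a_i
    order = np.argsort(a)[::-1]       # indices sorted by descending weight
    K = int(np.searchsorted(np.cumsum(a[order]), 1 - alpha) + 1)
    keep = order[:K]
    trimmed = np.zeros_like(a)
    trimmed[keep] = a[keep] / a[keep].sum()  # renormalize the top K weights
    return trimmed, K
```

With one dominant model, K collapses to 1 and all remaining weight concentrates there, which is why bootstrapping only the top K models loses little accuracy.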

5. Simulation

In this section, we validate the numerical performance of AMac and MSP-{·}, where {·} denotes the baseline MCS construction method. Since the MSP in this study is constructed using the Alasso–lars algorithm, we write AL-{·} instead of MSP-{·} in what follows. Since Liu et al. have already compared Mac with MCS, LRT, LBM, and BMS in their study [17], this paper compares AMac exclusively with Mac.

5.1. Simulated Performance of AMac and Mac

Referring to the simulation settings of Mac in Liu et al., we consider a multiple linear regression model:
$$y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{2i}^2 + \epsilon_i, \quad i = 1, 2, \dots, n,$$
where $x_{1i}$, $i = 1, \dots, n$ are generated from the Bernoulli(0.5) distribution, and $x_{2i}$, $i = 1, \dots, n$ are generated from the $N(0, 1)$ distribution. The x's are then fixed throughout the simulation. The errors $\epsilon_i$, $i = 1, \dots, n$ are generated from $N(0, \sigma^2)$. In this setting, there are three explanatory variables $(x_{1i}, x_{2i}, x_{2i}^2)$, and the total number of candidate models is $2^3 = 8$. The true model's coefficient vector is set as $\beta = (1, 1, 1, 0)$; that is, the real model is $y_i = 1 + x_{1i} + x_{2i} + \epsilon_i$, $i = 1, 2, \dots, n$. In the simulation, we consider sample sizes $n = 100, 125, 150, 175, 200$ and noise standard deviations $\sigma = 1.0, 2.0, 3.0$. The number of bootstrap iterations is set to $B = 400$, and the total number of simulations is $TNS = 25{,}000$.
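A minimal sketch of this data-generating design (our own helper, not the authors' code):

```python
import numpy as np

def simulate_data(n=100, sigma=2.0, seed=0):
    """Generate one dataset from the Section 5.1 design:
    y = 1 + x1 + x2 + eps, with x1 ~ Bernoulli(0.5), x2 ~ N(0, 1),
    eps ~ N(0, sigma^2); the candidate regressors are (x1, x2, x2^2)."""
    rng = np.random.default_rng(seed)
    x1 = rng.binomial(1, 0.5, n).astype(float)
    x2 = rng.normal(0.0, 1.0, n)
    X = np.column_stack([x1, x2, x2 ** 2])         # three candidate regressors
    y = 1.0 + x1 + x2 + rng.normal(0.0, sigma, n)  # beta = (1, 1, 1, 0)
    return X, y

X, y = simulate_data()
print(X.shape, y.shape)  # (100, 3) (100,)
```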
To assess the effectiveness of the various techniques, we consider four MCS features: the empirical coverage probability (ECP) of the MCS at a specified confidence level, such as 90% or 95% (that is, α = 0.1 or α = 0.05); the average model count (AM) of the MCS, which is comparable to the length of a confidence interval; the coefficient of variation (CV) of the model count K in the MCS, which assesses the stability of the techniques; and the average coverage rate per model (ECP/AM), which assesses the average efficacy of the models in the MCS. In the simulation, the model selection criterion f is the Bayesian information criterion (BIC) [29]:
$$\mathrm{BIC}(M) = n \log \hat{\sigma}^2 + df(M) \times \log n,$$
where M represents a candidate model; $\hat{\sigma}^2$ is the SSE (sum of squared residuals) divided by n, which is the MLE of $\sigma^2$ under M; and $df(M)$ represents the degrees of freedom of the model M.
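The BIC criterion above can be computed for any candidate support by an OLS fit; the following is an illustrative sketch (the helper `bic` is ours):

```python
import numpy as np

def bic(X, y, support):
    """BIC(M) = n*log(sigma_hat^2) + df(M)*log(n), where sigma_hat^2 = SSE/n
    is the MLE of sigma^2 under model M, and df(M) counts the intercept plus
    the included regressors."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X[:, list(support)]])  # design with intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)            # OLS fit of model M
    sse = np.sum((y - Z @ beta) ** 2)
    return n * np.log(sse / n) + Z.shape[1] * np.log(n)
```

Models that omit a truly active regressor inflate the SSE term, while models that add irrelevant regressors pay the $\log n$ penalty per degree of freedom.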
Table 1 presents the simulation results of AMac and Mac under different parameter settings, where the first column, level, is the confidence level and the second column, n, is the sample size. Overall, AMac and Mac exhibit similar properties. First, as the sample size n increases, ECP increases toward 1 and AM decreases toward 1. This occurs because the model space is discrete, and as n increases the accuracy of the criterion f improves until it can uniquely determine the true model. Second, as the noise level σ decreases, ECP increases toward 1 and AM decreases toward 1, because the magnitude of the noise also affects the accuracy of f, and lower noise levels improve it. Third, the coefficient of variation (CV) of AMac is generally smaller than that of Mac, indicating that AMac has better stability.
To facilitate a more intuitive comparison of AMac and Mac, we created boxplots of ECP and ECP/AM against the sample size n and the noise level σ based on the data in Table 1; these are displayed in Figure 1. Each boxplot contains 50 sets of ECP or ECP/AM, with each set calculated from 500 simulated datasets. On the left, Figure 1a,c,e show the ECP of the two methods for σ = 1.0, 2.0, 3.0, respectively. Under different sample sizes and noise levels, AMac has a higher coverage rate than Mac, and this advantage becomes more pronounced as the noise level increases. On the right, Figure 1b,d,f show the ECP/AM of the two methods for σ = 1.0, 2.0, 3.0, respectively. When the noise is relatively small, Mac has a higher single-model average coverage rate than AMac; as the noise increases, however, AMac attains the higher rate. Finally, Figure 2 shows the changes in ECP with respect to σ for a sample size of n = 200. Judging from the slopes of the lines in Figure 2, the average rate of change of AMac with respect to σ is smaller than that of Mac, indicating that AMac is more robust to noise variations. In summary, AMac surpasses Mac in terms of both higher overall coverage probability and better stability in the face of changes in noise.
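The four summary metrics can be computed from a batch of simulated MCSs as follows (a sketch with a hypothetical helper `mcs_metrics`; each model is represented by its set of included variables):

```python
import numpy as np

def mcs_metrics(mcs_list, true_model):
    """Summarize a batch of simulated MCSs with the four metrics of the text:
    ECP (fraction of MCSs containing the true model), AM (average model
    count), ECP/AM (average per-model coverage), and CV (coefficient of
    variation of the model count)."""
    counts = np.array([len(m) for m in mcs_list], dtype=float)
    ecp = np.mean([true_model in m for m in mcs_list])
    am = counts.mean()
    return {"ECP": ecp, "AM": am, "ECP/AM": ecp / am, "CV": counts.std() / am}

# Three simulated MCSs; the true model is the variable set (1, 2):
sims = [[(1, 2)], [(1, 2), (1, 2, 3)], [(1, 3), (1, 2, 3)]]
print(mcs_metrics(sims, true_model=(1, 2)))
```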

5.2. Simulated Performances of AMac, Mac, AL-AMac, and AL-Mac

First, we showcase the differences in computational speed among AMac, Mac, AL-AMac, and AL-Mac, using simulation settings similar to those described in Section 5.1:
$$y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{2i}^2 + \beta_4 x_{3i} + \beta_5 x_{3i}^2 + \cdots + \beta_p x_{ji} + \epsilon_i, \quad i = 1, 2, \dots, n,$$
where the value of j depends on p; $x_{1i}$, $i = 1, \dots, n$ are generated from the Bernoulli(0.5) distribution; and the remaining regressors $x_{ji}$, $i = 1, \dots, n$ are generated from the $N(0, 1)$ distribution. The x's are fixed throughout the simulation. The errors $\epsilon_i$, $i = 1, \dots, n$ are generated from $N(0, \sigma^2)$. In this setting, there are p explanatory variables $(x_{1i}, x_{2i}, x_{2i}^2, x_{3i}, x_{3i}^2, \dots, x_{ji})$, and the total number of candidate models is $2^p$. The true model's coefficient vector is set as $\beta = (\beta_0, \beta_1, \beta_2, \dots, \beta_p) = (1, 1, 1, 0, \dots, 0)$; that is, the real model is $y_i = 1 + x_{1i} + x_{2i} + \epsilon_i$, $i = 1, 2, \dots, n$. In the simulation, we fix $n = 100$, $B = 400$, and $TNS = 500$. We set $p = 3, 4, \dots, 20$ and record the total time of 500 runs of each of the four methods. Since the running time of AMac and Mac increases rapidly with the number of variables, we limited the simulations to at most 8 variables for AMac and at most 9 variables for Mac. The running times of the four methods are shown in Figure 3.
From Figure 3, it can be observed that as p increases, the running time of the methods exhibits the following trend: AMac has the fastest growth rate, followed closely by Mac. Both methods experience exponential increases in running time. However, when the AL method is applied to both AMac and Mac, the running time is significantly reduced. The running time of AL-AMac and AL-Mac shows an approximate linear increase, indicating that an increase in the number of variables does not have a substantial impact on the computation of the A L { } methods.
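The contrast behind Figure 3 is ultimately the number of candidate models each approach must bootstrap: $2^p$ for the exhaustive methods versus $O(p)$ path models for AL-{·}. A back-of-envelope comparison (not a reproduction of the timed experiment):

```python
def candidate_counts(p):
    """Candidate models examined for p variables: exhaustive all-subsets
    enumeration (2^p, as in Mac/AMac) versus the Alasso-lars path, which
    visits at most O(p) distinct models (the MSP of Section 3)."""
    return 2 ** p, p

for p in (3, 6, 10, 20):
    exhaustive, path = candidate_counts(p)
    print(f"p={p:2d}: 2^p = {exhaustive:7d} vs path ~ {path}")
```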
In addition to computational speed, it is also important to compare the numerical performances of MCS constructed by different methods. Due to the significant increase in running time for Mac and AMac as the dimensionality increases, we will focus on comparing the numerical results of the methods for a variable dimension of p = 6 . The model setup is as follows:
$$y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{2i}^2 + \beta_4 x_{3i} + \beta_5 x_{3i}^2 + \beta_6 x_{4i} + \epsilon_i, \quad i = 1, 2, \dots, n,$$
where $x_{1i}$, $i = 1, \dots, n$ are generated from the Bernoulli(0.5) distribution, and $x_{2i}, x_{3i}, x_{4i}$, $i = 1, \dots, n$ are generated from the $N(0, 1)$ distribution. The x's are fixed throughout the simulation. The errors $\epsilon_i$, $i = 1, \dots, n$ are generated from $N(0, \sigma^2)$. In this setting, there are 6 explanatory variables $(x_{1i}, x_{2i}, x_{2i}^2, x_{3i}, x_{3i}^2, x_{4i})$, and the total number of candidate models is $2^6 = 64$. The true model's coefficient vector is set as $\beta = (\beta_0, \beta_1, \dots, \beta_6) = (1, 1, 1, 0, \dots, 0)$; that is, the real model is $y_i = 1 + x_{1i} + x_{2i} + \epsilon_i$, $i = 1, 2, \dots, n$. As in Section 5.1, we set the sample size to $n = 20, 50, 100, 200$; the number of bootstrap iterations to $B = 400$; the noise level to $\sigma = 1.0, 2.0, 3.0$; and the total number of simulations to $TNS = 500$. The model selection criterion used is still BIC.
We still consider ECP, AM, ECP/AM, and CV as comparison metrics. Additionally, since the AL method depends on the MSP constructed in the first (Alasso) step, we also report the probability that the true model is included in the MSP constructed by Alasso ($ECP_{AL}$) as a comparison metric. The numerical results for the four methods at a confidence level of 90% (α = 0.1) are presented in Table 2.
From Table 2, we observe that, overall, AL-AMac, AL-Mac, AMac, and Mac exhibit similar properties. First, as the sample size n increases, ECP increases toward 1 and AM decreases toward 1. Second, as the noise level σ decreases, ECP increases toward 1 and AM decreases toward 1. Third, the CV for AL-AMac and AL-Mac is lower than that for AMac and Mac, indicating that the AL method exhibits better stability. Lastly, as the sample size n increases, the accuracy of the Alasso–lars algorithm ($ECP_{AL}$) tends toward 1, which aligns with Theorem 2.
To provide a more comprehensive comparison of the four methods, we plot in Figure 4 the changes in ECP and ECP/AM with respect to the sample size n at noise level σ = 2.0, based on the data in Table 2. Several observations can be made. First, AL-AMac and AL-Mac have lower ECP than AMac and Mac. This is because AL-{·} is a two-step method, and its ECP is bounded by the coverage rate of the MSP. Second, as the sample size increases, the ECPs of AL-AMac and AL-Mac quickly catch up with those of AMac and Mac, which indicates the effectiveness of the AL method. Considering the relatively fast computation of the AL methods, they are recommended even in low-dimensional cases with a large sample size. Third, AL-AMac and AL-Mac have a significantly higher ECP/AM, indicating that the MCSs constructed by the AL methods contain fewer models.
In conclusion, when there are a large number of variables leading to the ineffectiveness of methods like AMac and Mac due to excessive computation, choosing AL methods is a more reasonable approach. Additionally, even when the number of variables is not high, selecting AL methods can still yield better results, particularly when dealing with a large sample size. This is evidenced by a shorter computation time, a smaller number of models in MCS, and higher average coverage rates.

5.3. Numerical Performance of AL in High-Dimensional Scenarios

Next, we will simulate the performance of AL-AMac and AL-Mac in the context of high-dimensional variables. The model is set as follows:
$$y_j = \beta_{cons} + \beta_0 x_{0j} + \beta_1 x_{1j} + \beta_2 x_{2j} + \cdots + \beta_p x_{pj} + \beta_{p+1} x_{1j}^2 + \beta_{p+2} x_{2j}^2 + \cdots + \beta_{2p} x_{pj}^2 + \epsilon_j, \quad j = 1, 2, \dots, n,$$
where $x_{0j}$, $j = 1, \dots, n$ are generated from Bernoulli(0.5), and $x_{ij}$, $i = 1, \dots, p$, $j = 1, \dots, n$ are generated from $N(0, 1)$. The x's are fixed throughout the simulation. The errors $\epsilon_j$, $j = 1, \dots, n$ are generated from $N(0, \sigma^2)$. The true model's coefficient vector is set as $\beta = (1, 1, 1, \dots, 1, 0, \dots, 0, 1, 1, 0, \dots, 0)$. The total number of variables is $2p + 1$, with the number of true variables being $2p_{true} + 1$, where $p_{true} < p$; that is, the total number of candidate models is $2^{2p+1}$, and the true model is $y_j = 1 + x_{1j} + x_{2j} + \cdots + x_{p_{true}j} + x_{1j}^2 + \cdots + x_{p_{true}j}^2 + \epsilon_j$, $j = 1, 2, \dots, n$. We set $n = 100, 200, 400$; $B = 400$; $\sigma = 2.0$; $TNS = 500$; and the confidence level to 90% (α = 0.1). The model selection criterion used is still BIC. In the simulation, we set $(2p + 1, 2p_{true} + 1)$ to (11, 3), (21, 5), and (41, 9), respectively. Table 3 presents the simulation results for both methods.
Several observations can be made from Table 3. First, as the sample size increases, both AL-AMac and AL-Mac show a rapid increase in coverage rate, approaching 1, while the number of models in the MCS decreases and also tends toward 1. This indicates that the AL method can achieve the predetermined coverage rate while maintaining a smaller "confidence interval" length. Second, AL-AMac has a higher ECP than AL-Mac. Third, in terms of the coefficient of variation (CV), AL-AMac exhibits better stability than AL-Mac. Finally, the probability $ECP_{AL}$ that the MSP obtained by the AL algorithm contains the true model approaches 1 as the sample size increases, which is consistent with Theorem 2.

6. Real-Data Example

Efron et al. analyzed the "diabetes" dataset using the lars algorithm [21]. The dataset consists of information from 442 patients, including 10 explanatory variables (age, sex, bmi, map, tc, ldl, hdl, tch, ltg, and glu) and a response variable measuring disease progression one year after baseline. Liu et al. conducted a comparative analysis of Mac, MCS, LRT, LBM, and BMS on this dataset [17]. In this study, we also analyze the practical performance of Mac, AMac, AL-Mac, and AL-AMac using the same dataset. We sequentially assign the numerical labels 1, …, 10 to the 10 explanatory variables above.
Figure 5 shows the MCSs constructed by the four methods at confidence levels of 90% and 95% (α = 0.1, 0.05). Several observations can be made. First, as the confidence level increases, the number of models included in the MCSs constructed by the four methods steadily increases. This is expected, since higher confidence levels require more models to guarantee accuracy. Second, compared with AMac and Mac, the number of models included in the MCSs created by AL-AMac and AL-Mac grows more slowly as the confidence level rises. This is because many ineffective models are removed in the first step of the MSP technique; consequently, fewer models are needed to attain the target confidence level, since the criterion f is more discriminating toward the remaining models. Third, the MCSs that AMac constructs are typically larger than those that Mac constructs, meaning that AMac normally offers more coverage.
According to the BIC criterion, the optimal model determined by the all-subset selection algorithm is (2,3,4,7,9), while the optimal model determined by the adaptive lasso algorithm is (2,3,4,5,6,8,9). From Figure 5, it can be observed that the model (2,3,4,7,9) is included in the MCS constructed by AMac, and the model (2,3,4,5,6,8,9) is included in the MCSs constructed by AL-AMac and AL-Mac, which reflects that the results obtained by AMac and AL-{·} are reasonable. Based on the empirical simulations conducted by Liu et al., the models chosen by AMac and AL-AMac in this study, namely, (2,3,4,5,8,9) and (2,3,4,5,6,8,9), respectively, are both encompassed within the MCSs constructed by the BMA and LBM methods. This indicates that the models selected by AMac and AL-AMac are of high quality. When analyzing this dataset at a confidence level of 90% (α = 0.1), the running times of Mac, AMac, AL-Mac, and AL-AMac are 499.27, 1642.57, 5.65, and 5.63 s, respectively. This shows that the AL-{·} methods can greatly reduce the time needed to analyze the data, and the effect becomes more pronounced as the number of explanatory variables increases.

7. Discussion and Conclusions

Researchers now frequently use model combining and MCS to quantify the uncertainty of model selection. Our paper presents a novel approach, named AMac, that builds MCS by fusing the Mac algorithm with the notion of model averaging. Even in situations where noise levels fluctuate significantly, AMac maintains stability and operates efficiently. To further alleviate the computational difficulties that current MCS creation approaches face in high-dimensional variable settings, we propose the MSP method. By using comprehensive simulations, we verify the efficiency of the AL algorithm for building MSP.
The bootstrap method used in this study follows a fixed-model approach: the model parameters are estimated beforehand and simulated data are then generated repeatedly from the estimated parameters [11]. Alternatively, the bootstrap can be implemented by perturbing the data, where the design matrix X remains unchanged and perturbations are added to the original response variable Y to generate new simulated data [30]. Combining these two forms of the bootstrap effectively to provide better simulated probabilities is an area that requires further research.
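The two bootstrap forms can be sketched as follows (hypothetical numpy helpers illustrating the idea, not the paper's implementation; the function names are ours):

```python
import numpy as np

def parametric_bootstrap(X, beta_hat, sigma_hat, rng):
    """Fixed-model bootstrap: regenerate y* from the fitted model, treating
    the estimated parameters as the truth."""
    return X @ beta_hat + rng.normal(0.0, sigma_hat, X.shape[0])

def residual_bootstrap(X, y, beta_hat, rng):
    """Data-perturbation bootstrap: keep the design matrix X fixed and
    perturb the response by resampling centered residuals."""
    resid = y - X @ beta_hat
    resid = resid - resid.mean()
    return X @ beta_hat + rng.choice(resid, size=len(y), replace=True)
```

The first form imposes the fitted model's error distribution, while the second lets the observed residuals speak for themselves; how best to combine them is the open question raised above.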
In practice, the uncertainty of model selection must be taken into account when working with a large number of candidate models for data analysis in order to obtain accurate parameter estimates or prediction outcomes. This research presents a way that further expands the use of model confidence sets, and it is anticipated that this method and its underlying concepts will work well in future model selection uncertainty assessments and model confidence set constructions.

Author Contributions

Conceptualization, F.W.; methodology, F.W., J.J. and Y.L.; software, F.W.; validation, F.W., J.J. and Y.L.; formal analysis, F.W., J.J. and Y.L.; investigation, F.W., J.J. and Y.L.; resources, F.W. and Y.L.; writing—original draft preparation, F.W.; writing—review and editing, J.J. and Y.L.; visualization, F.W.; supervision, J.J. and Y.L.; project administration, F.W., J.J. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China, Grant No. 2018YFA0703900, and the National Science Foundation of China, Grant No. 11971264.

Data Availability Statement

All the data included in this study are available upon request from the corresponding author.

Acknowledgments

We are thankful to the reviewers for their constructive comments, which helped us to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
MCS: model confidence set
MP: model path
Mac: make a cut
AMac: average make a cut
VSCS: variable selection confidence sets
ECP: empirical coverage probability
AM: average model count
CV: coefficient of variation of the model count

Appendix A

Appendix A.1

Proof of Theorem 1.
It can be easily verified that for a given $\hat{M}_i^0$ and $\hat{\psi}_i^0$, there exists a unique value k that satisfies Equation (A1):
$$\begin{aligned}
\sum_{l \in \{l_1, l_2, \dots, l_{k-1}\}} \sum_{i=1}^{N} P_i^*\big(\hat{M}_l^* = \hat{M}_i^0 \,\big|\, \hat{M}_i^0, \hat{\psi}_i^0\big)\, a_i &< 1 - \alpha, \\
\sum_{l \in \{l_1, l_2, \dots, l_k\}} \sum_{i=1}^{N} P_i^*\big(\hat{M}_l^* = \hat{M}_i^0 \,\big|\, \hat{M}_i^0, \hat{\psi}_i^0\big)\, a_i &\ge 1 - \alpha.
\end{aligned}$$ (A1)
While $k^*$ is related to the bootstrap samples, it converges to k under $\hat{M}_i^0$, $\hat{\psi}_i^0$, and $P_i^*(\cdot) = P(\cdot \mid \hat{M}_i^0, \hat{\psi}_i^0)$. Thus, according to Equation (9), the following Equation (A2) holds:
$$\begin{aligned}
1 - \alpha &\le \sum_{l \in \{l_1, \dots, l_{k^*}\}} \sum_{i=1}^{N} \frac{1}{B} \sum_{b=1}^{B} I\big(\hat{M}^{*[b]}_{i,l} = \hat{M}_i^0\big)\, a_i \\
&= \sum_{l \in \{l_1, \dots, l_k\}} \sum_{i=1}^{N} \frac{1}{B} \sum_{b=1}^{B} I\big(\hat{M}^{*[b]}_{i,l} = \hat{M}_i^0\big)\, a_i + o_{p^*_{1,\dots,N}}(1) \\
&= \sum_{l \in \{l_1, \dots, l_k\}} \sum_{i=1}^{N} \Big[ P\big(\hat{M}^*_l = \hat{M}_i^0 \,\big|\, \hat{M}_i^0, \hat{\psi}_i^0\big) + o_{p^*_i}(1) \Big]\, a_i + o_{p^*_{1,\dots,N}}(1) \\
&= \sum_{l \in \{l_1, \dots, l_k\}} \sum_{i=1}^{N} P\big(\hat{M}^*_l = \hat{M}_i^0 \,\big|\, \hat{M}_i^0, \hat{\psi}_i^0\big)\, a_i + \sum_{i=1}^{N} k\, o_{p^*_i}(1) + o_{p^*_{1,\dots,N}}(1) \\
&= \sum_{i=1}^{N} P\big(\hat{M}_i^0 \in \{\hat{M}^*_{l_1}, \dots, \hat{M}^*_{l_k}\} \,\big|\, \hat{M}_i^0, \hat{\psi}_i^0\big)\, a_i + \sum_{i=1}^{N} k\, o_{p^*_i}(1) + o_{p^*_{1,\dots,N}}(1) \\
&= \sum_{i=1}^{N} P\big(M_{opt} \in \{\hat{M}^*_{l_1}, \dots, \hat{M}^*_{l_k}\} \,\big|\, M_{opt}, \hat{\psi}_i\big)\, a_i\, I\big(\hat{M}_i^0 = M_{opt}\big) \\
&\quad + \sum_{i=1}^{N} P\big(\hat{M}_i^0 \in \{\hat{M}^*_{l_1}, \dots, \hat{M}^*_{l_k}\} \,\big|\, \hat{M}_i^0, \hat{\psi}_i^0\big)\, a_i\, I\big(\hat{M}_i^0 \ne M_{opt}\big) + \sum_{i=1}^{N} k\, o_{p^*_i}(1) + o_{p^*_{1,\dots,N}}(1) \\
&\le \sum_{i=1}^{N} P\big(M_{opt} \in \{\hat{M}^*_{l_1}, \dots, \hat{M}^*_{l_k}\} \,\big|\, M_{opt}, \hat{\psi}_{opt}\big)\, a_i\, I\big(\hat{M}_i^0 = M_{opt}\big) + \sum_{i=1}^{N} a_i\, I\big(\hat{M}_i^0 \ne M_{opt}\big) \\
&\quad + \sum_{i=1}^{N} k\, o_{p^*_i}(1) + o_{p^*_{1,\dots,N}}(1),
\end{aligned}$$ (A2)
where $o_{p^*_i}(1)$ is with respect to $P_i^*(\cdot)$, $i = 1, 2, \dots, N$; $o_{p^*_{1,\dots,N}}(1)$ is with respect to $P_i^*(\cdot)$, $i = 1, 2, \dots, N$, jointly; and $\hat{\psi}_{opt}$ is an estimator of $\psi_{opt}$. According to Assumption (A3), for any $\hat{\psi}_{opt}$, we have the following:
$$P\big(M_{opt} \in \{\hat{M}^*_{l_1}, \dots, \hat{M}^*_{l_k}\} \,\big|\, M_{opt}, \hat{\psi}_{opt}\big) \le P\big(M_{opt} \in \{\hat{M}^*_{l_1}, \dots, \hat{M}^*_{l_k}\} \,\big|\, M_{opt}, \psi_{opt}\big) + c\,\|\hat{\psi}_{opt} - \psi_{opt}\| = P\big(M_{opt} \in \{\hat{M}^0_{l_1}, \dots, \hat{M}^0_{l_k}\}\big) + c\,\|\hat{\psi}_{opt} - \psi_{opt}\|,$$ (A3)
using the fact that, when the samples are drawn under ( M o p t , ψ o p t ) , there is no need to have the stars in the notation.
Combining (A2) and (A3), we have the following:
$$1 - \alpha \le \sum_{i=1}^{N} \Big[ P\big(M_{opt} \in \{\hat{M}^0_{l_1}, \dots, \hat{M}^0_{l_k}\}\big) + c\,\|\hat{\psi}_{opt} - \psi_{opt}\| \Big]\, a_i\, I\big(\hat{M}_i^0 = M_{opt}\big) + \sum_{i=1}^{N} a_i\, I\big(\hat{M}_i^0 \ne M_{opt}\big) + \sum_{i=1}^{N} k\, o_{p^*_i}(1) + o_{p^*_{1,\dots,N}}(1).$$ (A4)
It is easy to see that $o_{p^*_i}(1)$ and $o_{p^*_{1,\dots,N}}(1)$ on the right side of (A4) are bounded quantities. Therefore, by the dominated convergence theorem [31], we have the following:
$$E\, o_{p^*_i}(1) = o(1), \qquad E\, o_{p^*_{1,\dots,N}}(1) = o(1),$$ (A5)
where E is with respect to the joint distribution of y and $y^{[b]}$. We now take the expectation on both sides of (A4). Note that the probability $P\big(M_{opt} \in \{\hat{M}^0_{l_1}, \dots, \hat{M}^0_{l_k}\}\big)$ is nonrandom and
$$E\Big[\|\hat{\psi}_{opt} - \psi_{opt}\|\, I\big(\hat{M}_i^0 = M_{opt}\big)\Big] = E\Big\{ E\Big[\|\hat{\psi}_{opt} - \psi_{opt}\|\, I\big(\hat{M}_i^0 = M_{opt}\big) \,\Big|\, y \Big] \Big\} = E\Big[\|\hat{\psi}_{opt} - \psi_{opt}\|\, I\big(\hat{M}_i^0 = M_{opt}\big)\Big],$$
with the latter expectation with respect to y only. Therefore, by taking the expectation on both sides of Equation (A4), we have the following:
$$1 - \alpha \le P\big(M_{opt} \in \{\hat{M}^0_{l_1}, \dots, \hat{M}^0_{l_k}\}\big) \sum_{i=1}^{N} a_i\, P\big(\hat{M}_i^0 = M_{opt}\big) + c \sum_{i=1}^{N} E\Big[\|\hat{\psi}_{opt} - \psi_{opt}\|\, I\big(\hat{M}_i^0 = M_{opt}\big)\Big]\, a_i + \sum_{i=1}^{N} a_i\, P\big(\hat{M}_i^0 \ne M_{opt}\big) + o(1).$$
Finally, when $\sum_{i=1}^{N} a_i\, P\big(\hat{M}_i^0 = M_{opt}\big) > 0$, it can be observed that $P\big(M_{opt} \in \{\hat{M}^0_{l_1}, \dots, \hat{M}^0_{l_k}\}\big) = P\big(M_{opt} \in \{\tilde{M}^0_1, \dots, \tilde{M}^0_k\}\big)$. Thus, the proof is complete. □

Appendix A.2

Proof of Theorem 2.
For any $\gamma > 0$, as n tends toward infinity, if $\lambda_n$ satisfies $\lambda_n/\sqrt{n} \to 0$ and $\lambda_n n^{(\gamma-1)/2} \to \infty$, then the corresponding $\hat{\beta}^{(n)}$ satisfies $\lim_{n \to +\infty} P(\mathcal{A}_n = \mathcal{A}) = 1$ [22]. Therefore, for $\gamma > 0$ ($\gamma = 1$ is a special case), as n tends toward infinity, there exists $\lambda_n$ such that the model formed by all nonzero variables in the solution of Equation (7) corresponds to the true model. Since the solutions of Equations (8) and (7) are equivalent in terms of whether a variable is zero or nonzero, the model formed by all nonzero variables in the solution of Equation (8) at $\lambda_n$ also corresponds to the true model.
It is easy to observe that Equation (8) is a classic lasso problem, and its solution path in $\lambda_n$ is piecewise linear. The lars algorithm can efficiently provide all solutions for $\lambda_n \in [0, \infty)$ [21]. In each iteration, the lars algorithm explicitly provides all solutions for $\lambda_n \in [a, b]$, $0 \le a < b < \infty$, where the specific values of a and b are determined by the current iteration. It is noteworthy that the active set of variables remains unchanged throughout each iteration. Assuming that $\lambda_n$ falls within the range covered by the lars algorithm in the i-th step, and denoting the model formed by the active variables at this point as $M_i$, it can be stated that $\lim_{n \to +\infty} P(M_{opt} = M_i) = 1$. Let the lars algorithm progress for a total of N steps and consider the model path $M_r = \{M_1, \dots, M_N\} = \{M'_1, \dots, M'_{N'}\}$, where $M'_i$, $i = 1, \dots, N'$ are all distinct. Then, there exists a specific $l \in \{1, \dots, N'\}$ such that $\lim_{n \to +\infty} P(M_{opt} = M'_l) = 1$. Finally, by the additivity of probability over the distinct models, we have the following:
$$\lim_{n \to +\infty} P(M_{opt} \in M_r) = \lim_{n \to +\infty} P\big(M_{opt} \in \{M'_1, \dots, M'_{N'}\}\big) = \lim_{n \to +\infty} \sum_{i=1}^{N'} P(M_{opt} = M'_i) = \sum_{i=1}^{N'} \lim_{n \to +\infty} P(M_{opt} = M'_i) = \lim_{n \to +\infty} P(M_{opt} = M'_l) = 1.$$

References

1. Preacher, K.; Merkle, E. The problem of model selection uncertainty in structural equation modeling. Psychol. Methods 2012, 17, 1.
2. Ding, J.; Tarokh, V.; Yang, Y. Model selection techniques: An overview. IEEE Signal Process. Mag. 2018, 35, 6–34.
3. Draper, D. Assessment and propagation of model uncertainty. J. R. Stat. Soc. Ser. B Stat. Methodol. 1995, 57, 45–70.
4. Chatfield, C. Model uncertainty, data mining and statistical inference. J. R. Stat. Soc. Ser. A Stat. Soc. 1995, 158, 419–444.
5. Lubke, G.; Campbell, I.; McArtor, D.; Miller, P.; Luningham, J.; van den Berg, S. Assessing model selection uncertainty using a bootstrap approach: An update. Struct. Equ. Model. Multidiscip. J. 2017, 158, 230–245.
6. Claeskens, G.; Hjort, N. Model Selection and Model Averaging; Cambridge University Press: Cambridge, UK, 2008.
7. Yang, Y. Adaptive regression by mixing. J. Am. Stat. Assoc. 2001, 96, 574–588.
8. Yang, Y. Regression with multiple candidate models: Selecting or mixing? Stat. Sin. 2003, 13, 783–809.
9. Hoeting, J.; Madigan, D.; Raftery, A.; Volinsky, C. Bayesian model averaging: A tutorial (with comments by M. Clyde, David Draper and EI George, and a rejoinder by the authors). Stat. Sci. 1999, 14, 382–417.
10. Chipman, H.; George, E.; McCulloch, R.; Clyde, M.; Foster, D.; Stine, R. The practical implementation of Bayesian model selection. Lect. Notes Monogr. Ser. 2001, 38, 65–134.
11. Chen, L.; Giannakouros, P.; Yang, Y. Model combining in factorial data analysis. J. Stat. Plan. Inference 2007, 137, 2920–2934.
12. Hansen, P.; Lunde, A.; Nason, J. The model confidence set. Econometrica 2011, 79, 453–497.
13. Lubke, G.; Campbell, I. Inference based on the best-fitting model can contribute to the replication crisis: Assessing model selection uncertainty using a bootstrap approach. Struct. Equ. Model. Multidiscip. J. 2016, 23, 479–490.
14. Ferrari, D.; Yang, Y. Confidence sets for model selection by F-testing. Stat. Sin. 2015, 1637–1658.
15. Zheng, C.; Ferrari, D.; Yang, Y. Model selection confidence sets by likelihood ratio testing. Stat. Sin. 2019, 29, 827–851.
16. Zheng, C.; Ferrari, D.; Zhang, M.; Baird, P. Ranking the importance of genetic factors by variable-selection confidence sets. J. R. Stat. Soc. Ser. C Appl. Stat. 2019, 68, 727–749.
17. Liu, X.; Li, Y.; Jiang, J. Simple measures of uncertainty for model selection. Test 2021, 30, 673–692.
18. Donoho, D. High-dimensional data analysis: The curses and blessings of dimensionality. AMS Math Chall. Lect. 2000, 1, 32.
19. Li, Y.; Luo, Y.; Ferrari, D.; Hu, X.; Qin, Y. Model confidence bounds for variable selection. Biometrics 2019, 75, 392–403.
20. Li, Y.; Jiang, J. Measures of Uncertainty for Shrinkage Model Selection. Stat. Sin. 2023, preprint.
21. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least Angle Regression. Ann. Stat. 2004, 32, 407–499.
22. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
23. Yuan, Z.; Yang, Y. Combining linear regression models: When and how? J. Am. Stat. Assoc. 2005, 100, 1202–1214.
24. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Statist. 1979, 7, 1–26.
25. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288.
26. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
27. Yuan, M.; Lin, Y. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 2006, 68, 49–67.
28. Rosset, S.; Zhu, J. Piecewise linear regularized solution paths. Ann. Stat. 2007, 1012–1030.
29. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 461–464.
30. Breiman, L. Heuristics of instability and stabilization in model selection. Ann. Stat. 1996, 24, 2350–2383.
31. Jiang, J. Large Sample Techniques for Statistics; Springer: New York, NY, USA, 2010.
Figure 1. ECP and ECP/AM for AMac and Mac at a 90% confidence level (α = 0.1).
Figure 2. ECP variation with noise for AMac and Mac.
Figure 3. Running time of the four methods.
Figure 4. Relationship between ECP, ECP/AM and sample size for the four methods.
Figure 5. The MCS constructed by the four methods. The left panel (a) displays the heatmaps of MCS constructed by the four methods at a 90% confidence level. The horizontal axis represents the names of explanatory variables, while the vertical axis lists the methods used to construct MCS and the number of models included in MCS constructed by each method. In the figure, each row represents a model, and a blank cell indicates that the corresponding explanatory variable is not included in the model. For example, the MCS constructed by the Mac method includes two models. The first model is composed of explanatory variables sex, bmi, map, hdl, ltg, while the second model is composed of the explanatory variables sex, bmi, map, tc, ldl, and ltg. The right panel (b) displays the heatmaps of model confidence sets (MCS) constructed by the four methods at a 95% confidence level.
Table 1. Simulation results of AMac and Mac under different parameters.
Columns are grouped left to right as σ = 1.0, σ = 2.0, and σ = 3.0; each group reports ECP, AM, ECP/AM, and CV.

| Level (%) | n | Method | ECP | AM | ECP/AM | CV | ECP | AM | ECP/AM | CV | ECP | AM | ECP/AM | CV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 90 | 100 | AMac | 0.997 | 1.188 | 0.840 | 0.333 | 0.818 | 1.895 | 0.432 | 0.329 | 0.619 | 2.290 | 0.271 | 0.353 |
| | | Mac | 0.996 | 1.131 | 0.880 | 0.308 | 0.699 | 1.782 | 0.392 | 0.411 | 0.561 | 2.132 | 0.263 | 0.420 |
| | 125 | AMac | 0.999 | 1.108 | 0.902 | 0.280 | 0.853 | 1.752 | 0.487 | 0.313 | 0.649 | 2.116 | 0.307 | 0.374 |
| | | Mac | 0.999 | 1.059 | 0.943 | 0.224 | 0.738 | 1.640 | 0.450 | 0.388 | 0.571 | 1.976 | 0.289 | 0.450 |
| | 150 | AMac | 0.999 | 1.080 | 0.925 | 0.251 | 0.887 | 1.682 | 0.527 | 0.316 | 0.674 | 1.986 | 0.340 | 0.374 |
| | | Mac | 0.999 | 1.038 | 0.963 | 0.184 | 0.793 | 1.584 | 0.500 | 0.373 | 0.582 | 1.863 | 0.312 | 0.449 |
| | 175 | AMac | 1.000 | 1.062 | 0.941 | 0.226 | 0.924 | 1.619 | 0.571 | 0.325 | 0.688 | 1.887 | 0.365 | 0.368 |
| | | Mac | 1.000 | 1.027 | 0.974 | 0.156 | 0.847 | 1.534 | 0.552 | 0.367 | 0.581 | 1.762 | 0.330 | 0.443 |
| | 200 | AMac | 0.999 | 1.056 | 0.947 | 0.217 | 0.944 | 1.557 | 0.607 | 0.339 | 0.709 | 1.809 | 0.392 | 0.356 |
| | | Mac | 0.999 | 1.024 | 0.976 | 0.148 | 0.887 | 1.487 | 0.597 | 0.365 | 0.592 | 1.688 | 0.351 | 0.430 |
| 95 | 100 | AMac | 1.000 | 1.419 | 0.705 | 0.373 | 0.939 | 2.485 | 0.378 | 0.287 | 0.785 | 3.003 | 0.262 | 0.305 |
| | | Mac | 0.999 | 1.341 | 0.746 | 0.379 | 0.920 | 2.343 | 0.393 | 0.297 | 0.745 | 2.749 | 0.271 | 0.348 |
| | 125 | AMac | 1.000 | 1.230 | 0.813 | 0.351 | 0.949 | 2.261 | 0.420 | 0.286 | 0.818 | 2.832 | 0.290 | 0.318 |
| | | Mac | 1.000 | 1.149 | 0.870 | 0.319 | 0.914 | 2.150 | 0.426 | 0.302 | 0.782 | 2.623 | 0.299 | 0.354 |
| | 150 | AMac | 1.000 | 1.144 | 0.874 | 0.310 | 0.956 | 2.121 | 0.451 | 0.307 | 0.841 | 2.608 | 0.323 | 0.331 |
| | | Mac | 1.000 | 1.070 | 0.935 | 0.241 | 0.904 | 2.017 | 0.448 | 0.335 | 0.795 | 2.433 | 0.327 | 0.364 |
| | 175 | AMac | 1.000 | 1.106 | 0.905 | 0.278 | 0.959 | 1.979 | 0.485 | 0.321 | 0.841 | 2.472 | 0.341 | 0.341 |
| | | Mac | 1.000 | 1.036 | 0.965 | 0.179 | 0.902 | 1.889 | 0.478 | 0.352 | 0.776 | 2.302 | 0.338 | 0.380 |
| | 200 | AMac | 1.000 | 1.084 | 0.923 | 0.255 | 0.969 | 1.871 | 0.518 | 0.337 | 0.848 | 2.337 | 0.363 | 0.345 |
| | | Mac | 1.000 | 1.024 | 0.976 | 0.149 | 0.915 | 1.791 | 0.511 | 0.367 | 0.765 | 2.171 | 0.353 | 0.392 |
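The summary statistics reported in Table 1 can be sketched from replicated simulations as follows. The definitions here are assumptions rather than the authors' exact computations: ECP is taken as the fraction of replications whose MCS contains the true model, AM as the average number of models in the MCS, and CV as the coefficient of variation of that count.

```python
# Sketch of Table 1-style summary statistics over simulation replications.
# ECP, AM, and CV definitions below are assumptions, not the authors' code.
from statistics import mean, pstdev

def summarize(replications, true_model):
    """replications: list of MCS, each MCS a list of frozensets of variables."""
    true = frozenset(true_model)
    covered = [any(m == true for m in mcs) for mcs in replications]
    sizes = [len(mcs) for mcs in replications]
    ecp = mean(covered)                      # empirical coverage probability
    am = mean(sizes)                         # average MCS cardinality
    cv = pstdev(sizes) / am if am else float("nan")
    return {"ECP": ecp, "AM": am, "ECP/AM": ecp / am, "CV": cv}

# Tiny worked example with 4 mock replications.
truth = {"x1", "x2"}
reps = [
    [frozenset({"x1", "x2"})],                     # covers truth, size 1
    [frozenset({"x1"}), frozenset({"x1", "x2"})],  # covers truth, size 2
    [frozenset({"x1"})],                           # misses truth, size 1
    [frozenset({"x1", "x2"}), frozenset({"x2"})],  # covers truth, size 2
]
s = summarize(reps, truth)
print(s)  # ECP = 0.75, AM = 1.5
```

Under these definitions, a method that keeps ECP near the nominal level while holding AM small scores well on the ECP/AM column.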
Table 2. Simulation results for the four methods at 90% confidence level.
Columns are grouped left to right as σ = 1.0, σ = 2.0, and σ = 3.0; each group reports ECP, AM, ECP/AM, CV, and ECP(AL). ECP(AL) is not applicable (NA) to the AMac and Mac methods.

| n | Method | ECP | AM | ECP/AM | CV | ECP(AL) | ECP | AM | ECP/AM | CV | ECP(AL) | ECP | AM | ECP/AM | CV | ECP(AL) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | AMac | 0.984 | 2.368 | 0.416 | 0.305 | NA | 0.898 | 3.454 | 0.260 | 0.398 | NA | 0.646 | 4.228 | 0.153 | 0.406 | NA |
| | Mac | 0.982 | 2.222 | 0.442 | 0.318 | NA | 0.868 | 3.148 | 0.276 | 0.486 | NA | 0.578 | 3.600 | 0.161 | 0.546 | NA |
| | AL-AMac | 0.978 | 1.412 | 0.693 | 0.369 | 1.000 | 0.756 | 1.852 | 0.408 | 0.295 | 0.856 | 0.344 | 1.614 | 0.213 | 0.366 | 0.500 |
| | AL-Mac | 0.976 | 1.224 | 0.797 | 0.356 | 1.000 | 0.644 | 1.650 | 0.390 | 0.372 | 0.856 | 0.206 | 1.330 | 0.155 | 0.396 | 0.500 |
| 125 | AMac | 0.980 | 2.162 | 0.453 | 0.282 | NA | 0.912 | 2.892 | 0.315 | 0.359 | NA | 0.726 | 3.642 | 0.199 | 0.457 | NA |
| | Mac | 0.976 | 2.088 | 0.467 | 0.311 | NA | 0.892 | 2.676 | 0.333 | 0.412 | NA | 0.678 | 3.298 | 0.206 | 0.594 | NA |
| | AL-AMac | 0.974 | 1.338 | 0.728 | 0.378 | 1.000 | 0.836 | 1.818 | 0.460 | 0.288 | 0.886 | 0.488 | 1.730 | 0.282 | 0.336 | 0.630 |
| | AL-Mac | 0.968 | 1.162 | 0.833 | 0.331 | 1.000 | 0.716 | 1.620 | 0.442 | 0.358 | 0.886 | 0.298 | 1.430 | 0.208 | 0.418 | 0.630 |
| 150 | AMac | 0.984 | 1.876 | 0.525 | 0.361 | NA | 0.908 | 2.634 | 0.345 | 0.338 | NA | 0.756 | 3.298 | 0.229 | 0.424 | NA |
| | Mac | 0.984 | 1.762 | 0.558 | 0.396 | NA | 0.900 | 2.516 | 0.358 | 0.374 | NA | 0.720 | 3.078 | 0.234 | 0.489 | NA |
| | AL-AMac | 0.980 | 1.282 | 0.764 | 0.358 | 1.000 | 0.838 | 1.710 | 0.490 | 0.300 | 0.898 | 0.564 | 1.728 | 0.326 | 0.316 | 0.686 |
| | AL-Mac | 0.978 | 1.112 | 0.879 | 0.290 | 1.000 | 0.760 | 1.578 | 0.482 | 0.345 | 0.898 | 0.430 | 1.502 | 0.286 | 0.376 | 0.686 |
| 175 | AMac | 0.990 | 1.658 | 0.597 | 0.434 | NA | 0.952 | 2.542 | 0.375 | 0.386 | NA | 0.784 | 3.062 | 0.256 | 0.448 | NA |
| | Mac | 0.990 | 1.522 | 0.650 | 0.464 | NA | 0.940 | 2.452 | 0.383 | 0.455 | NA | 0.750 | 2.848 | 0.263 | 0.520 | NA |
| | AL-AMac | 0.984 | 1.230 | 0.800 | 0.354 | 1.000 | 0.896 | 1.744 | 0.514 | 0.316 | 0.950 | 0.578 | 1.730 | 0.334 | 0.346 | 0.712 |
| | AL-Mac | 0.982 | 1.092 | 0.899 | 0.277 | 1.000 | 0.848 | 1.612 | 0.526 | 0.356 | 0.950 | 0.454 | 1.512 | 0.300 | 0.396 | 0.712 |
| 200 | AMac | 0.984 | 1.438 | 0.684 | 0.457 | NA | 0.964 | 2.374 | 0.406 | 0.417 | NA | 0.790 | 2.798 | 0.282 | 0.435 | NA |
| | Mac | 0.986 | 1.310 | 0.753 | 0.503 | NA | 0.932 | 2.280 | 0.409 | 0.484 | NA | 0.750 | 2.612 | 0.287 | 0.512 | NA |
| | AL-AMac | 0.978 | 1.202 | 0.814 | 0.338 | 1.000 | 0.904 | 1.678 | 0.539 | 0.332 | 0.962 | 0.620 | 1.694 | 0.366 | 0.332 | 0.768 |
| | AL-Mac | 0.978 | 1.094 | 0.894 | 0.279 | 1.000 | 0.856 | 1.574 | 0.544 | 0.362 | 0.962 | 0.492 | 1.502 | 0.328 | 0.383 | 0.768 |
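The AL- prefix in Table 2 involves an adaptive-lasso step, in line with the paper's use of adaptive lasso for building model selection paths (reading the prefix this way is an assumption). A minimal sketch of the standard adaptive-lasso reweighting trick, assuming scikit-learn; the synthetic data, γ = 1, and the lasso penalty α = 0.05 are illustrative choices, not the authors' settings:

```python
# Sketch: adaptive lasso via column reweighting (illustrative, not the
# authors' implementation). Pilot OLS gives weights w_j = 1/|b_j|^gamma;
# a plain lasso is then fit on X_j / w_j and coefficients are mapped back.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [4.0, -3.0, 2.5]          # true active set {0, 3, 7}
y = X @ beta + 0.1 * rng.standard_normal(n)

# Step 1: pilot OLS fit yields the adaptive weights.
pilot = LinearRegression().fit(X, y).coef_
gamma = 1.0
w = 1.0 / (np.abs(pilot) ** gamma + 1e-8)

# Step 2: ordinary lasso on the reweighted design X_j / w_j.
lasso = Lasso(alpha=0.05).fit(X / w, y)

# Step 3: rescale coefficients back to the original design.
coef = lasso.coef_ / w
selected = set(np.flatnonzero(np.abs(coef) > 1e-6))
print(selected)
```

Because the weights blow up the penalty on variables with near-zero pilot estimates, the screened set tends to concentrate on the truly active variables, which is what makes the AL- variants' MCS markedly smaller in Table 2.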
Table 3. Simulation results of AL-AMac and AL-Mac with α = 0.1 and σ = 2.0.
All results are at σ = 2.0; Varia_num gives the pair (number of candidate variables, number of truly active variables).

| Varia_num | n | Method | ECP | AM | ECP/AM | CV | ECP(AL) |
|---|---|---|---|---|---|---|---|
| (11, 3) | 100 | AL-AMac | 0.616 | 2.010 | 0.306 | 0.303 | 0.670 |
| | | AL-Mac | 0.556 | 1.734 | 0.321 | 0.393 | 0.670 |
| | 200 | AL-AMac | 0.888 | 1.788 | 0.497 | 0.322 | 0.936 |
| | | AL-Mac | 0.862 | 1.630 | 0.529 | 0.375 | 0.936 |
| | 400 | AL-AMac | 0.980 | 1.300 | 0.754 | 0.363 | 0.996 |
| | | AL-Mac | 0.980 | 1.194 | 0.821 | 0.344 | 0.996 |
| (21, 5) | 100 | AL-AMac | 0.496 | 2.432 | 0.204 | 0.291 | 0.584 |
| | | AL-Mac | 0.448 | 2.160 | 0.207 | 0.408 | 0.584 |
| | 200 | AL-AMac | 0.776 | 2.038 | 0.381 | 0.317 | 0.860 |
| | | AL-Mac | 0.742 | 1.820 | 0.408 | 0.401 | 0.860 |
| | 400 | AL-AMac | 0.920 | 1.554 | 0.592 | 0.392 | 0.994 |
| | | AL-Mac | 0.904 | 1.350 | 0.670 | 0.414 | 0.994 |
| (41, 9) | 100 | AL-AMac | 0.252 | 2.872 | 0.088 | 0.299 | 0.402 |
| | | AL-Mac | 0.224 | 2.608 | 0.086 | 0.422 | 0.402 |
| | 200 | AL-AMac | 0.600 | 2.422 | 0.248 | 0.308 | 0.810 |
| | | AL-Mac | 0.564 | 2.100 | 0.269 | 0.438 | 0.810 |
| | 400 | AL-AMac | 0.826 | 1.908 | 0.433 | 0.360 | 0.984 |
| | | AL-Mac | 0.802 | 1.600 | 0.501 | 0.458 | 0.984 |

Wen, F.; Jiang, J.; Luan, Y. Model Selection Path and Construction of Model Confidence Set under High-Dimensional Variables. Mathematics 2024, 12, 664. https://doi.org/10.3390/math12050664

