Article

A New Quantile-Based Approach for LASSO Estimation

1 Department of Statistics, Quaid-i-Azam University, Islamabad 45320, Pakistan
2 Department of Statistics and Operation Research, College of Science, Qassim University, Buraydah 51482, Saudi Arabia
3 Department of Basic Sciences, College of Science and Theoretical Studies, Saudi Electronic University, Riyadh 11673, Saudi Arabia
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(6), 1452; https://doi.org/10.3390/math11061452
Submission received: 26 January 2023 / Revised: 13 March 2023 / Accepted: 14 March 2023 / Published: 16 March 2023
(This article belongs to the Special Issue Computational Statistics and Data Analysis)

Abstract:
Regularization regression techniques are widely used to overcome a model’s parameter estimation problem in the presence of multicollinearity. Several biased estimation techniques are available in the literature, including ridge regression, the Least Absolute Shrinkage and Selection Operator (LASSO), and the elastic net. In this work, we study the performance of the classical LASSO, the adaptive LASSO, and ordinary least squares (OLS) methods in high-multicollinearity scenarios and propose some new estimators for the LASSO parameter “k”. The performance of the proposed estimators is evaluated using extensive Monte Carlo simulations and real-life examples. Based on the mean square error criterion, the results suggest that the proposed estimators outperform the existing ones.
MSC:
62J05; 62J07; 62H20; 65C05; 00A72

1. Introduction

Regression analysis is a statistical technique concerned with predicting a response variable from one or more regressors. The standard linear regression model assumes that there is no strong correlation among the predictors, as such correlation leads to the problem of so-called multicollinearity [1]. However, in many fields of study, the regressors can be highly intercorrelated. For instance, when studying the effect of weight and age on a person’s blood sugar level, both predictors can be highly correlated, since weight tends to increase with age. Similarly, if the age and price of a car are used as regressors to predict the car’s average selling time, both predictors can be highly intercorrelated. In the presence of highly correlated regressors, ordinary least squares (OLS) estimation can lead to misleading inferences: the OLS estimators become highly unstable, with inflated standard errors, and in the case of perfect multicollinearity they cannot be estimated uniquely [2].
In such situations, a very popular and successful approach in statistical modeling is to use regularization regression techniques, such as ridge regression or the Least Absolute Shrinkage and Selection Operator (LASSO) [3,4]. The main idea behind these techniques is to introduce biased estimators by penalizing the OLS criterion, which decreases the overall standard error of the estimator. By minimizing both the empirical error and the penalty, one can find a model that fits well and is also “simple”, avoiding the large variance that can occur when estimating complex models. However, ridge regression cannot generate a parsimonious model because it retains all predictors in the model [5]. On the other hand, best-subset selection generates a sparse model, but because of its inherent discreteness, it is extremely variable, as discussed in [6]. To cope with these problems, the LASSO [4] offers a good compromise. Its popularity stems in part from the regularization induced by its L1 penalty, which results in sparse solutions. The LASSO shrinks the estimated coefficient vector toward the origin (in the L1 sense) for different values of k, usually setting some of the coefficients exactly to zero. As a result, the LASSO blends the characteristics of ridge regression and subset selection and aims to be a useful method for variable selection. The main idea behind the LASSO is to introduce bias in order to decrease the standard error of the estimator. However, bias and variance trade off against each other: decreasing one typically increases the other. This trade-off is controlled by the LASSO parameter k, known as the tuning parameter. References [4,7] compared the predictive performance of the LASSO, ridge, and bridge regression and found that none of them uniformly dominated the other two [8]. Although the LASSO has been proven effective in various scenarios, it has some limitations. Consider the three scenarios below:
  • In the case of p > n , the LASSO selects at most n variables before it saturates because of the nature of the convex optimization problem. This seems to be a limiting feature for a variable selection method. Moreover, the LASSO is not well defined unless the bound on the L1-norm of the coefficients is smaller than a certain value [9].
  • The LASSO cannot perform group selection. If a group of predictors is highly correlated among themselves, the LASSO tends to pick only one of them and shrinks the others to zero [10].
  • For usual n > p situations, if there are high correlations between the predictors, then it has been empirically observed that the prediction performance of the LASSO is dominated by ridge regression [4].
To address these limitations, several modifications of the LASSO have been proposed in the literature, namely the adaptive LASSO [10], the fused LASSO [11], the group LASSO [12], the elastic net [9], the degrees of freedom of the LASSO [13], and the square-root LASSO [14]. In addition, different researchers have proposed various methods for estimating the LASSO parameter “k”; see [15,16,17,18,19,20,21,22] and the references cited therein.
This paper aims to study the performance of the LASSO and adaptive LASSO in handling severe multicollinearity among independent variables in the context of multiple regression analysis, using Monte Carlo simulations and real-life examples. Furthermore, we propose some new estimators for the LASSO parameter “k” using a quantile-based approach and compare them with existing estimators.
The rest of this paper is structured as follows. The general methodology as well as the proposed methods for the estimation of the LASSO parameter “k” are described in Section 2. Section 3 contains information about the simulation settings, while the simulation results are discussed in Section 4. In Section 5, the performance of the proposals, as well as existing LASSO methods, is evaluated using real data. Finally, some concluding remarks are discussed in Section 6.

2. Methodology

Consider the following linear regression model:
$$Y = X\beta + \epsilon$$
where $Y = (Y_1, Y_2, \ldots, Y_n)'$ is an $n \times 1$ vector of the response variable and
$$X = \begin{pmatrix} X_{11} & X_{12} & \cdots & X_{1p} \\ X_{21} & X_{22} & \cdots & X_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n1} & X_{n2} & \cdots & X_{np} \end{pmatrix}$$
is an $n \times p$ matrix (also known as the design matrix) of the observed regressors, $\beta = (\beta_1, \beta_2, \ldots, \beta_p)'$ is a $p \times 1$ vector of unknown regression parameters, and $\epsilon = (\epsilon_1, \epsilon_2, \ldots, \epsilon_n)'$ is an $n \times 1$ vector of random errors. It is assumed that $\epsilon$ is normally distributed with zero mean and covariance matrix $\sigma^2 I_n$, where $I_n$ denotes the identity matrix of order $n$. In general, the parameter $\beta$ is estimated by OLS, which minimizes the following sum of squared differences:
$$\hat{\beta}_{ols} = \underset{\beta \in \mathbb{R}^p}{\arg\min} \; \|Y - X\beta\|_2^2$$
where $\|\cdot\|_2$ denotes the $L_2$ norm. As a result, the OLS estimator $\hat{\beta}_{ols}$ can be written as
$$\hat{\beta}_{ols} = (X'X)^{-1}X'Y$$
and its covariance can be computed as follows:
$$\mathrm{cov}(\hat{\beta}_{ols}) = \sigma^2 (X'X)^{-1}.$$
Note that both the estimator and its covariance depend heavily on the matrix $(X'X)^{-1}$. However, it is well known that in the presence of high correlation among the predictors, the matrix $X'X$ is ill-conditioned; consequently, $\hat{\beta}_{ols}$ is highly unstable and has a large variance [5]. To cope with this issue, the LASSO is an alternative estimation procedure, defined as
$$\hat{\beta}_{lasso} = \underset{\beta \in \mathbb{R}^p}{\arg\min} \; \|Y - X\beta\|_2^2 + k\|\beta\|_1$$
where the first term assesses the fit while the second term penalizes the parameter $\beta$. The parameter $k$ is called the LASSO parameter and controls the trade-off between the fit and the penalty. Thus, the choice of $k$ is an important task in conducting LASSO regression.
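To make the role of the tuning parameter concrete, the following R sketch fits OLS and the LASSO to simulated collinear data using the glmnet package. It is an illustrative sketch rather than the code used in this study; the data-generating values are arbitrary, and glmnet's lambda argument plays the role of k only up to the package's internal scaling conventions.

```r
library(glmnet)

set.seed(1)
n <- 100; p <- 4; rho <- 0.95
Z <- matrix(rnorm(n * (p + 1)), n, p + 1)
X <- sqrt(1 - rho^2) * Z[, 1:p] + rho * Z[, p + 1]   # highly intercorrelated predictors
beta_true <- rep(1, p)                               # illustrative true coefficients (assumption)
y <- drop(X %*% beta_true + rnorm(n))

ols_fit   <- lm(y ~ X - 1)                           # OLS: unstable under multicollinearity
lasso_fit <- glmnet(X, y, alpha = 1, lambda = 0.1)   # alpha = 1 selects the L1 (LASSO) penalty
cbind(OLS = coef(ols_fit), LASSO = as.numeric(coef(lasso_fit))[-1])
```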
Different extensions of the LASSO have been introduced. For example, the adaptive LASSO seeks to minimize
$$\hat{\beta}_{A.lasso} = \underset{\beta \in \mathbb{R}^p}{\arg\min} \; \|Y - X\beta\|_2^2 + k\sum_{j=1}^{p} \hat{\omega}_j |\beta_j|$$
where $k$ is the adaptive LASSO parameter, $\beta_j$ denotes the $j$th coefficient, and $\hat{\omega}_j$ is the adaptive weight, defined as
$$\hat{\omega}_j = \frac{1}{(|\hat{\beta}_j^{ini}|)^{\gamma}}$$
where $\hat{\beta}_j^{ini}$ refers to an initial estimate of the coefficients and $\gamma$ is a positive constant that adjusts the adaptive weights [10].
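For illustration, the adaptive weights can be supplied to glmnet through its penalty.factor argument, which rescales the L1 penalty applied to each coefficient. The sketch below continues with the X and y simulated above and assumes ridge coefficients as the initial estimates with gamma = 1; it is not the implementation used in this study.

```r
library(glmnet)

# Initial estimates beta^ini from a ridge fit (an assumption; OLS could also be used)
ridge_init <- glmnet(X, y, alpha = 0, lambda = 0.1)
beta_ini   <- as.numeric(coef(ridge_init))[-1]        # drop the intercept

gamma <- 1
w <- 1 / abs(beta_ini)^gamma                          # adaptive weights omega_j

# penalty.factor rescales the L1 penalty of each coefficient by its adaptive weight
alasso_fit <- glmnet(X, y, alpha = 1, penalty.factor = w)
coef(alasso_fit, s = 0.1)                             # coefficients at an illustrative tuning value
```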
Because the LASSO estimate is a nonlinear and non-differentiable function of the response values, it is difficult to obtain a reliable estimate of its standard error. One approach is the bootstrap: either $k$ can be fixed, or $k$ can be optimized over for each bootstrap sample. Fixing $k$ is analogous to selecting the best subset and then using the least squares standard error for that subset.
Although a closed-form solution for the LASSO estimator is not available because the solution is nonlinear in the response variable, an approximate closed-form estimate may be derived by writing the penalty $|\beta_j|$ as $\beta_j^2 / |\beta_j|$. Hence, at the LASSO estimate $\tilde{\beta}$, we may approximate the solution with a ridge regression of the form $\beta = (X'X + kW^{-})^{-1}X'y$, where $W$ is a diagonal matrix with diagonal elements $|\tilde{\beta}_j|$, $W^{-}$ denotes the generalized inverse of $W$, and $k$ is chosen so that the resulting $\sum_j |\tilde{\beta}_j|$ matches the required $L_1$ bound [4]. Thus, the LASSO estimation problem becomes a ridge estimation problem.
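The ridge-type approximation described above can be iterated until the coefficients stabilize, as in the following sketch. The function name, starting values, and convergence settings are illustrative assumptions rather than the authors' implementation.

```r
# Approximate LASSO via iteratively reweighted ridge: beta <- (X'X + k W^-)^(-1) X'y,
# where W = diag(|beta|) and the generalized inverse W^- zeroes entries with beta_j = 0.
lasso_ridge_approx <- function(X, y, k, n_iter = 100, tol = 1e-8) {
  beta <- solve(crossprod(X), crossprod(X, y))                # start from the OLS solution
  for (i in seq_len(n_iter)) {
    w_inv <- as.numeric(ifelse(abs(beta) > tol, 1 / abs(beta), 0))
    beta_new <- solve(crossprod(X) + k * diag(w_inv, ncol(X)), crossprod(X, y))
    if (max(abs(beta_new - beta)) < tol) { beta <- beta_new; break }
    beta <- beta_new
  }
  drop(beta)
}

lasso_ridge_approx(X, y, k = 0.5)   # X and y as in the earlier sketch
```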
To understand how the estimator is built, suppose that there exists an orthogonal matrix $D$ such that $\Lambda = D'CD$, where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_p)$ contains the eigenvalues of the matrix $C = X'X$. Then, the modified form of Equation (1) is
$$Y = X^{*}\alpha + \epsilon$$
where $X^{*} = XD$ and $\alpha = D'\beta$. Consequently, the generalized LASSO regression estimator can be written as
$$\hat{\alpha}(k) = (X^{*\prime}X^{*} + kW^{-})^{-1}X^{*\prime}Y = (I_p + kW^{-}(X^{*\prime}X^{*})^{-1})^{-1}\hat{\alpha}$$
where $k = \mathrm{diag}(k_1, k_2, k_3, \ldots, k_p)$ and $\hat{\alpha} = (X^{*\prime}X^{*})^{-1}X^{*\prime}Y$ is the OLS estimate of $\alpha$.
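The canonical quantities introduced above can be computed directly. The short sketch below, which again reuses the simulated X and y from the earlier sketches, obtains D, the eigenvalues of C, the rotated design matrix X*, and the OLS estimate of alpha.

```r
C      <- crossprod(X)                    # C = X'X
eig    <- eigen(C, symmetric = TRUE)
D      <- eig$vectors                     # orthogonal matrix of eigenvectors
lambda <- eig$values                      # eigenvalues lambda_1, ..., lambda_p
X_star <- X %*% D                         # rotated design matrix X*
alpha_hat <- drop(solve(crossprod(X_star), crossprod(X_star, y)))   # OLS estimate of alpha
```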

Modified LASSO Estimators

Using Equation (6), some new modified estimators are proposed in this section. In particular, we use different percentiles, such as the 5th percentile ($P_{05}$), the 25th percentile ($Q_1$), the 50th percentile ($Q_2$), the 75th percentile ($Q_3$), and the 90th percentile ($P_{90}$), of the vector $H_i = C_{max}/|\hat{\alpha}_i|$, where $C_{max}$ is the maximum eigenvalue of the matrix $C$ and $\hat{\alpha}_i$ is the $i$th element of $\hat{\alpha}$. For the vector $H_i$, the $q$th percentile $h$ is defined through $P(H < h) \le q/100$ or, equivalently, $P(H \ge h) \ge 1 - q/100$. An approximate value of the percentile is obtained through linear interpolation of the modes of the order statistics from the uniform distribution on $[0, 1]$, as the R function quantile() does [23]. Mathematically, one can use $h_{\lfloor\gamma\rfloor} + (\gamma - \lfloor\gamma\rfloor)(h_{\lceil\gamma\rceil} - h_{\lfloor\gamma\rfloor})$, where $\gamma = q(n-1)/100 + 1$, $\lfloor\cdot\rfloor$ denotes the largest integer not exceeding the specified value, $\lceil\cdot\rceil$ returns the smallest integer greater than or equal to the given number, and $n$ is the size of the vector $H$. The main reason for considering percentiles is their robustness against outliers. In particular, the following are the proposed modified estimators.
1: The first proposal to estimate k is IHS1, which is defined as
$$\mathrm{IHS1} = H_{P_{05}}$$
where $H_i = C_{max}/|\hat{\alpha}_i|$ and $P_{05}$ denotes the 5th percentile.
2: The second proposal is IHS2, defined as
$$\mathrm{IHS2} = H_{Q_1}$$
where $Q_1$ denotes the first quartile.
3: To estimate k, the next proposal is IHS3, defined as
$$\mathrm{IHS3} = H_{Q_2}$$
where $Q_2$ denotes the second quartile.
4: The fourth proposal is
$$\mathrm{IHS4} = H_{Q_3}$$
where $Q_3$ denotes the third quartile.
5: Next, we propose IHS5, which is defined as
$$\mathrm{IHS5} = H_{P_{90}}$$
where $P_{90}$ denotes the 90th percentile.
It is worth mentioning that many other types of shrinkage estimators can be considered for comparison purposes (e.g., [24]). However, we restricted the comparison of our proposed LASSO estimators to classical LASSO estimators which are widely used in the literature.
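A compact sketch of the five proposals is given below, continuing from the canonical-form computation above. It assumes the reading H_i = C_max/|alpha_hat_i| reconstructed in this section and relies on R's default quantile() interpolation; it is an illustration rather than the authors' code.

```r
C_max <- max(lambda)                        # largest eigenvalue of C = X'X
H     <- C_max / abs(alpha_hat)             # the vector H_i (assumed form)

# IHS1, ..., IHS5: the 5th, 25th, 50th, 75th, and 90th percentiles of H
k_IHS <- quantile(H, probs = c(0.05, 0.25, 0.50, 0.75, 0.90), names = FALSE)
names(k_IHS) <- c("IHS1", "IHS2", "IHS3", "IHS4", "IHS5")
k_IHS
```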

3. Simulation Settings

This section discusses a comprehensive simulation study in which the number of independent variables, the sample size, the correlation coefficient, and the residual variance were varied. For each simulation case, all covariates were centered and standardized to have a mean of zero and a standard deviation of one. The predictors were generated as follows:
$$X_{ij} = (1 - \rho^2)^{1/2} Z_{ij} + \rho Z_i$$
where $Z_{ij}$ represents random numbers generated from the standard normal distribution and $\rho$ is the correlation coefficient, set to high values to induce strong correlation among the predictors. We considered three different correlation coefficient values (i.e., ρ = 0.90, 0.95, and 0.99). In addition, to evaluate the effect of the sample size, different sample sizes such as n = 50, 100, and 150 were considered. Furthermore, we considered p = 4, 8, and 16 regressors with variances σ² = 1, 3, 5, 7, and 9 for the error terms to evaluate their effects. Thus, the errors were generated as follows:
1. $\epsilon_i$ followed the independent normal (0, 1);
2. $\epsilon_i$ followed the independent normal (0, 3);
3. $\epsilon_i$ followed the independent normal (0, 5);
4. $\epsilon_i$ followed the independent normal (0, 7);
5. $\epsilon_i$ followed the independent normal (0, 9).
To study the performance of the OLS, LASSO, adaptive LASSO, and proposed LASSO estimators, we computed the MSE using the following equation:
$$\mathrm{MSE} = \frac{\sum_{i=1}^{N} (\hat{\beta}_i - \beta)'(\hat{\beta}_i - \beta)}{N},$$
where $\hat{\beta}_i$ is the estimate of $\beta$ obtained by a given estimator in the $i$th replicate and $N$ is the number of replicates used in the Monte Carlo simulation. To achieve a reliable estimate, the simulation studies were repeated N = 2000 times, so that 2000 squared-error terms were computed, one for every replication.
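For concreteness, the following sketch runs one simulation cell for the OLS estimator only. The specific settings (rho = 0.95, n = 100, p = 4, sigma = 3), the shared standard normal column used for Z_i, and the true coefficient vector of ones are illustrative assumptions; the full study evaluates the LASSO-type estimators in the same loop.

```r
set.seed(123)
n <- 100; p <- 4; rho <- 0.95; sigma <- 3
beta <- rep(1, p)        # assumed true coefficients
N <- 2000                # Monte Carlo replicates
sq_err <- numeric(N)

for (r in seq_len(N)) {
  Z <- matrix(rnorm(n * (p + 1)), n, p + 1)
  X <- sqrt(1 - rho^2) * Z[, 1:p] + rho * Z[, p + 1]   # correlated predictors
  X <- scale(X)                                        # center and standardize
  y <- drop(X %*% beta + rnorm(n, sd = sigma))
  beta_hat <- solve(crossprod(X), crossprod(X, y))     # OLS estimate
  sq_err[r] <- sum((beta_hat - beta)^2)                # (beta_hat - beta)'(beta_hat - beta)
}
mean(sq_err)   # Monte Carlo MSE of the OLS estimator for this cell
```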

Simulation Table Settings

Different combinations of varying values for ρ , n , and σ 2 are considered in Table 1, Table 2 and Table 3, with p = 4. In particular, high values for the correlation coefficient were considered (i.e., ρ = 0.90, 0.95, and 0.99). To assess the effect of the sample sizes, n = 50, 100, and 150 were considered. Furthermore, the error variances σ 2 = 1, 3, 5, 7, and 9 were used to evaluate their effect on the proposed estimator performance.
In Table 4, Table 5 and Table 6, the values of ρ , n , and σ 2 were the same as those used in the first case. However, the number of variables was increased to p = 8 to assess the effect of the number of variables on the simulation studies. Table 7, Table 8 and Table 9 report the results when considering the number of explanatory variables to be 16 (i.e., p = 16). The choices for other variables remained the same as those used in the first two cases.

4. Simulation Results

The simulation results for the proposed estimators, along with some existing estimators (OLS, LASSO, and adaptive LASSO), are given in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. Table 1 reports the results for the different estimators considering ρ = 0.90, 0.95, and 0.99, n = 50, p = 4, and σ = 1, 3, 5, 7, and 9. From these results, it is evident that the proposed estimators outperformed the OLS and existing LASSO estimators. The poor performance of the OLS estimator was evident, as it produced the highest MSE values. Furthermore, one can see that when ρ = 0.90 and σ = 1, 3, 5, or 7, the proposed estimator IHS1 performed relatively well, whereas when σ = 9, IHS2 outperformed all other estimators. For ρ = 0.90, the smallest obtained MSE value was 0.336, which was obtained by IHS1. In the case of ρ = 0.95, when σ = 1, 3, 5, or 7, IHS1 outperformed all other estimators, whereas for σ = 9, IHS3 performed better than the other estimators. By increasing the correlation coefficient ρ to 0.99, IHS1 performed relatively well compared with all other estimators, especially for small values of σ. As the value of σ increased, IHS2 and IHS3 outperformed the other estimators. Finally, as σ increased, the MSEs of the proposed estimators increased far less than those of the OLS and the existing LASSO estimators.
Table 2 lists the results for ρ = 0.90, 0.95, and 0.99, p = 4, and σ = 1, 3, 5, 7, and 9, with n = 100. It suggests that as the sample size increased, the proposed estimator IHS1 outperformed all other estimators. As σ increased, the MSEs increased for the OLS and existing LASSO estimators, whereas they decreased for the proposed LASSO estimators. The smallest MSE values for ρ = 0.90, 0.95, and 0.99 were 0.306, 0.239, and 0.151, respectively, all of which were produced by the IHS1 proposal. Note that the existing estimators produced relatively large MSE values, especially for large values of σ. The table also shows the OLS estimator’s poor performance.
Table 3 reports the results for different estimators used in the study when considering the same values for ρ , p, and σ with n = 150. This table shows that the proposed estimator IHS1 uniformly outperformed all other estimators. Note that for all combinations of different parameters, IHS1, IHS2, and IHS3 were the best three estimators, indicating the good performance of the proposed estimators. The MSEs decreased with the increase in the value of σ for the proposed estimators. However, they increased for the existing LASSO and the OLS estimators. Furthermore, the existing LASSO and the OLS estimators produced significantly large MSEs compared with the proposed estimators.
The number of regressors in the simulation study was increased from p = 4 to p = 8 to assess the effect of the number of explanatory variables; the results are listed in Table 4, Table 5 and Table 6. The parameter values ρ = 0.90, 0.95, and 0.99 and σ = 1, 3, 5, 7, and 9 were considered in these tables, while the sample size varied by table, with Table 4, Table 5, and Table 6 considering n = 50, 100, and 150, respectively. According to these tables, the proposed estimators produced significantly lower MSEs than the existing estimators. As previously noted, the proposed estimator IHS1 outperformed all others by producing extremely low MSEs. In comparison with the earlier outcomes, the proposed estimators IHS4 and IHS5 improved their rankings. This suggests that as p and n increased, the proposed estimators were more accurate and reliable than the existing LASSO and OLS estimators. Furthermore, the poor performance of the OLS estimators is evident in these tables, demonstrating that they are not suitable in the presence of multicollinearity.
To further evaluate the effectiveness of the proposals, the number of regressors was increased from 8 to 16, and the results are shown in Table 7, Table 8 and Table 9. In Table 7, ρ = 0.90, 0.95, and 0.99, n = 50, p = 16, and σ = 1, 3, 5, 7, and 9 were considered. The same parameter values were used in Table 8 with n = 100. According to these tables, the proposed estimators outperformed the existing LASSO and OLS estimators. The first five rankings in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 were attained by the five proposals, which shows that they provided superior results even as the number of regressors grew. IHS1 produced the best results, and as the sample size increased, the corresponding MSEs decreased.

5. Real Data Application

This section provides illustrative examples to assess the performance of the proposed estimators in real life.

5.1. Case Study 1

The cruise-ship-info data, freely available from the University of Florida website (http://users.stat.ufl.edu/~winner/datasets.html, accessed on 15 May 2021), were utilized to evaluate the performance of the suggested estimators in a real-life scenario. The data contain six predictors: tonnage (weight of the ship), passengers (passengers on board, in 100s), cabins (number of cabins, in 100s), length (length of the ship, in 100s), pasgrden (passenger density), and age (age as of 2013), together with an outcome variable (crew). Some descriptive statistics of the data are given in Table 10. From the table, one can see that the variables were on different scales. For example, the age variable ranged from 4 to 48 years, while the tonnage variable ranged from 2.32 to 220. Thus, the variables were standardized before computing the results for the different estimators.
Pearson’s correlation coefficient was calculated for each pair of variables (Table 11) in order to examine the correlation among the predictors. The majority of the variables were highly intercorrelated, as can be seen from the 0.95 correlation between passengers and tonnage. Similarly, there was a 0.98 correlation between passengers and cabins. These findings indicate that the data had a significant multicollinearity issue. The data set consists of the dependent variable Y, “crew” (the number of crew members), and six independent variables: x1 (age as of 2013), x2 (weight of the ship in tons), x3 (passengers on board), x4 (length of the ship), x5 (number of cabins), and x6 (passenger density).
Thus, for this data set, the following model was considered:
$$Y_i = \sum_{j=1}^{6} \beta_j x_{ij} + \epsilon_i$$
The MSEs of the proposed estimators were computed using the following equation:
$$\mathrm{MSE}(\hat{\beta}) = \sigma \sum_{i=1}^{p} \frac{\lambda_i}{(\lambda_i + k_i)^2} + \sum_{i=1}^{p} \frac{\hat{k}_i^{2}\,\lambda_i^{2}}{(\lambda_i + k_i)^2}$$
where $\hat{k}$ = IHS1, ⋯, IHS5 and $\lambda_i$ is the $i$th eigenvalue.
The results are listed in Table 12, suggesting that the existing as well as the proposed LASSO estimators performed well compared with the OLS estimator. Note that the proposed estimators IHS1, IHS2, and IHS3 outperformed the competitors. The smallest MSE of 0.985 was obtained with IHS1, followed by IHS2 and IHS3.

5.2. Case Study 2

To check the performance of the proposed LASSO estimators, the Body Fat Prediction data set was used. This data set was generously supplied by Dr. A. Garth Fisher, who permitted freely distributing the data and using them for non-commercial purposes (https://www.kaggle.com/datasets, accessed on 20 December 2022). Out of the 14 human body predictors in the data, 9 were found to be the most highly correlated with the outcome variable, the body fat percentage: weight (in lbs) and the neck, chest, abdomen, hip, thigh, knee, biceps, and wrist circumferences (in cm). The variables in this data set were on different scales, as shown by the descriptive statistics presented in Table 13, so they were normalized before the analysis.
Table 14 shows the Pearson’s correlation coefficient values for each pair of variables to assess the degree of linear association between them. Most of the variables were significantly intercorrelated. For example, the correlation coefficient between weight and hip was 0.94, while that between abdomen and chest was 0.916, indicating that the data had a multicollinearity problem. The variables in the data set are as follows:
Y: Body fat;
x1: Weight;
x2: Neck circumference;
x3: Chest circumference;
x4: Abdomen circumference;
x5: Hip circumference;
x6: Thigh circumference;
x7: Knee circumference;
x8: Biceps circumference;
x9: Wrist circumference.
The proposed estimators and their competitors were computed for these data, and the results are summarized in Table 15 using the MSE criterion defined in Equation (14). From this table, it is evident that both the existing and the proposed estimators had much smaller MSEs than the OLS estimator. In addition, the proposed estimators IHS1 and IHS2 outperformed their competitors.

6. Conclusions

The multicollinearity issue in linear regression occurs when the regressors have a high degree of correlation. In its presence, the OLS estimators are unstable and do not produce accurate estimates. To resolve this concern, penalized regression approaches such as the LASSO are extensively used. This work provided five new estimators for the LASSO parameter k. The estimators’ performance was affected by the standard deviation of the random error (σ), the correlation between the explanatory variables (ρ), the sample size (n), and the number of variables (p). To evaluate the effectiveness of the estimators using the MSE criterion, we conducted comprehensive Monte Carlo simulation studies and analyzed real data sets. The findings revealed that the OLS estimator performed poorly when there was substantial predictor correlation.
The findings revealed that, in terms of the MSE, the recommended estimators consistently outperformed the OLS estimator, the conventional LASSO, and the adaptive LASSO. The MSE decreased as the sample size increased, even for high values of the correlation coefficient and σ. Furthermore, in both the simulation studies and the real-world data cases, the suggested estimator IHS1 outperformed all other estimators. On the other hand, the MSEs of the existing estimators increased as the number of variables (p), the error variance (σ²), and the correlation coefficient (ρ) between the independent variables increased.
The simulation and real data analyses led to the conclusion that the quantile-based LASSO parameter estimates had lower mean square errors than those of the OLS and other regularization-type estimators. This work can be expanded in the future by assuming a multivariate response variable. Further research could also examine how well the suggested estimators perform when the response variable follows another distribution from the exponential family.

Author Contributions

Conceptualization, I.S.; Methodology, A.A.; Software, H.N.; Validation, I.S. and S.A.; Formal analysis, H.N.; Investigation, H.N.; Resources, S.A. and S.A.L.; Writing-original draft, I.S.; Writing—review & editing, S.A., A.A. and S.A.L.; Supervision, I.S.; Funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research at Qassim University.

Data Availability Statement

Data sources are mentioned within the manuscript.

Acknowledgments

The researchers would like to thank the Deanship of Scientific Research at Qassim University for funding the publication of this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Montgomery, D.C.; Peck, E.A.; Vining, G.G. Introduction to Linear Regression Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2015.
  2. Shah, I.; Sajid, F.; Ali, S.; Rehman, A.; Bahaj, S.A.; Fati, S.M. On the performance of jackknife based estimators for ridge regression. IEEE Access 2021, 9, 68044–68053.
  3. Hoerl, A.E.; Kennard, R.W. Ridge regression: Applications to nonorthogonal problems. Technometrics 1970, 12, 69–82.
  4. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
  5. Ali, S.; Khan, H.; Shah, I.; Butt, M.M.; Suhail, M. A comparison of some new and old robust ridge regression estimators. Commun. Stat. Simul. Comput. 2021, 50, 2213–2231.
  6. Breiman, L. Better subset regression using the nonnegative garrote. Technometrics 1995, 37, 373–384.
  7. Fu, W.J. Penalized regressions: The bridge versus the lasso. J. Comput. Graph. Stat. 1998, 7, 397–416.
  8. Frank, L.E.; Friedman, J.H. A statistical view of some chemometrics regression tools. Technometrics 1993, 35, 109–135.
  9. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67, 301–320.
  10. Zou, H. The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
  11. Tibshirani, R.; Saunders, M.; Rosset, S.; Zhu, J.; Knight, K. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67, 91–108.
  12. Yuan, M.; Lin, Y. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2006, 68, 49–67.
  13. Zou, H.; Hastie, T.; Tibshirani, R. On the “degrees of freedom” of the lasso. Ann. Stat. 2007, 35, 2173–2192.
  14. Belloni, A.; Chernozhukov, V.; Wang, L. Square-root lasso: Pivotal recovery of sparse signals via conic programming. Biometrika 2011, 98, 791–806.
  15. Osborne, M.R.; Presnell, B.; Turlach, B.A. On the lasso and its dual. J. Comput. Graph. Stat. 2000, 9, 319–337.
  16. Knight, K.; Fu, W. Asymptotics for lasso-type estimators. Ann. Stat. 2000, 28, 1356–1378.
  17. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
  18. Huang, F. Prediction error property of the lasso estimator and its generalization. Aust. N. Z. J. Stat. 2003, 45, 217–228.
  19. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
  20. Meinshausen, N.; Bühlmann, P. High-dimensional graphs and variable selection with the lasso. Ann. Stat. 2006, 34, 1436–1462.
  21. Zbonakova, L.; Härdle, W.K.; Wang, W. Time Varying Quantile Lasso; SFB 649 Discussion Paper (047); SSRN, 2016. Available online: https://ssrn.com/abstract=2865608 (accessed on 12 March 2022).
  22. Herawati, N.; Nisa, K.; Setiawan, E.; Nusyirwan, N.; Tiryono, T. Regularized multiple regression methods to deal with severe multicollinearity. Int. J. Stat. Appl. 2018, 8, 167–172.
  23. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021.
  24. Xiao, N.; Xu, Q.S. Multi-step adaptive elastic-net: Reducing false positives in high-dimensional variable selection. J. Stat. Comput. Simul. 2015, 85, 3755–3765.
Table 1. Estimated MSEs for n = 50 and p = 4 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.507 8 0.837 5 0.836 4 0.545 1 0.654 2 0.77 3 0.865 6 0.896 7
3 4.513 8 0.953 7 0.952 6 0.401 1 0.469 2 0.566 3 0.702 4 0.759 5
5 12.55 8 2.003 7 1.98 6 0.336 1 0.404 2 0.496 3 0.625 4 0.685 5
7 23.851 8 4.089 6 4.319 7 0.367 1 0.395 2 0.467 3 0.5892 4 0.6516 5
9 39.426 8 7.738 6 7.652 7 0.53 5 0.478 4 0.486 3 0.574 2 0.629 1
0.95
1 1.057 8 0.837 6 0.86 5 0.492 1 0.57 2 0.692 3 0.811 4 0.853 7
3 9.394 8 0.953 7 0.95 6 0.313 1 0.392 2 0.491 3 0.632 4 0.694 5
5 26.174 8 1.988 7 1.984 6 0.262 1 0.327 2 0.423 3 0.558 4 0.623 5
7 49.606 8 4.074 6 4.312 7 0.32 1 0.335 2 0.403 3 0.528 4 0.595 5
9 82.138 8 7.695 7 7.606 6 0.514 3 0.441 2 0.434 1 0.519 4 0.581 5
0.99
1 5.817 8 0.837 7 0.836 6 0.336 1 0.417 2 0.516 3 0.658 4 0.719 5
3 51.569 8 0.953 7 0.95 6 0.153 1 0.223 2 0.338 3 0.488 4 0.561 5
5 144.17 8 1.998 6 2.011 7 0.166 1 0.198 2 0.3286 3 0.429 4 0.515 5
7 272.194 8 4.071 6 4.322 7 0.295 2 0.265 1 0.299 3 0.414 4 0.499 5
9 451.804 8 7.644 6 7.66 7 0.564 5 0.438 3 0.374 1 0.431 2 0.505 4
Table 2. Estimated MSEs for n = 100 and p = 4 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.26 8 0.871 4 0.871 4 0.559 1 0.704 2 0.816 3 0.897 6 0.922 7
3 2.239 8 0.82 6 0.822 7 0.443 1 0.51 2 0.618 3 0.749 4 0.8 5
5 6.774 8 1.09 7 1.087 6 0.349 1 0.425 2 0.526 3 0.656 4 0.714 5
7 12.69 8 1.949 7 1.868 6 0.306 1 0.375 2 0.468 3 0.6 4 0.663 5
9 21.761 8 3.402 6 3.43 7 0.318 1 0.362 2 0.446 3 0.576 4 0.638 5
0.95
1 1.54 8 0.871 6 0.87 5 0.515 1 0.624 2 0.749 3 0.857 4 0.891 7
3 4.655 8 0.882 6 0.822 7 0.369 1 0.444 2 0.545 3 0.682 4 0.74 5
5 13.998 8 1.084 7 1.081 6 0.271 1 0.354 2 0.458 3 0.591 4 0.652 5
7 25.986 8 1.952 7 1.871 6 0.239 1 0.305 2 0.401 3 0.537 4 0.603 5
9 44.175 8 3.43 7 3.41 6 0.269 1 0.303 2 0.385 3 0.517 4 0.584 5
0.99
1 3.114 8 0.871 7 0.87 6 0.394 1 0.466 2 0.575 3 0.723 4 0.781 5
3 28.016 8 0.821 6 0.821 6 0.197 1 0.283 2 0.393 3 0.536 4 0.602 5
5 73.613 8 1.088 7 1.084 6 0.139 1 0.204 2 0.312 3 0.453 4 0.53 5
7 145.00 8 1.988 7 1.925 6 0.151 1 0.182 2 0.271 3 0.412 4 0.497 5
9 242.362 8 3.423 7 3.415 6 0.225 2 0.221 1 0.275 3 0.406 4 0.492 5
Table 3. Estimated MSEs for n = 150 and p = 4 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.164 8 0.885 4 0.889 5 0.589 1 0.751 2 0.855 3 0.918 6 0.938 7
3 1.6 8 0.807 6 0.809 5 0.467 1 0.538 2 0.66 3 0.787 4 0.833 7
5 4.601 8 0.921 7 0.913 6 0.387 1 0.462 2 0.565 3 0.698 4 0.754 5
7 8.709 8 1.273 6 1.283 7 0.337 1 0.416 2 0.513 3 0.644 4 0.703 5
9 14.255 8 2.076 6 2.166 7 0.324 1 0.39 2 0.483 3 0.612 4 0.672 5
0.95
1 1.342 8 0.885 5 0.887 6 0.541 1 0.675 2 0.798 3 0.884 4 0.911 7
3 3.454 8 0.81 7 0.809 6 0.396 1 0.47 2 0.582 3 0.721 4 0.776 5
5 9.203 8 0.917 7 0.912 6 0.309 1 0.395 2 0.496 3 0.634 4 0.695 5
7 18.187 8 1.296 7 1.269 6 0.264 1 0.347 2 0.447 3 0.584 4 0.649 5
9 29.684 8 2.095 6 2.146 7 0.25 1 0.311 2 0.407 3 0.545 4 0.611 5
0.99
1 1.882 8 0.885 6 0.888 7 0.431 1 0.507 2 0.631 3 0.768 4 0.818 5
3 19.02 8 0.81 7 0.809 6 0.223 1 0.319 2 0.428 3 0.571 4 0.634 5
5 50.485 8 0.916 7 0.914 4 0.156 1 0.234 2 0.346 3 0.494 4 0.568 5
7 100.005 8 1.302 7 1.327 6 0.143 1 0.2 2 0.304 3 0.453 4 0.533 5
9 162.632 8 2.14 6 2.166 7 0.17 1 0.195 2 0.278 3 0.42 4 0.506 5
Table 4. Estimated MSEs for n = 50 and p = 8 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.259 8 0.92 5 0.92 5 0.584 1 0.719 2 0.825 3 0.902 4 0.948 7
3 11.293 8 2.2 6 2.216 7 0.379 1 0.498 2 0.611 3 0.749 4 0.854 5
5 31.635 8 5.745 6 5.983 7 0.274 1 0.39 2 0.517 3 0.659 4 0.787 5
7 61.484 8 13.696 7 13.604 6 0.248 1 0.33 2 0.453 3 0.603 4 0.746 5
9 102.696 8 24.942 6 25.517 7 0.292 1 0.31 2 0.408 3 0.554 4 0.699 5
0.95
1 2.609 8 0.92 5 0.92 5 0.531 1 0.646 2 0.768 3 0.867 4 0.928 7
3 23.297 8 2.202 6 2.22 7 0.289 1 0.419 2 0.542 3 0.685 4 0.809 5
5 65.454 8 5.73 6 5.899 7 0.202 1 0.305 2 0.443 3 0.597 4 0.735 5
7 127.255 8 13.663 7 13.366 6 0.196 1 0.257 2 0.377 3 0.54 4 0.694 5
9 213.179 8 24.882 6 25.517 7 0.261 2 0.254 1 0.337 3 0.492 4 0.647 5
0.99
1 14.555 8 0.921 7 0.92 6 0.354 1 0.489 2 0.604 3 0.745 4 0.854 5
3 126.052 8 2.202 6 2.215 7 0.126 1 0.223 2 0.364 3 0.534 4 0.686 5
5 354.113 8 5.737 6 5.906 7 0.101 1 0.152 2 0.261 3 0.447 4 0.616 5
7 690.155 8 13.658 7 13.46 6 0.142 1 0.148 2 0.22 3 0.393 4 0.578 5
9 1159.25 8 24.793 6 25.515 7 0.251 3 0.192 1 0.214 3 0.353 4 0.535 5
Table 5. Estimated MSEs for n = 100 and p = 8 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.59 8 0.892 4 0.892 4 0.647 1 0.799 2 0.878 3 0.932 6 0.963 7
3 5.383 8 1.411 7 1.394 6 0.48 1 0.586 2 0.699 3 0.82 4 0.899 5
5 15.186 8 2.833 6 2.933 7 0.362 1 0.488 2 0.6 3 0.733 4 0.844 5
7 28.961 8 5.68 6 5.704 7 0.298 1 0.413 2 0.538 3 0.68 4 0.805 5
9 47.988 8 10.808 7 10.689 6 0.278 1 0.37 2 0.496 3 0.641 4 0.775 5
0.95
1 1.249 8 0.893 5 0.892 4 0.594 1 0.733 2 0.833 3 0.906 6 0.949 7
3 11.083 8 1.414 7 1.396 6 0.392 1 0.516 2 0.626 3 0.761 4 0.864 5
5 31.426 8 2.82 6 2.926 7 0.271 1 0.401 2 0.528 3 0.668 4 0.797 5
7 60.46 8 5.62 6 5.657 7 0.219 1 0.324 2 0.462 3 0.615 4 0.751 5
9 100.228 8 10.645 7 10.443 6 0.214 1 0.288 2 0.417 3 0.576 4 0.723 5
0.99
1 6.778 8 0.892 5 0.892 5 0.45 1 0.567 2 0.682 3 0.807 4 0.892 5
3 62.921 8 1.414 7 1.394 6 0.19 1 0.313 2 0.457 3 0.609 4 0.75 2
5 174.77 8 2.818 6 2.913 7 0.118 1 0.205 2 0.34 3 0.517 4 0.673 5
7 331.907 8 5.593 6 5.61 7 0.111 1 0.161 2 0.276 3 0.463 4 0.629 5
9 550.375 8 10.651 7 10.467 6 0.14 1 0.159 2 0.245 3 0.424 4 0.602 5
Table 6. Estimated MSEs for n = 150 and p = 8 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.366 8 0.893 4 0.892 3 0.674 1 0.832 2 0.897 5 0.943 6 0.97 7
3 3.209 8 1.166 6 1.179 7 0.525 1 0.628 2 0.744 3 0.85 4 0.918 5
5 9.098 8 2.108 7 2.04 6 0.415 1 0.526 2 0.632 3 0.769 4 0.869 5
7 17.78 8 3.679 6 3.755 7 0.34 1 0.458 2 0.571 3 0.708 4 0.827 5
9 30.61 8 6.778 7 6.62 6 0.31 1 0.407 2 0.528 3 0.668 4 0.793 5
0.95
1 2.761 8 0.893 4 0.893 4 0.621 1 0.772 2 0.859 3 0.921 6 0.958 7
3 6.662 8 1.166 6 1.178 7 0.45 1 0.559 2 0.672 3 0.798 4 0.887 5
5 18.899 8 2.103 7 2.041 6 0.322 1 0.446 2 0.561 3 0.704 4 0.825 5
7 36.954 8 3.655 6 3.749 7 0.253 1 0.389 2 0.498 3 0.642 4 0.777 5
9 63.836 8 6.678 7 6.611 6 0.225 1 0.32 1 0.452 3 0.603 4 0.739 5
0.99
1 4.156 8 0.893 5 0.893 5 0.493 1 0.604 2 0.724 3 0.836 4 0.91 7
3 36.321 8 1.165 6 1.175 7 0.243 1 0.373 2 0.506 3 0.642 4 0.787 5
5 103.124 8 2.099 7 2.043 6 0.145 1 0.243 2 0.383 3 0.555 4 0.701 5
7 201.838 8 3.646 6 3.742 7 0.117 1 0.187 2 0.311 3 0.492 4 0.653 5
9 350.325 8 6.595 7 6.576 6 0.123 1 0.166 2 0.271 3 0.451 4 0.617 5
Table 7. Estimated MSEs for n = 50 and p = 16 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 3.189 8 1.322 6 1.335 7 0.661 1 0.801 2 0.88 3 0.937 4 0.968 5
3 28.421 8 6.363 7 6.296 6 0.423 1 0.588 2 0.706 3 0.831 4 0.913 5
5 79.64 8 17.32 7 17.12 6 0.292 1 0.457 2 0.605 3 0.748 4 0.861 5
7 154.10 8 35.307 6 36.952 7 0.232 1 0.368 2 0.526 3 0.688 4 0.817 5
9 260.76 8 65.50 6 67.83 6 0.223 1 0.319 2 0.466 3 0.644 4 0.787 5
0.95
1 6.595 8 1.325 6 1.337 7 0.607 1 0.738 2 0.838 3 0.913 4 0.956 5
3 59.38 8 6.368 7 6.303 6 0.325 1 0.5 2 0.639 3 0.777 4 0.883 5
5 177.82 8 17.366 7 16.935 6 0.221 1 0.361 2 0.523 3 0.687 4 0.81 5
7 324.27 8 35.41 6 36.04 7 0.168 1 0.28 2 0.435 3 0.622 4 0.76 5
9 547.89 8 65.03 6 66.54 7 0.172 1 0.241 2 0.374 3 0.574 4 0.735 5
0.99
1 36.83 8 1.32 6 1.33 7 0.402 1 0.577 2 0.696 3 0.822 4 0.908 5
3 325.56 8 6.35 7 6.34 6 0.142 1 0.27 2 0.44 3 0.63 4 0.79 5
5 924.56 8 17.18 7 17.11 6 0.088 1 0.172 2 0.305 3 0.521 4 0.697 5
7 1786.94 8 35.21 6 35.57 7 0.085 1 0.132 2 0.233 3 0.438 4 0.643 5
9 3014.15 8 64.79 6 65.73 7 0.119 1 0.128 2 0.197 3 0.386 4 0.607 5
Table 8. Estimated MSEs for n = 100 and p = 16 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 1.414 8 1.073 6 1.075 7 0.719 1 0.857 2 0.915 3 0.956 4 0.978 5
3 12.569 8 3.461 7 3.427 6 0.53 1 0.667 2 0.776 3 0.87 4 0.937 5
5 36.138 8 8.735 7 8.69 6 0.391 1 0.553 2 0.676 3 0.807 4 0.898 5
7 89.506 8 16.649 6 16.773 7 0.309 1 0.466 2 0.609 3 0.752 4 0.866 5
9 117.964 8 28.381 6 29.311 7 0.263 1 0.402 2 0.554 3 0.707 4 0.835 5
0.95
1 2.975 8 1.074 6 1.076 7 0.665 1 0.807 2 0.883 3 0.939 4 0.969 5
3 26.413 8 3.458 7 3.414 6 0.439 1 0.598 2 0.711 3 0.832 4 0.914 5
5 76.136 8 8.706 7 8.684 6 0.294 1 0.45 2 0.605 3 0.749 4 0.862 5
7 146.495 8 16.637 6 16.751 7 0.224 1 0.369 2 0.528 3 0.69 4 0.824 5
9 248.853 8 28.07 6 28.991 7 0.19 1 0.31 2 0.465 3 0.643 4 0.788 5
0.99
1 16.51 8 1.074 6 1.075 7 0.051 1 0.65 2 0.767 3 0.872 4 0.934 5
3 146.35 8 3.455 7 3.417 6 0.21 1 0.379 2 0.54 3 0.696 4 0.827 5
5 423.38 8 8.729 7 8.7 6 0.124 1 0.24 2 0.396 3 0.597 4 0.748 5
7 815.47 8 16.623 6 16.918 7 0.094 1 0.177 2 0.316 3 0.527 4 0.703 5
9 1386.55 8 27.87 6 28.647 7 0.091 1 0.147 2 0.256 3 0.466 4 0.663 5
Table 9. Estimated MSEs for n = 150 and p = 16 (superscript represents ranking).
ρ   σ   OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
0.9
1 2.92 8 1.006 6 1.00 7 0.756 1 0.884 2 0.931 3 0.964 4 0.982 5
3 8.54 8 2.496 7 2.45 6 0.589 1 0.709 2 0.811 3 0.898 4 0.948 5
5 22.93 8 5.889 7 5.884 6 0.464 1 0.611 2 0.722 3 0.84 4 0.916 5
7 45.20 8 11.475 7 11.2 6 0.367 1 0.528 2 0.655 3 0.837 4 0.886 5
9 75.18 8 18.701 6 18.7 7 0.312 1 0.462 2 0.603 3 0.745 4 0.861 5
0.95
1 1.94 8 1.006 6 1.008 7 0.699 1 0.824 2 0.904 3 0.949 4 0.975 5
3 17.98 8 2.491 7 2.44 6 0.502 1 0.642 2 0.75 3 0.861 4 0.929 5
5 48.14 8 5.864 6 5.88 7 0.362 1 0.525 2 0.654 3 0.787 4 0.886 5
7 94.84 8 11.5 7 11.18 6 0.272 1 0.431 2 0.58 3 0.727 4 0.847 5
9 157.59 8 18.649 6 18.65 7 0.227 1 0.364 2 0.519 3 0.683 4 0.817 5
0.99
1 10.75 8 1.006 6 1.00 7 0.573 1 0.698 2 0.804 3 0.892 4 0.945 5
3 99.74 8 2.487 7 2.44 6 0.272 1 0.438 2 0.487 3 0.734 4 0.853 5
5 266.21 8 5.856 6 5.88 7 0.165 1 0.298 2 0.458 3 0.64 4 0.783 5
7 523.84 8 11.428 7 11.18 6 0.15 1 0.219 2 0.365 3 0.58 4 0.728 5
9 869.07 8 18.594 6 18.60 7 0.10 1 0.176 2 0.301 3 0.515 4 0.697 5
Table 10. Descriptive statistics for cruise-ship-info data.
        Age     Tonnage  Passengers  Length  Cabins  Passenger Density  Crew
Min     4.00    2.32     0.66        2.79    0.33    17.70              0.59
Median  14.00   71.89    19.50       8.56    9.57    39.09              8.15
Mean    15.00   71.29    18.46       8.13    8.83    44.19              7.79
Max     48.00   220.00   54.00       11.82   27.00   71.43              21.00
Table 11. Correlation matrix for cruise-ship-info data.
                   Age    Tonnage  Passengers  Length  Cabins  Passenger Density  Crew
Age                1.00   −0.61    −0.52       −0.53   −0.51   −0.28              −0.53
Tonnage            −0.61  1.00     0.95        0.92    0.95    −0.04              0.93
Passengers         −0.52  0.95     1.00        0.88    0.98    −0.29              0.92
Length             −0.53  0.92     0.88        1.00    0.89    −0.09              0.90
Cabins             −0.51  0.95     0.98        0.98    1.00    −0.25              0.95
Passenger density  −0.28  −0.04    −0.29       −0.09   −0.25   1.00               −0.16
Crew               −0.53  0.93     0.92        0.90    0.95    −0.16              1.00
Table 12. Mean square errors for different estimators for cruise-ship-info data (superscript represents ranking).
OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
1.985 8 1.002 4 1.004 6 0.985 1 0.988 2 1.001 3 1.010 7 1.003 5
Table 13. Descriptive statistics for Body Fat Prediction data.
        Weight  Neck   Chest   Abdomen  Hip     Thigh  Knee   Biceps  Wrist  Body Fat
Min     118.50  31.10  79.30   69.40    85.00   47.20  33.00  24.80   15.80  0.00
Median  176.50  38.00  99.65   90.95    99.30   59.00  38.50  32.05   18.30  19.20
Mean    178.92  37.99  100.82  92.56    99.90   59.41  38.59  32.27   18.23  19.15
Max     363.15  51.20  136.20  148.10   147.70  87.30  49.10  45.00   21.40  47.50
Table 14. Correlation matrix for Body Fat Prediction data.
          Weight  Neck   Chest  Abdomen  Hip    Thigh  Knee   Biceps  Wrist  Body Fat
Weight    1.000   0.831  0.894  0.888    0.941  0.869  0.853  0.800   0.730  0.612
Neck      0.831   1.000  0.785  0.754    0.735  0.696  0.672  0.731   0.745  0.491
Chest     0.894   0.785  1.000  0.916    0.829  0.730  0.719  0.728   0.660  0.703
Abdomen   0.888   0.754  0.916  1.000    0.874  0.767  0.737  0.685   0.620  0.813
Hip       0.941   0.735  0.829  0.874    1.000  0.896  0.823  0.739   0.630  0.625
Thigh     0.869   0.696  0.730  0.767    0.896  1.000  0.799  0.761   0.559  0.560
Knee      0.853   0.672  0.719  0.737    0.823  0.799  1.000  0.679   0.665  0.509
Biceps    0.800   0.731  0.728  0.685    0.739  0.761  0.679  1.000   0.632  0.493
Wrist     0.730   0.745  0.660  0.620    0.630  0.559  0.665  0.632   1.000  0.347
Body fat  0.612   0.491  0.703  0.813    0.625  0.560  0.509  0.493   0.347  1.000
Table 15. Mean square error for different estimators for Body Fat Prediction data (superscript represents ranking).
OLS   LASSO   A. LASSO   IHS1   IHS2   IHS3   IHS4   IHS5
11.781 8 1.151 7 1.116 3 0.948 1 0.968 2 1.146 5 1.138 4 1.149 6