Article
Peer-Review Record

Variable Selection and Allocation in Joint Models via Gradient Boosting Techniques

Mathematics 2023, 11(2), 411; https://doi.org/10.3390/math11020411
by Colin Griesbach 1,*, Andreas Mayr 2 and Elisabeth Bergherr 1
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 28 November 2022 / Revised: 5 January 2023 / Accepted: 9 January 2023 / Published: 12 January 2023
(This article belongs to the Special Issue Recent Advances in Computational Statistics)

Round 1

Reviewer 1 Report

The paper deals with an exciting topic, and the results obtained by the authors sound promising. However, the actual contribution of the paper needs to be demonstrated more clearly, since the comparative study does not include enough comparisons with recent approaches to the same problem. The reviewer therefore invites the authors to compare their algorithm against more recent state-of-the-art methods.

Below are a few comments and suggestions on the attached paper.

1 - The paper lacks a "related work" section. I suggest the authors perform an extensive literature review of prior work.
2 - I suggest the authors add an illustrative example. It will clarify the idea of the paper.
3 - Adding a complexity analysis of the proposed algorithm will surely be an asset.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper introduces a technique that utilizes gradient boosting for distributional regression to build a mechanism for assigning covariates to a single sub-predictor in a joint manner. Their approach combines non-cyclical updating with step-length adaptation. Here are a few points that came to mind:

- What is the significance of this method over its counterparts, especially Bayesian techniques? Bayesian techniques have shown very promising results in this context.

- How does the introduced method scale, and what are its computational costs?

- How are the hyperparameters learned?

- Does this algorithm converge? If so, what is its non-asymptotic behavior, and what is the convergence rate?

- Regarding the simulations, I am uncertain how this method would work if the data were very high-dimensional. The real data set is a bit old and not very high-dimensional. What is the complexity of the algorithm for high-dimensional data?
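The non-cyclical updating with step-length adaptation mentioned in the report above can be sketched roughly as follows. This is a minimal illustrative example, not the authors' implementation: the Gaussian location-scale model, simple linear base-learners, the grid-based step-length search, and all variable names are assumptions made for the sketch. In each iteration, the best base-learner is sought across both distribution parameters, and only the single update that most reduces the loss is applied.

```python
# Minimal sketch of non-cyclical componentwise gradient boosting for a
# Gaussian location-scale model with a crude step-length search.
# Illustrative only; names and setup are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
# Location depends on X0, log-scale on X1
y = 1.0 + 2.0 * X[:, 0] + rng.normal(scale=np.exp(0.5 * X[:, 1]), size=n)

def nll(y, mu, log_sigma):
    # Gaussian negative log-likelihood (up to an additive constant)
    return np.sum(log_sigma + 0.5 * (y - mu) ** 2 / np.exp(2 * log_sigma))

Xd = np.column_stack([np.ones(n), X])  # intercept + covariates
beta_mu = np.zeros(p + 1)              # coefficients of the location predictor
beta_ls = np.zeros(p + 1)              # coefficients of the log-scale predictor

for m in range(300):
    mu, ls = Xd @ beta_mu, Xd @ beta_ls
    # Negative gradients of the loss w.r.t. each additive predictor
    g_mu = (y - mu) / np.exp(2 * ls)
    g_ls = (y - mu) ** 2 / np.exp(2 * ls) - 1.0
    best = None
    for g, which in ((g_mu, "mu"), (g_ls, "ls")):
        for j in range(p + 1):
            xj = Xd[:, j]
            b = xj @ g / (xj @ xj)      # least-squares fit to the gradient
            # Step-length adaptation, here a simple grid search
            for nu in (0.01, 0.1, 0.3):
                cand_mu = mu + nu * b * xj if which == "mu" else mu
                cand_ls = ls + nu * b * xj if which == "ls" else ls
                loss = nll(y, cand_mu, cand_ls)
                if best is None or loss < best[0]:
                    best = (loss, which, j, nu * b)
    # Non-cyclical update: only the overall best base-learner is updated
    _, which, j, step = best
    (beta_mu if which == "mu" else beta_ls)[j] += step

print(beta_mu[1])  # fitted location slope for X[:, 0] (true value 2.0)
```

Note that in this sketch the non-cyclical selection doubles as the allocation mechanism: a covariate informative only for the scale accumulates updates in `beta_ls` and stays near zero in `beta_mu`, which is the kind of joint assignment of covariates to sub-predictors the report refers to.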

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I would like to thank the authors for their response and the revisions they made. Most of my concerns were addressed.
