Statistical Learning and Its Applications

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: closed (15 August 2023) | Viewed by 1861

Special Issue Editors


Dr. Dimitrios Christopoulos
Guest Editor
School of Social Sciences, Edinburgh Business School, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, UK
Interests: mixture models; distribution theory; EM algorithm; univariate, multivariate and copula-based claim frequency models

Dr. George Tzougas
Guest Editor
School of Social Sciences, Edinburgh Business School, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, UK
Interests: mixture models; distribution theory; EM algorithm; univariate, multivariate and copula-based claim frequency models

Special Issue Information

Dear Colleagues,

The past decade has seen the emergence of a range of new technologies and big data analytics that have begun to reshape the landscape of predictive modeling, in both research and practice, across many areas.

Statistical learning combines computational statistics with machine learning techniques. It can provide excellent predictive performance without tedious manual feature engineering, learning non-linearities in the input data and the interactions between them. It thus enables researchers to develop frameworks for analyzing the stylized characteristics of data sets across a wide range of areas, such as biology, epidemiology, criminology, meteorology, seismology, sports science, decarbonization, forecasting of weather-related hazards due to climate change, finance, and insurance.

This Special Issue aims to showcase novel applications of recent statistical learning techniques for developing better data-driven methods for studying practical problems in the aforementioned areas. Special emphasis is given to hybrid models, which combine neural networks (NNs) with statistical models. Hybrid models are attractive in applications because they can handle large data sets with many input features, and because they can work directly with unstructured data rather than structuring or aggregating it, which loses individual-level information; a minimal sketch of one common hybrid design appears below. An outstanding issue is understanding how such models overcome the curse of dimensionality when generating or classifying data.
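As an illustration, the following is a minimal sketch, not a prescribed method, of a hybrid design in the spirit of combined actuarial neural networks (CANN): a Poisson GLM supplies an interpretable baseline rate, and a small neural network learns a multiplicative correction that captures a non-linear interaction the GLM misses. The simulated data, network size, and the residual-on-the-log-scale target are all illustrative assumptions; scikit-learn is assumed available.

```python
# Minimal sketch of a GLM + neural-network hybrid on simulated data.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))
# The true rate contains a non-linear interaction (X3 * X4) that a
# main-effects GLM cannot capture.
rate = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3])
y = rng.poisson(rate)

# Statistical component: a plain Poisson GLM (main effects only).
glm = PoissonRegressor(alpha=1e-4).fit(X, y)
glm_log_rate = np.log(np.clip(glm.predict(X), 1e-8, None))

# Machine-learning component: an NN fitted to the leftover structure
# on the log scale (a crude working-response target, used here only
# to keep the sketch short).
residual_target = np.log(np.clip(y, 0.5, None)) - glm_log_rate
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                  random_state=0).fit(X, residual_target)

# Hybrid prediction: GLM baseline times the NN correction factor.
hybrid_rate = np.exp(glm_log_rate + nn.predict(X))
print("GLM-only error vs. true rate:", np.mean((glm.predict(X) - rate) ** 2))
print("Hybrid error vs. true rate:  ", np.mean((hybrid_rate - rate) ** 2))
```

Keeping the GLM as an explicit baseline preserves the interpretability of the statistical component, while the NN only absorbs the structure the GLM cannot explain.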

Dr. Dimitrios Christopoulos
Dr. George Tzougas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • statistical learning
  • computational statistics
  • machine learning
  • data-driven methods

Published Papers (1 paper)

Research

21 pages, 558 KiB  
Article
Model-Robust Estimation of Multiple-Group Structural Equation Models
by Alexander Robitzsch
Algorithms 2023, 16(4), 210; https://doi.org/10.3390/a16040210 - 17 Apr 2023
Cited by 6 | Viewed by 1403
Abstract
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables by observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an SEM as a function of a discrete grouping variable. Multiple-group SEM is employed to compare structural relationships between groups. In this article, estimation approaches for multiple-group SEM are reviewed. We focus on comparing different estimation strategies in the presence of local model misspecifications (i.e., model errors). In detail, maximum likelihood and weighted least-squares estimation approaches are compared with a newly proposed robust Lp loss function and regularized maximum likelihood estimation. The latter methods are referred to as model-robust estimators because they show some resistance to model errors. In particular, we focus on the performance of the different estimators in the presence of unmodelled residual error correlations and measurement noninvariance (i.e., group-specific item intercepts). The performance of the different estimators is compared in two simulation studies and an empirical example. It turned out that the robust loss function approach is computationally much less demanding than regularized maximum likelihood estimation but resulted in similar statistical performance.
(This article belongs to the Special Issue Statistical Learning and Its Applications)
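To make the model-robust idea concrete, here is a minimal sketch, not the paper's implementation, of how an Lp loss with p < 2 resists local misspecification: on a simple location problem where 10% of observations come from a contaminated subgroup, least squares (p = 2) is pulled toward the contamination, while a robust Lp estimate stays near the majority. The data, the value p = 0.5, and the use of SciPy's Nelder-Mead optimizer are illustrative assumptions.

```python
# Minimal sketch of model-robust Lp estimation for a location parameter.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 180),   # well-specified part
                    rng.normal(4.0, 1.0, 20)])   # local misspecification

def lp_estimate(y, p):
    """Estimate a location parameter mu by minimizing sum |y - mu|^p."""
    # Nelder-Mead avoids derivatives, since the objective is non-smooth
    # for p < 1.
    res = minimize(lambda mu: np.sum(np.abs(y - mu[0]) ** p),
                   x0=[np.median(y)], method="Nelder-Mead")
    return res.x[0]

print("p = 2.0 (least squares):", lp_estimate(y, 2.0))  # pulled upward
print("p = 0.5 (robust Lp):    ", lp_estimate(y, 0.5))  # stays near 0
```

The same down-weighting of large residuals is what lets the Lp approach tolerate unmodelled error correlations or group-specific intercepts without the heavier machinery of regularized maximum likelihood.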