Bayesian and Frequentist Model Averaging

A special issue of Econometrics (ISSN 2225-1146).

Deadline for manuscript submissions: closed (30 September 2019)

Special Issue Editors


Prof. Dr. Mark F.J. Steel
Guest Editor
Department of Statistics, University of Warwick, United Kingdom
Interests: theoretical and applied Bayesian statistics, particularly distribution theory; Bayesian model averaging; spatial statistics; non- and semiparametric inference; survival models; stochastic frontier models

Prof. Dr. Jan R. Magnus
Guest Editor
Department of Econometrics and Operations Research, Vrije Universiteit Amsterdam, The Netherlands
Interests: econometric theory; model averaging; risk; environmental economics; matrix calculus

Prof. Dr. Gonzalo García-Donato
Guest Editor
Department of Economics and Finance, Universidad de Castilla-La Mancha, Spain
Interests: statistical analysis of computer models; model uncertainty; model and variable selection; objective Bayesian methods

Prof. Dr. Xinyu Zhang
Guest Editor
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
Interests: model averaging; model selection; forecast combination; mixed-effects models; dimension reduction; differential equation models

Special Issue Information

Dear Colleagues,

This Special Issue aims to gather original contributions on model averaging methods and their applications in econometrics. We focus on model averaging as a response to model uncertainty, which is an inherent aspect of modelling. The weights used for averaging are often derived from Bayes' theorem (Bayesian model averaging) or from sampling-theoretic optimality considerations (frequentist model averaging). We will also consider methods that combine aspects of both frequentist and Bayesian reasoning, such as weighted average least squares. We invite you to submit papers in this general area that make contributions of a methodological or applied nature, or both, especially (but not necessarily) in settings that extend the more or less well-understood normal linear regression model. Papers that deal with the computational issues induced by very large model spaces and/or complex model structures are also welcome, as are papers that create additional insight by comparing various methodologies in challenging and relevant settings.
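
In the Bayesian variant, for example, the weight attached to each candidate model is its posterior model probability, and inference on a quantity of interest Δ averages over the model space; the display below is the standard textbook formulation, not specific to any contribution in this issue:

$$
p(\Delta \mid y) \;=\; \sum_{k=1}^{K} p(\Delta \mid M_k, y)\, p(M_k \mid y),
\qquad
p(M_k \mid y) \;=\; \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)},
$$

where $p(y \mid M_k)$ is the marginal likelihood of model $M_k$. Frequentist model averaging instead chooses the weights by sampling-theoretic criteria, for instance by minimizing an estimate of mean squared error or a cross-validation risk.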

Over the last two decades, model averaging has become an increasingly popular approach to dealing with model uncertainty in economics, and we believe this Special Issue can make an important contribution to firmly establishing the methodologies based on model averaging as an important and well-understood part of the standard econometrics toolbox.

Prof. Dr. Mark F.J. Steel
Prof. Dr. Jan R. Magnus
Prof. Dr. Gonzalo García-Donato
Prof. Dr. Xinyu Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website; once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. For this Special Issue, we particularly invite research articles, but do not entirely exclude review articles and short communications.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Econometrics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. Article Processing Charges are waived for this Special Issue, so there is no charge to authors. Submitted papers should preferably be prepared in LaTeX and written in standard academic English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Model averaging
  • Model uncertainty
  • Bayesian model averaging
  • Frequentist model averaging
  • Weighted average least squares
  • Markov chain Monte Carlo methods

Published Papers (13 papers)


Research


28 pages, 3560 KiB  
Article
Confidence Distributions for FIC Scores
by Céline Cunen and Nils Lid Hjort
Econometrics 2020, 8(3), 27; https://doi.org/10.3390/econometrics8030027 - 01 Jul 2020
Abstract
When using the Focused Information Criterion (FIC) for assessing and ranking candidate models with respect to how well they do for a given estimation task, it is customary to produce a so-called FIC plot. This plot has the different point estimates along the y-axis and the root-FIC scores on the x-axis, these being the estimated root-mean-square scores. In this paper we address the estimation uncertainty involved in each of the points of such a FIC plot. This needs careful assessment of each of the estimators from the candidate models, taking also modelling bias into account, along with the relative precision of the associated estimated mean squared error quantities. We use confidence distributions for these tasks. This leads to fruitful CD–FIC plots, helping the statistician to judge to what extent the seemingly best models really are better than other models, etc. These efforts also lead to two further developments. The first is a new tool for model selection, which we call the quantile-FIC, which helps overcome certain difficulties associated with the usual FIC procedures, related to somewhat arbitrary schemes for handling estimated squared biases. A particular case is the median-FIC. The second development is to form model averaged estimators with weights determined by the relative sizes of the median- and quantile-FIC scores.

15 pages, 1659 KiB  
Article
Bayesian Model Averaging with the Integrated Nested Laplace Approximation
by Virgilio Gómez-Rubio, Roger S. Bivand and Håvard Rue
Econometrics 2020, 8(2), 23; https://doi.org/10.3390/econometrics8020023 - 01 Jun 2020
Abstract
The integrated nested Laplace approximation (INLA) for Bayesian inference is an efficient approach to estimate the posterior marginal distributions of the parameters and latent effects of Bayesian hierarchical models that can be expressed as latent Gaussian Markov random fields (GMRF). The representation as a GMRF allows the associated software R-INLA to estimate the posterior marginals in a fraction of the time required by typical Markov chain Monte Carlo algorithms. INLA can be extended by means of Bayesian model averaging (BMA) to increase the number of models that it can fit to conditional latent GMRFs. In this paper, we review the use of BMA with INLA and present a new application to spatial econometric models.

24 pages, 333 KiB  
Article
Sovereign Risk Indices and Bayesian Theory Averaging
by Alex Lenkoski and Fredrik L. Aanes
Econometrics 2020, 8(2), 22; https://doi.org/10.3390/econometrics8020022 - 29 May 2020
Abstract
In economic applications, model averaging has found principal use in examining the validity of various theories related to observed heterogeneity in outcomes such as growth, development, and trade. Though often easy to articulate, these theories are imperfectly captured quantitatively. A number of different proxies are often collected for a given theory and the uneven nature of this collection requires care when employing model averaging. Furthermore, if valid, these theories ought to be relevant outside of any single narrowly focused outcome equation. We propose a methodology which treats theories as represented by latent indices, these latent processes controlled by model averaging on the proxy level. To achieve generalizability of the theory index our framework assumes a collection of outcome equations. We accommodate a flexible set of generalized additive models, enabling non-Gaussian outcomes to be included. Furthermore, selection of relevant theories also occurs on the outcome level, allowing for theories to be differentially valid. Our focus is on creating a set of theory-based indices directed at understanding a country’s potential risk of macroeconomic collapse. These Sovereign Risk Indices are calibrated across a set of different “collapse” criteria, including default on sovereign debt, heightened potential for high unemployment or inflation and dramatic swings in foreign exchange values. The goal of this exercise is to render a portable set of country/year theory indices which can find more general use in the research community.

29 pages, 540 KiB  
Article
BACE and BMA Variable Selection and Forecasting for UK Money Demand and Inflation with Gretl
by Marcin Błażejowski, Jacek Kwiatkowski and Paweł Kufel
Econometrics 2020, 8(2), 21; https://doi.org/10.3390/econometrics8020021 - 22 May 2020
Abstract
In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of BACE and BMA selection and show similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics—an algorithm for automatic model selection. Our study is implemented in easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computation.

36 pages, 1528 KiB  
Article
Triple the Gamma—A Unifying Shrinkage Prior for Variance and Variable Selection in Sparse State Space and TVP Models
by Annalisa Cadonna, Sylvia Frühwirth-Schnatter and Peter Knaus
Econometrics 2020, 8(2), 20; https://doi.org/10.3390/econometrics8020020 - 20 May 2020
Abstract
Time-varying parameter (TVP) models are very flexible in capturing gradual changes in the effect of explanatory variables on the outcome variable. However, in particular when the number of explanatory variables is large, there is a known risk of overfitting and poor predictive performance, since the effect of some explanatory variables is constant over time. We propose a new prior for variance shrinkage in TVP models, called the triple gamma. The triple gamma prior encompasses a number of priors that have been suggested previously, such as the Bayesian Lasso, the double gamma prior and the Horseshoe prior. We present the desirable properties of such a prior and its relationship to Bayesian model averaging for variance selection. The features of the triple gamma prior are then illustrated in the context of time-varying parameter vector autoregressive models, both for a simulated dataset and for a series of macroeconomic variables in the euro area.

15 pages, 418 KiB  
Article
Bayesian Model Averaging Using Power-Expected-Posterior Priors
by Dimitris Fouskakis and Ioannis Ntzoufras
Econometrics 2020, 8(2), 17; https://doi.org/10.3390/econometrics8020017 - 11 May 2020
Abstract
This paper focuses on Bayesian model averaging (BMA) using the power-expected-posterior prior in objective Bayesian variable selection under normal linear models. We derive a BMA point estimate of a predicted value, and present computation and evaluation strategies for the prediction accuracy. We compare the performance of our method with that of similar approaches in a simulated and a real data example from economics.

22 pages, 496 KiB  
Article
Improved Average Estimation in Seemingly Unrelated Regressions
by Ali Mehrabani and Aman Ullah
Econometrics 2020, 8(2), 15; https://doi.org/10.3390/econometrics8020015 - 27 Apr 2020
Abstract
In this paper, we propose an efficient weighted average estimator in seemingly unrelated regressions. This average estimator shrinks a generalized least squares (GLS) estimator towards a restricted GLS estimator, where the restrictions represent possible parameter homogeneity specifications. The shrinkage weight is inversely proportional to a weighted quadratic loss function. We provide large-sample approximations to the bias and second moment matrix of the average estimator, and give the conditions under which the average estimator dominates the GLS estimator on the basis of mean squared error. We illustrate our estimator by applying it to a cost system for United States (U.S.) commercial banks over the period from 2000 to 2018. Our results indicate that, on average, most of the banks have been operating under increasing returns to scale. We find that over recent years scale economies are a plausible reason for the growth in average size of banks, and the tendency toward increasing scale is likely to continue.
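
Schematically, and in our own notation rather than the authors' (the exact weighting in the paper may differ), such a shrinkage average estimator has the form

$$
\hat{\beta}_{\mathrm{avg}} \;=\; w\,\hat{\beta}_{\mathrm{RGLS}} + (1 - w)\,\hat{\beta}_{\mathrm{GLS}},
\qquad w \;\propto\; \frac{1}{D_n},
$$

where $\hat{\beta}_{\mathrm{RGLS}}$ is the restricted GLS estimator imposing the homogeneity restrictions and $D_n$ is a weighted quadratic loss (distance) statistic, so that shrinkage towards the restrictions is stronger when they appear more compatible with the data.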

22 pages, 2908 KiB  
Article
Bayesian Model Averaging and Prior Sensitivity in Stochastic Frontier Analysis
by Kamil Makieła and Błażej Mazur
Econometrics 2020, 8(2), 13; https://doi.org/10.3390/econometrics8020013 - 20 Apr 2020
Abstract
This paper discusses Bayesian model averaging (BMA) in stochastic frontier analysis and investigates the sensitivity of inference to prior assumptions made about the scale parameter of (in)efficiency. We turn our attention to the “standard” prior specifications for the popular normal-half-normal and normal-exponential models. To facilitate formal model comparison, we propose a model that nests both sampling models and generalizes the symmetric term of the compound error. Within this setup it is possible to develop coherent priors for model parameters in an explicit way. We analyze how different prior specifications on the aforementioned scale parameter affect posterior characteristics of technology, stochastic parameters, latent variables and—especially—the models’ posterior probabilities, which are crucial for adequate inference pooling. We find that using incoherent priors on the scale parameter of inefficiency has (i) virtually no impact on the technology parameters; (ii) some impact on inference about the stochastic parameters and latent variables; and (iii) substantial impact on marginal data densities, which are crucial in BMA.

35 pages, 996 KiB  
Article
Cross-Validation Model Averaging for Generalized Functional Linear Model
by Haili Zhang and Guohua Zou
Econometrics 2020, 8(1), 7; https://doi.org/10.3390/econometrics8010007 - 24 Feb 2020
Abstract
Functional data are a common and important data type in econometrics and have become increasingly easy to collect in the big data era. To improve estimation accuracy and reduce forecast risk with functional data, in this paper we propose a novel cross-validation model averaging method for the generalized functional linear model, in which the scalar response variable is related to a random functional predictor through a link function. We establish an asymptotic optimality result for the weights selected by our method when the true model is not in the candidate model set. Our simulations show that the proposed method often performs better than commonly used model selection and averaging methods. We also apply the proposed method to Beijing second-hand house price data.
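
For orientation, a generalized functional linear model of this kind relates a scalar response Y to a functional predictor X(t) through a known link function g and an unknown coefficient function β(t); the notation below is ours, not the paper's:

$$
g\big(\mathbb{E}[Y \mid X]\big) \;=\; \alpha + \int_{\mathcal{T}} X(t)\,\beta(t)\,dt .
$$

Candidate models typically differ in how β(t) is approximated (for instance, in the number of basis functions or functional principal components retained), and the cross-validation weights average across these candidates.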

20 pages, 352 KiB  
Article
Forecast Bitcoin Volatility with Least Squares Model Averaging
by Tian Xie
Econometrics 2019, 7(3), 40; https://doi.org/10.3390/econometrics7030040 - 14 Sep 2019
Abstract
In this paper, we study the problem of forecasting Bitcoin realized volatility computed on data from the largest crypto exchange, Binance. Given the unique features of the crypto asset market, we find that conventional regression models exhibit strong model specification uncertainty. To circumvent this issue, we suggest using least squares model-averaging methods to model and forecast Bitcoin volatility. The empirical results demonstrate that least squares model-averaging methods in general outperform many other conventional regression models that ignore specification uncertainty.

26 pages, 1804 KiB  
Article
On the Forecast Combination Puzzle
by Wei Qian, Craig A. Rolling, Gang Cheng and Yuhong Yang
Econometrics 2019, 7(3), 39; https://doi.org/10.3390/econometrics7030039 - 10 Sep 2019
Abstract
It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the “forecast combination puzzle”. Motivated by this puzzle, we explore its possible explanations, including high variance in estimating the target optimal weights (estimation error), invalid weighting formulas, and model/candidate screening before combination. We show that the existing understanding of the puzzle should be complemented by the distinction between different forecast combination scenarios, known as combining for adaptation and combining for improvement. Applying combining methods without considering the underlying scenario can itself cause the puzzle. Based on this new understanding, both simulations and real-data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to reduce the heavy cost of estimation error and, to a large extent, mitigate the puzzle.
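
For context, with candidate forecasts $\hat{y}_{1,t}, \ldots, \hat{y}_{K,t}$ of a target $y_t$, a combined forecast takes the standard form (notation ours):

$$
\hat{y}^{\,c}_{t} \;=\; \sum_{k=1}^{K} w_k\, \hat{y}_{k,t},
\qquad \sum_{k=1}^{K} w_k = 1,\; w_k \ge 0 ,
$$

and the simple average sets $w_k = 1/K$. The puzzle is that this equal-weight rule often beats combinations with estimated "optimal" weights, because the error in estimating the weights can outweigh the gain from departing from equal weighting.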

12 pages, 320 KiB  
Article
A Combination Method for Averaging OLS and GLS Estimators
by Qingfeng Liu and Andrey L. Vasnev
Econometrics 2019, 7(3), 38; https://doi.org/10.3390/econometrics7030038 - 09 Sep 2019
Abstract
To avoid the risk of misspecification between homoscedastic and heteroscedastic models, we propose a combination method based on ordinary least-squares (OLS) and generalized least-squares (GLS) model-averaging estimators. To select optimal weights for the combination, we suggest two information criteria and propose feasible versions that work even when the variance-covariance matrix is unknown. The optimality of the method is proven under some regularity conditions. The results of a Monte Carlo simulation demonstrate that the method is adaptive in the sense that it achieves almost the same estimation accuracy as if the homoscedasticity or heteroscedasticity of the error term were known.
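
A minimal sketch of such a combination, in our notation (the paper's feasible weights and information criteria may differ):

$$
\hat{\beta}(w) \;=\; w\,\hat{\beta}_{\mathrm{OLS}} + (1 - w)\,\hat{\beta}_{\mathrm{GLS}},
\qquad w \in [0, 1],
$$

with the weight chosen to minimize an estimated risk of the combined estimator, so that the procedure adapts automatically to whether the errors are homoscedastic or heteroscedastic.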

Other


21 pages, 437 KiB  
Tutorial
A Review of the ‘BMS’ Package for R with Focus on Jointness
by Shahram Amini and Christopher F. Parmeter
Econometrics 2020, 8(1), 6; https://doi.org/10.3390/econometrics8010006 - 24 Feb 2020
Abstract
We provide a general overview of Bayesian model averaging (BMA) along with the concept of jointness. We then describe the relative merits and attractiveness of BMS, the newest BMA software package available in the statistical language R for implementing a BMA exercise. BMS provides the user with a wide range of customizable priors for conducting a BMA exercise, provides ample graphs to visualize results, and offers several alternative model search mechanisms. We also provide an application of the BMS package to equity premia and describe a simple function that computes jointness measures of covariates and integrates with the BMS package.
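
As a minimal, illustrative sketch (not the paper's equity-premium application, and with argument values that are our assumptions rather than the authors' settings), a BMA run with the BMS package in R looks like this:

    library(BMS)
    data(datafls)                            # growth dataset shipped with the BMS package
    fit <- bms(datafls, burn = 20000, iter = 50000,
               g = "UIP", mprior = "uniform", user.int = FALSE)
    coef(fit)[1:10, ]                        # posterior inclusion probabilities and posterior means
    summary(fit)                             # posterior model size and sampler diagnostics
    image(fit)                               # inclusion/sign image across the best models

The jointness measures discussed in the paper are computed by the authors' companion function rather than by BMS itself.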
