

Bayesian Inference and Computation

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 38592

Special Issue Editors


Guest Editor
Departamento de Matemáticas, Facultad de Ciencias, Universidad de Extremadura, Avda. de Elvas, 06071 Badajoz, Spain
Interests: Bayesian inference; risk and decision analysis; sensitivity analysis; categorical data; simulation

Guest Editor
Dpto. de Matemáticas, Facultad de Ciencias, Universidad de Extremadura, Badajoz, Spain
Interests: Bayesian statistics; extreme value theory; applied statistics; ICT

Special Issue Information

Dear Colleagues,

Bayesian models have seen substantial growth in recent years, especially following the expanding body of work on MCMC methods. In this Special Issue, we aim to cover all areas of Bayesian analysis, from prior distributions to computational methods.

In particular, the topics of interest are:

  • Bayesian analysis of complex models;
  • Bayesian analysis of extreme data;
  • Bayesian analysis of categorical data;
  • Elicitation of prior distributions;
  • Sensitivity analysis for Bayesian models;
  • Computational methods in Bayesian analysis;
  • Bayesian decision making.

Prof. Dr. Jacinto Martín
Prof. Dr. María Isabel Parra Arévalo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • prior distributions
  • Bayesian modelling and inference
  • Bayesian decision making
  • extreme data
  • categorical data
  • sensitivity analysis

Published Papers (15 papers)


Research

44 pages, 16699 KiB  
Article
A Locally Both Leptokurtic and Fat-Tailed Distribution with Application in a Bayesian Stochastic Volatility Model
by Łukasz Lenart, Anna Pajor and Łukasz Kwiatkowski
Entropy 2021, 23(6), 689; https://doi.org/10.3390/e23060689 - 30 May 2021
Cited by 1 | Viewed by 2383
Abstract
In the paper, we begin with introducing a novel scale mixture of normal distribution such that its leptokurticity and fat-tailedness are only local, with this “locality” being separately controlled by two censoring parameters. This new, locally leptokurtic and fat-tailed (LLFT) distribution makes a viable alternative for other, globally leptokurtic, fat-tailed and symmetric distributions, typically entertained in financial volatility modelling. Then, we incorporate the LLFT distribution into a basic stochastic volatility (SV) model to yield a flexible alternative for common heavy-tailed SV models. For the resulting LLFT-SV model, we develop a Bayesian statistical framework and effective MCMC methods to enable posterior sampling of the parameters and latent variables. Empirical results indicate the validity of the LLFT-SV specification for modelling both “non-standard” financial time series with repeating zero returns, as well as more “typical” data on the S&P 500 and DAX indices. For the former, the LLFT-SV model is also shown to markedly outperform a common, globally heavy-tailed, t-SV alternative in terms of density forecasting. Applications of the proposed distribution in more advanced SV models seem to be easily attainable. Full article
(This article belongs to the Special Issue Bayesian Inference and Computation)
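The LLFT construction itself is specific to the paper, but the underlying idea of a scale mixture of normals can be illustrated generically. A minimal sketch (not the authors' LLFT distribution): a Student-t variate arises by mixing a zero-mean normal over a gamma-distributed precision.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_via_scale_mixture(nu, size, rng):
    """Student-t(nu) draws as a scale mixture of normals:
    lam ~ Gamma(nu/2, rate=nu/2), x | lam ~ N(0, 1/lam)."""
    lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)
    return rng.normal(0.0, 1.0 / np.sqrt(lam))

samples = t_via_scale_mixture(nu=5, size=100_000, rng=rng)
print(samples.var())  # variance of t(nu) is nu/(nu - 2) = 5/3 here
```

Heavier tails follow from mixing the scale; the paper's contribution is making that tail behavior local via censoring parameters, which this sketch does not attempt.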

25 pages, 1193 KiB  
Article
A New Regression Model for the Analysis of Overdispersed and Zero-Modified Count Data
by Wesley Bertoli, Katiane S. Conceição, Marinho G. Andrade and Francisco Louzada
Entropy 2021, 23(6), 646; https://doi.org/10.3390/e23060646 - 21 May 2021
Cited by 1 | Viewed by 2940
Abstract
Count datasets are traditionally analyzed using the ordinary Poisson distribution. However, said model has its applicability limited, as it can be somewhat restrictive to handling specific data structures. In this case, the need arises for obtaining alternative models that accommodate, for example, overdispersion and zero modification (inflation/deflation at the frequency of zeros). In practical terms, these are the most prevalent structures ruling the nature of discrete phenomena nowadays. Hence, this paper’s primary goal was to jointly address these issues by deriving a fixed-effects regression model based on the hurdle version of the Poisson–Sujatha distribution. In this framework, the zero modification is incorporated by considering that a binary probability model determines which outcomes are zero-valued, and a zero-truncated process is responsible for generating positive observations. Posterior inferences for the model parameters were obtained from a fully Bayesian approach based on the g-prior method. Intensive Monte Carlo simulation studies were performed to assess the Bayesian estimators’ empirical properties, and the obtained results have been discussed. The proposed model was considered for analyzing a real dataset, and its competitiveness regarding some well-established fixed-effects models for count data was evaluated. A sensitivity analysis to detect observations that may impact parameter estimates was performed based on standard divergence measures. The Bayesian p-value and the randomized quantile residuals were considered for the task of model validation. Full article

20 pages, 1182 KiB  
Article
BFF: Bayesian, Fiducial, and Frequentist Analysis of Cognitive Engagement among Cognitively Impaired Older Adults
by Shevaun D. Neupert, Claire M. Growney, Xianghe Zhu, Julia K. Sorensen, Emily L. Smith and Jan Hannig
Entropy 2021, 23(4), 428; https://doi.org/10.3390/e23040428 - 06 Apr 2021
Cited by 4 | Viewed by 7704
Abstract
Engagement in cognitively demanding activities is beneficial to preserving cognitive health. Our goal was to demonstrate the utility of frequentist, Bayesian, and fiducial statistical methods for evaluating the robustness of effects in identifying factors that contribute to cognitive engagement for older adults experiencing cognitive decline. We collected a total of 504 observations across two longitudinal waves of data from 28 cognitively impaired older adults. Participants’ systolic blood pressure responsivity, an index of cognitive engagement, was continuously sampled during cognitive testing. Participants reported on physical and mental health challenges and provided hair samples to assess chronic stress at each wave. Using the three statistical paradigms, we compared results from six model testing levels and longitudinal changes in health and stress predicting changes in cognitive engagement. Findings were mostly consistent across the three paradigms, providing additional confidence in determining effects. We extend selective engagement theory to cognitive impairment, noting that health challenges and stress appear to be important moderators. Further, we emphasize the utility of the Bayesian and fiducial paradigms for use with relatively small sample sizes because they are not based on asymptotic distributions. In particular, the fiducial paradigm is a useful tool because it provides more information than p values without the need to specify prior distributions, which may unduly influence the results based on a small sample. We provide the R code used to develop and implement all models. Full article

20 pages, 599 KiB  
Article
New Estimators of the Bayes Factor for Models with High-Dimensional Parameter and/or Latent Variable Spaces
by Anna Pajor
Entropy 2021, 23(4), 399; https://doi.org/10.3390/e23040399 - 27 Mar 2021
Cited by 1 | Viewed by 1792
Abstract
Formal Bayesian comparison of two competing models, based on the posterior odds ratio, amounts to estimation of the Bayes factor, which is equal to the ratio of respective two marginal data density values. In models with a large number of parameters and/or latent variables, they are expressed by high-dimensional integrals, which are often computationally infeasible. Therefore, other methods of evaluation of the Bayes factor are needed. In this paper, a new method of estimation of the Bayes factor is proposed. Simulation examples confirm good performance of the proposed estimators. Finally, these new estimators are used to formally compare different hybrid Multivariate Stochastic Volatility–Multivariate Generalized Autoregressive Conditional Heteroskedasticity (MSV-MGARCH) models which have a large number of latent variables. The empirical results show, among other things, that the validity of reduction of the hybrid MSV-MGARCH model to the MGARCH specification depends on the analyzed data set as well as on prior assumptions about model parameters. Full article

20 pages, 585 KiB  
Article
Bayesian Analysis of Finite Populations under Simple Random Sampling
by Manuel Mendoza, Alberto Contreras-Cristán and Eduardo Gutiérrez-Peña
Entropy 2021, 23(3), 318; https://doi.org/10.3390/e23030318 - 08 Mar 2021
Cited by 3 | Viewed by 2133
Abstract
Statistical methods to produce inferences based on samples from finite populations have been available for at least 70 years. Topics such as Survey Sampling and Sampling Theory have become part of the mainstream of the statistical methodology. A wide variety of sampling schemes as well as estimators are now part of the statistical folklore. On the other hand, while the Bayesian approach is now a well-established paradigm with implications in almost every field of the statistical arena, there does not seem to exist a conventional procedure—able to deal with both continuous and discrete variables—that can be used as a kind of default for Bayesian survey sampling, even in the simple random sampling case. In this paper, the Bayesian analysis of samples from finite populations is discussed, its relationship with the notion of superpopulation is reviewed, and a nonparametric approach is proposed. Our proposal can produce inferences for population quantiles and similar quantities of interest in the same way as for population means and totals. Moreover, it can provide results relatively quickly, which may prove crucial in certain contexts such as the analysis of quick counts in electoral settings. Full article

18 pages, 965 KiB  
Article
Phylogenetic Curved Optimal Regression for Adaptive Trait Evolution
by Dwueng-Chwuan Jhwueng and Chih-Ping Wang
Entropy 2021, 23(2), 218; https://doi.org/10.3390/e23020218 - 10 Feb 2021
Cited by 1 | Viewed by 1942
Abstract
Regression analysis using line equations has been broadly applied in studying the evolutionary relationship between the response trait and its covariates. However, the characteristics among closely related species in nature present abundant diversities where the nonlinear relationship between traits have been frequently observed. By treating the evolution of quantitative traits along a phylogenetic tree as a set of continuous stochastic variables, statistical models for describing the dynamics of the optimum of the response trait and its covariates are built herein. Analytical representations for the response trait variables, as well as their optima among a group of related species, are derived. Due to the models’ lack of tractable likelihood, a procedure that implements the Approximate Bayesian Computation (ABC) technique is applied for statistical inference. Simulation results show that the new models perform well where the posterior means of the parameters are close to the true parameters. Empirical analysis supports the new models when analyzing the trait relationship among kangaroo species. Full article

28 pages, 461 KiB  
Article
Measuring and Controlling Bias for Some Bayesian Inferences and the Relation to Frequentist Criteria
by Michael Evans and Yang Guo
Entropy 2021, 23(2), 190; https://doi.org/10.3390/e23020190 - 04 Feb 2021
Cited by 3 | Viewed by 1763
Abstract
A common concern with Bayesian methodology in scientific contexts is that inferences can be heavily influenced by subjective biases. As presented here, there are two types of bias for some quantity of interest: bias against and bias in favor. Based upon the principle of evidence, it is shown how to measure and control these biases for both hypothesis assessment and estimation problems. Optimality results are established for the principle of evidence as the basis of the approach to these problems. A close relationship is established between measuring bias in Bayesian inferences and frequentist properties that hold for any proper prior. This leads to a possible resolution to an apparent conflict between these approaches to statistical reasoning. Frequentism is seen as establishing figures of merit for a statistical study, while Bayes determines the inferences based upon statistical evidence. Full article

13 pages, 1097 KiB  
Article
Improvement of Bobrovsky–Mayor–Wolf–Zakai Bound
by Ken-ichi Koike and Shintaro Hashimoto
Entropy 2021, 23(2), 161; https://doi.org/10.3390/e23020161 - 28 Jan 2021
Viewed by 1471
Abstract
This paper presents a difference-type lower bound for the Bayes risk as a difference-type extension of the Borovkov–Sakhanenko bound. The resulting bound asymptotically improves the Bobrovsky–Mayor–Wolf–Zakai bound which is difference-type extension of the Van Trees bound. Some examples are also given. Full article

14 pages, 14020 KiB  
Article
Bayesian Estimation of Geometric Morphometric Landmarks for Simultaneous Localization of Multiple Anatomies in Cardiac CT Images
by Byunghwan Jeon, Sunghee Jung, Hackjoon Shim and Hyuk-Jae Chang
Entropy 2021, 23(1), 64; https://doi.org/10.3390/e23010064 - 02 Jan 2021
Cited by 1 | Viewed by 2189
Abstract
We propose a robust method to simultaneously localize multiple objects in cardiac computed tomography angiography (CTA) images. The relative prior distributions of the multiple objects in the three-dimensional (3D) space can be obtained through integrating the geometric morphological relationship of each target object to some reference objects. In cardiac CTA images, the cross-sections of ascending and descending aorta can play the role of the reference objects. We employed the maximum a posteriori (MAP) estimator that utilizes anatomic prior knowledge to address this problem of localizing multiple objects. We propose a new feature for each pixel using the relative distances, which can define any objects that have unclear boundaries. Our experimental results targeting four pulmonary veins (PVs) and the left atrial appendage (LAA) in cardiac CTA images demonstrate the robustness of the proposed method. The method could also be extended to localize other multiple objects in different applications. Full article

10 pages, 334 KiB  
Article
A Two-Stage Approach for Bayesian Joint Models of Longitudinal and Survival Data: Correcting Bias with Informative Prior
by Valeria Leiva-Yamaguchi and Danilo Alvares
Entropy 2021, 23(1), 50; https://doi.org/10.3390/e23010050 - 31 Dec 2020
Cited by 2 | Viewed by 2127
Abstract
Joint models of longitudinal and survival outcomes have gained much popularity in recent years, both in applications and in methodological development. This type of modelling is usually characterised by two submodels, one longitudinal (e.g., mixed-effects model) and one survival (e.g., Cox model), which are connected by some common term. Naturally, sharing information makes the inferential process highly time-consuming. In particular, the Bayesian framework requires even more time for Markov chains to reach stationarity. Hence, in order to reduce the modelling complexity while maintaining the accuracy of the estimates, we propose a two-stage strategy that first fits the longitudinal submodel and then plugs the shared information into the survival submodel. Unlike a standard two-stage approach, we apply a correction by incorporating an individual and multiplicative fixed-effect with informative prior into the survival submodel. Based on simulation studies and sensitivity analyses, we empirically compare our proposal with joint specification and standard two-stage approaches. The results show that our methodology is very promising, since it reduces the estimation bias compared to the other two-stage method and requires less processing time than the joint specification approach. Full article

19 pages, 540 KiB  
Article
On Default Priors for Robust Bayesian Estimation with Divergences
by Tomoyuki Nakagawa and Shintaro Hashimoto
Entropy 2021, 23(1), 29; https://doi.org/10.3390/e23010029 - 27 Dec 2020
Viewed by 1652
Abstract
This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum γ-divergence estimator is well-known to work well in estimation against heavy contamination. The robust Bayesian methods by using quasi-posterior distributions based on divergences have been also proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior distribution based on the γ-divergence. In particular, we show that the proposed priors are approximately robust under the condition on the contamination distribution without assuming any conditions on the contamination ratio. Some simulation studies are also presented. Full article

18 pages, 584 KiB  
Article
Baseline Methods for Bayesian Inference in Gumbel Distribution
by Jacinto Martín, María Isabel Parra, Mario Martínez Pizarro and Eva L. Sanjuán
Entropy 2020, 22(11), 1267; https://doi.org/10.3390/e22111267 - 07 Nov 2020
Cited by 2 | Viewed by 2032
Abstract
Usual estimation methods for the parameters of extreme value distributions only employ a small part of the observation values. When block maxima values are considered, many data are discarded, and therefore a lot of information is wasted. We develop a model to seize the whole data available in an extreme value framework. The key is to take advantage of the existing relation between the baseline parameters and the parameters of the block maxima distribution. We propose two methods to perform Bayesian estimation. Baseline distribution method (BDM) consists in computing estimations for the baseline parameters with all the data, and then making a transformation to compute estimations for the block maxima parameters. Improved baseline method (IBDM) is a refinement of the initial idea, with the aim of assigning more importance to the block maxima data than to the baseline values, performed by applying BDM to develop an improved prior distribution. We compare empirically these new methods with the Standard Bayesian analysis with non-informative prior, considering three baseline distributions that lead to a Gumbel extreme distribution, namely Gumbel, Exponential and Normal, by a broad simulation study. Full article
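The relation between baseline and block-maxima parameters that BDM exploits can be illustrated for the exponential baseline case: block maxima of Exponential(rate) data are approximately Gumbel with location ln(m)/rate and scale 1/rate, where m is the block size. A hedged numerical check (a generic sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)
rate, block, n_blocks = 2.0, 500, 5000

# Block maxima of Exponential(rate) baseline data
data = rng.exponential(1.0 / rate, size=(n_blocks, block))
maxima = data.max(axis=1)

mu_theory = np.log(block) / rate   # approximate Gumbel location
beta_theory = 1.0 / rate           # approximate Gumbel scale
euler = 0.5772156649               # Euler-Mascheroni constant
# Gumbel(mu, beta) has mean mu + euler * beta
print(maxima.mean(), mu_theory + euler * beta_theory)
```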

34 pages, 451 KiB  
Article
Bayesian Inference for the Kumaraswamy Distribution under Generalized Progressive Hybrid Censoring
by Jiayi Tu and Wenhao Gui
Entropy 2020, 22(9), 1032; https://doi.org/10.3390/e22091032 - 15 Sep 2020
Cited by 7 | Viewed by 2115
Abstract
Incomplete data are unavoidable for survival analysis as well as life testing, so more and more researchers are beginning to study censoring data. This paper discusses and considers the estimation of unknown parameters featured by the Kumaraswamy distribution on the condition of generalized progressive hybrid censoring scheme. Estimation of reliability is also considered in this paper. To begin with, the maximum likelihood estimators are derived. In addition, Bayesian estimators under not only symmetric but also asymmetric loss functions, like general entropy, squared error as well as linex loss function, are also offered. Since the Bayesian estimates fail to be of explicit computation, Lindley approximation, as well as the Tierney and Kadane method, is employed to obtain the Bayesian estimates. A simulation research is conducted for the comparison of the effectiveness of the proposed estimators. A real-life example is employed for illustration. Full article
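The Kumaraswamy distribution the paper works with has the closed-form CDF F(x) = 1 - (1 - x^a)^b on (0, 1), which makes inverse-CDF sampling straightforward. A minimal sketch (parameter values are illustrative; this is not the paper's censoring scheme):

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def kumaraswamy_sample(a, b, size, rng):
    """Inverse-CDF draws from Kumaraswamy(a, b),
    whose CDF is F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

a, b = 2.0, 3.0
x = kumaraswamy_sample(a, b, 200_000, rng)

# Mean of Kumaraswamy(a, b) is b * B(1 + 1/a, b), with B the beta function
mean_theory = b * math.gamma(1 + 1 / a) * math.gamma(b) / math.gamma(1 + 1 / a + b)
print(x.mean(), mean_theory)
```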

9 pages, 320 KiB  
Article
A Dirichlet Process Prior Approach for Covariate Selection
by Stefano Cabras
Entropy 2020, 22(9), 948; https://doi.org/10.3390/e22090948 - 28 Aug 2020
Viewed by 1729
Abstract
The variable selection problem in general, and specifically for the ordinary linear regression model, is considered in the setup in which the number of covariates is large enough to prevent the exploration of all possible models. In this context, Gibbs-sampling is needed to perform stochastic model exploration to estimate, for instance, the model inclusion probability. We show that under a Bayesian non-parametric prior model for analyzing Gibbs-sampling output, the usual empirical estimator is just the asymptotic version of the expected posterior inclusion probability given the simulation output from Gibbs-sampling. Other posterior conditional estimators of inclusion probabilities can also be considered as related to the latent probabilities distributions on the model space which can be sampled given the observed Gibbs-sampling output. This paper will also compare, in this large model space setup the conventional prior approach against the non-local prior approach used to define the Bayes Factors for model selection. The approach is exposed along with simulation samples and also an application of modeling the Travel and Tourism factors all over the world. Full article
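The usual empirical estimator discussed in the abstract, the posterior inclusion probability as a mean of 0/1 indicator draws, can be sketched on synthetic output (indicators are simulated directly here rather than produced by an actual Gibbs sampler):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for Gibbs-sampling output: each row is one visited
# model, each column a 0/1 inclusion indicator for one covariate
gamma_draws = rng.binomial(1, p=[0.9, 0.5, 0.1], size=(10_000, 3))

# Empirical posterior inclusion probability: column means of the draws
incl_prob = gamma_draws.mean(axis=0)
print(incl_prob)  # close to the generating probabilities [0.9, 0.5, 0.1]
```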

14 pages, 887 KiB  
Article
Robust Bayesian Regression with Synthetic Posterior Distributions
by Shintaro Hashimoto and Shonosuke Sugasawa
Entropy 2020, 22(6), 661; https://doi.org/10.3390/e22060661 - 15 Jun 2020
Cited by 5 | Viewed by 3132
Abstract
Although linear regression models are fundamental tools in statistical science, the estimation results can be sensitive to outliers. While several robust methods have been proposed in frequentist frameworks, statistical inference is not necessarily straightforward. We here propose a Bayesian approach to robust inference on linear regression models using synthetic posterior distributions based on γ-divergence, which enables us to naturally assess the uncertainty of the estimation through the posterior distribution. We also consider the use of shrinkage priors for the regression coefficients to carry out robust Bayesian variable selection and estimation simultaneously. We develop an efficient posterior computation algorithm by adopting the Bayesian bootstrap within Gibbs sampling. The performance of the proposed method is illustrated through simulation studies and applications to famous datasets. Full article
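The Bayesian bootstrap step can be sketched generically: each posterior draw re-solves a weighted least-squares problem under Dirichlet(1, ..., 1) observation weights (a toy sketch with simulated data, not the authors' γ-divergence synthetic posterior):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy regression data: y = 2*x + noise
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x])

def bayesian_bootstrap_ols(X, y, n_draws, rng):
    """Each draw solves weighted least squares under
    Dirichlet(1, ..., 1) observation weights."""
    draws = []
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(y.size))
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        draws.append(beta)
    return np.array(draws)

post = bayesian_bootstrap_ols(X, y, 500, rng)
print(post.mean(axis=0))  # roughly [0, 2]
```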
