Journal Description
Physical Sciences Forum is an open access journal dedicated to publishing findings resulting from academic conferences, workshops, and similar events in the area of the physical sciences. Each conference proceeding is individually indexable, citable via a digital object identifier (DOI), and freely available under an open access license. The conference organizers and proceedings editors are responsible for managing the peer-review process and selecting papers for the conference proceedings.
Latest Articles
Analysis of Ecological Networks: Linear Inverse Modeling and Information Theory Tools
Phys. Sci. Forum 2023, 9(1), 24; https://doi.org/10.3390/psf2023009024 - 20 Feb 2024
Abstract
In marine ecology, the most studied interactions are trophic and are organized in networks called food webs. Trophic modeling is mainly based on weighted networks, where each weighted edge corresponds to a flow of organic matter between two trophic compartments, each containing individuals with similar feeding behaviors and metabolisms and with the same predators. To account for the unknown flow values within food webs, a class of methods called Linear Inverse Modeling was developed. The full set of linear constraints, equations and inequalities, defines a multidimensional bounded convex polyhedron, called a polytope, within which all realistic solutions to the problem lie. One way to describe this polytope is to compute a representative sample of solutions using the Markov Chain Monte Carlo approach. To extract a unique solution from the simulated sample, several goal (cost) functions, also called Ecological Network Analysis indices, have been introduced in the literature as criteria of fitness to the ecosystems. These tools are all related to information theory. Here we introduce new functions that potentially provide a better fit of the estimated model to the ecosystem.
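The polytope sampling step described in this abstract can be illustrated with a short sketch. Below is our own minimal hit-and-run sampler for a polytope {x : Ax ≤ b}, a standard MCMC scheme for convex bodies; it is illustrative only, not the authors' implementation, and the name `hit_and_run` is ours.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng):
    """Sample (asymptotically) uniformly from the polytope {x : A x <= b}
    using the hit-and-run Markov chain, starting from an interior point x0."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                      # uniform random direction
        # Feasible step sizes t satisfy t * (A d) <= b - A x
        ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[ad > 1e-12] / ad[ad > 1e-12], initial=np.inf)
        t_lo = np.max(slack[ad < -1e-12] / ad[ad < -1e-12], initial=-np.inf)
        x = x + rng.uniform(t_lo, t_hi) * d         # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)
```

In a food-web application the rows of A and b would encode the bounds on the flows; equality (mass-balance) constraints would first be eliminated by parameterizing their null space, leaving only inequalities.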
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Development of a Clock Generation and Time Distribution System for Hyper-Kamiokande
by
Lucile Mellet, Mathieu Guigue, Boris Popov, Stefano Russo and Vincent Voisin
Phys. Sci. Forum 2023, 8(1), 72; https://doi.org/10.3390/psf2023008072 - 18 Jan 2024
Abstract
The construction of the next-generation water Cherenkov detector Hyper-Kamiokande (HK) has started. It will have a fiducial volume about ten times larger than that of the existing Super-Kamiokande detector, as well as improved detection performance. Data collection is planned from 2027 onwards. Time stability is crucial, as detecting physics events relies on reconstructing Cherenkov rings based on the coincidence between the photomultipliers. This requires a distributed clock jitter at each endpoint smaller than 100 ps. In addition, since this detector will mainly be used to detect neutrinos produced by the J-PARC accelerator in Tokai, each event needs to be time-tagged with a precision better than 100 ns with respect to UTC, in order to be associated with a proton spill from J-PARC or with events observed in other detectors for multi-messenger astronomy. The HK collaboration is in an R&D phase, and several groups are working in parallel on the electronics system. This proceeding presents the studies performed at LPNHE (Paris) on a novel design for the time synchronization system in Kamioka, compared with the previous KamiokaNDE series of experiments. We discuss the clock generation, including the connection scheme between the GNSS receiver (Septentrio) and the atomic clock (free-running rubidium), the precise calibration of the atomic clock and algorithms to account for errors in satellite orbits, the redundancy of the system, and a two-stage distribution system that sends the clock and various timing-sensitive information to each front-end electronics module using a custom protocol.
Full article
Open Access Proceeding Paper
Preconditioned Monte Carlo for Gradient-Free Bayesian Inference in the Physical Sciences
by
Minas Karamanis and Uroš Seljak
Phys. Sci. Forum 2023, 9(1), 23; https://doi.org/10.3390/psf2023009023 - 09 Jan 2024
Cited by 1
Abstract
We present preconditioned Monte Carlo (PMC), a novel Monte Carlo method for Bayesian inference in complex probability distributions. PMC incorporates a normalizing flow (NF) and an adaptive Sequential Monte Carlo (SMC) scheme, along with a novel past resampling scheme that boosts the number of propagated particles without extra computational cost. Additionally, we utilize preconditioned Crank–Nicolson updates, enabling PMC to scale to higher dimensions without requiring the gradient of the target distribution. The efficacy of PMC in producing samples, estimating model evidence, and executing robust inference is showcased through two challenging case studies, highlighting its superior performance compared to conventional methods.
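The preconditioned Crank–Nicolson update mentioned above has a compact generic form. The sketch below is our own (assuming a standard-Gaussian prior and a user-supplied `log_like`), not the PMC implementation; it shows why the scheme is gradient-free: the accept/reject ratio involves only likelihood values.

```python
import numpy as np

def pcn_step(x, log_like, beta, rng):
    """One preconditioned Crank-Nicolson update targeting
    pi(x) ∝ exp(log_like(x)) * N(x; 0, I).  The Gaussian prior part is
    handled exactly by the proposal, so the acceptance ratio involves only
    the likelihood: no gradients, and the step stays well defined as the
    dimension grows."""
    prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(x.size)
    if np.log(rng.uniform()) < log_like(prop) - log_like(x):
        return prop, True
    return x, False
```

With `log_like` identically zero the chain simply samples the N(0, I) prior, which makes the update easy to sanity-check.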
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Nested Sampling—The Idea
by
John Skilling
Phys. Sci. Forum 2023, 9(1), 22; https://doi.org/10.3390/psf2023009022 - 08 Jan 2024
Abstract
We seek to add up a quantity Q over unit volume in arbitrary dimension. Nested sampling locates the bulk of Q by geometrical compression, using a Monte Carlo ensemble constrained within a progressively more restrictive lower limit on the integrand. This domain is divided into a core and a shell, with the core kept adequately populated.
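The compression idea can be made concrete with a toy run in which constrained prior draws are available in closed form. This is a sketch under our own assumptions (uniform prior on [0, 1], likelihood exp(-x/tau)), not Skilling's implementation:

```python
import numpy as np

def nested_sampling(n_live, n_iter, tau, rng):
    """Toy nested sampling: prior U(0,1), likelihood L(x) = exp(-x/tau).
    The enclosed prior volume shrinks geometrically, X_k ≈ exp(-k/n_live),
    and the evidence accumulates as Z ≈ Σ L_k ΔX_k."""
    live = rng.uniform(0.0, 1.0, n_live)
    logZ_terms, X_prev = [], 1.0
    for k in range(1, n_iter + 1):
        worst = np.argmax(live)                 # largest x has the smallest L
        L_worst = np.exp(-live[worst] / tau)
        X_k = np.exp(-k / n_live)               # geometric compression
        logZ_terms.append(np.log(L_worst) + np.log(X_prev - X_k))
        # Replace the worst point by a prior draw inside the constraint L > L_worst
        live[worst] = rng.uniform(0.0, live[worst])
        X_prev = X_k
    return np.logaddexp.reduce(logZ_terms)
```

For this likelihood the true evidence is tau * (1 - exp(-1/tau)), so the run can be checked against the exact answer.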
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Flow Annealed Kalman Inversion for Gradient-Free Inference in Bayesian Inverse Problems
by
Richard D. P. Grumitt, Minas Karamanis and Uroš Seljak
Phys. Sci. Forum 2023, 9(1), 21; https://doi.org/10.3390/psf2023009021 - 04 Jan 2024
Abstract
For many scientific inverse problems, we are required to evaluate an expensive forward model. Moreover, the model is often given in such a form that it is unrealistic to access its gradients. In such a scenario, standard Markov Chain Monte Carlo algorithms quickly become impractical, requiring a large number of serial model evaluations to converge on the target distribution. In this paper, we introduce Flow Annealed Kalman Inversion (FAKI). This is a generalization of Ensemble Kalman Inversion (EKI) in which we embed the Kalman filter updates in a temperature annealing scheme and use normalizing flows (NFs) to map the intermediate measures corresponding to each temperature level to the standard Gaussian. Thus, we relax the Gaussian ansatz for the intermediate measures used in standard EKI, allowing us to achieve higher-fidelity approximations to non-Gaussian targets. We demonstrate the performance of FAKI on two numerical benchmarks, showing dramatic improvements over standard EKI in terms of accuracy whilst retaining its already rapid convergence, typically within a small number of temperature levels.
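The Ensemble Kalman Inversion update that FAKI generalizes uses only forward-model evaluations. The following generic sketch of one such update is ours (the name `eki_update` is an assumption), not the paper's code:

```python
import numpy as np

def eki_update(U, G_of_U, y, Gamma, rng):
    """One Ensemble Kalman Inversion step.  U: (J, d) parameter ensemble,
    G_of_U: (J, m) forward-model evaluations, y: (m,) data, Gamma: (m, m)
    noise covariance.  Gradient-free: only forward evaluations are used."""
    du = U - U.mean(axis=0)
    dg = G_of_U - G_of_U.mean(axis=0)
    J = U.shape[0]
    C_ug = du.T @ dg / (J - 1)                 # cross-covariance, shape (d, m)
    C_gg = dg.T @ dg / (J - 1)                 # output covariance, shape (m, m)
    noise = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    K = C_ug @ np.linalg.inv(C_gg + Gamma)     # Kalman gain
    return U + (y + noise - G_of_U) @ K.T
```

The update is exact for linear-Gaussian problems; FAKI's normalizing-flow preconditioning addresses the non-Gaussian intermediate measures where this Gaussian ansatz degrades.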
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Knowledge-Based Image Analysis: Bayesian Evidences Enable the Comparison of Different Image Segmentation Pipelines
by
Mats Leif Moskopp, Andreas Deussen and Peter Dieterich
Phys. Sci. Forum 2023, 9(1), 20; https://doi.org/10.3390/psf2023009020 - 04 Jan 2024
Abstract
The analysis and evaluation of microscopic image data is essential in the life sciences. Increasing temporal and spatial digital image resolution and growing data set sizes make automated image analysis a necessity. Previously, our group proposed a Bayesian formalism that converts the experimenter's knowledge, in the form of a manually segmented image, into machine-readable probability distributions over the parameters of an image segmentation pipeline. This approach preserved the level of detail provided by expert knowledge and interobserver variability and has proven robust to a variety of recording qualities and imaging artifacts. In the present work, Bayesian evidences were used to compare different image processing pipelines. As an illustrative example, a microscopic phase contrast image of a wound healing assay and its manual segmentation by the experimenter (ground truth) are used. Six different variations of image segmentation pipelines are introduced. The aim was to find the image segmentation pipeline best suited to automatically segmenting the input image, given the expert knowledge, while respecting Occam's razor to avoid unnecessary complexity and computation. While none of the introduced image segmentation pipelines fails completely, we illustrate that assessing the quality of an image segmentation with the naked eye is not feasible. Bayesian evidence (and the intrinsically estimated uncertainty of the image segmentation) is used to choose the best image processing pipeline for the given image. This work is a proof of principle and is extendable to a diverse range of image segmentation problems.
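Evidence-based comparison of competing pipelines, with its built-in Occam penalty, can be demonstrated in the linear-Gaussian setting where the marginal likelihood is analytic. This toy example is ours, not the paper's segmentation pipelines:

```python
import numpy as np

def log_evidence_linear(X, y, alpha, sigma2):
    """Log marginal likelihood (evidence) of Bayesian linear regression with
    prior w ~ N(0, alpha*I) and Gaussian noise of variance sigma2; marginally
    y ~ N(0, sigma2*I + alpha*X X^T).  Occam's razor is automatic: extra
    features enlarge the determinant term and lower the evidence unless they
    genuinely improve the fit."""
    C = sigma2 * np.eye(len(y)) + alpha * X @ X.T
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return -0.5 * (len(y) * np.log(2.0 * np.pi) + logdet + quad)
```

Comparing these evidences across feature sets selects the model that explains the data with the least complexity, the same logic the paper applies to segmentation pipelines.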
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Inferring Evidence from Nested Sampling Data via Information Field Theory
by
Margret Westerkamp, Jakob Roth, Philipp Frank, Will Handley and Torsten Enßlin
Phys. Sci. Forum 2023, 9(1), 19; https://doi.org/10.3390/psf2023009019 - 13 Dec 2023
Abstract
Nested sampling provides an estimate of the evidence of a Bayesian inference problem via probing the likelihood as a function of the enclosed prior volume. However, the lack of precise values of the enclosed prior mass of the samples introduces probing noise, which can hamper high-accuracy determinations of the evidence values as estimated from the likelihood-prior-volume function. We introduce an approach based on information field theory, a framework for non-parametric function reconstruction from data, that infers the likelihood-prior-volume function by exploiting its smoothness and thereby aims to improve the evidence calculation. Our method provides posterior samples of the likelihood-prior-volume function that translate into a quantification of the remaining sampling noise for the evidence estimate, or for any other quantity derived from the likelihood-prior-volume function.
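Once a likelihood-prior-volume curve is in hand, the evidence is the one-dimensional integral Z = ∫₀¹ L(X) dX. A plain trapezoid-rule baseline (our sketch, not the information-field-theory reconstruction proposed in the paper) looks like this:

```python
import numpy as np

def evidence_from_curve(log_L, log_X):
    """Evidence Z = ∫_0^1 L(X) dX from a tabulated likelihood-prior-volume
    curve.  Nested-sampling output lists shrinking volumes, so the points
    are sorted to decreasing X and then integrated with increasing X."""
    L, X = np.exp(log_L), np.exp(log_X)
    order = np.argsort(X)[::-1]                # largest volume first
    L, X = L[order], X[order]
    return np.trapz(L[::-1], X[::-1])          # trapezoid rule, increasing X
```

The paper's point is that the X values themselves are noisy; a smooth posterior over the curve would propagate that noise into an uncertainty on Z, which this deterministic quadrature ignores.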
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
A BRAIN Study to Tackle Image Analysis with Artificial Intelligence in the ALMA 2030 Era
by
Fabrizia Guglielmetti, Michele Delli Veneri, Ivano Baronchelli, Carmen Blanco, Andrea Dosi, Torsten Enßlin, Vishal Johnson, Giuseppe Longo, Jakob Roth, Felix Stoehr, Łukasz Tychoniec and Eric Villard
Phys. Sci. Forum 2023, 9(1), 18; https://doi.org/10.3390/psf2023009018 - 13 Dec 2023
Abstract
An ESO internal ALMA development study, BRAIN, is addressing the ill-posed inverse problem of synthesis image analysis by employing astrostatistics and astroinformatics. These emerging fields of research offer interdisciplinary approaches at the intersection of observational astronomy, statistics, algorithm development, and data science. In this study, we provide evidence of the benefits of employing these approaches in ALMA imaging for operational and scientific purposes. We show the potential of two techniques, RESOLVE and DeepFocus, applied to ALMA-calibrated science data. Significant advantages arise from the prospect of improving the quality and completeness of the data products stored in the science archive and reducing the overall processing time for operations. Both approaches indicate a logical pathway toward handling the incoming revolution in data rates dictated by the planned electronics upgrades. Moreover, we bring the community additional products through a new package, ALMASim, a refined ALMA simulator usable by a large community for training and testing new algorithms.
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Snowballing Nested Sampling
by
Johannes Buchner
Phys. Sci. Forum 2023, 9(1), 17; https://doi.org/10.3390/psf2023009017 - 06 Dec 2023
Abstract
A new way to run nested sampling, combined with realistic MCMC proposals to generate new live points, is presented. Nested sampling is run with a fixed number of MCMC steps. Subsequently, snowballing nested sampling extends the run to more and more live points. This stabilizes later MCMC proposals and leads to desirable properties: the number of live points and the number of MCMC steps do not have to be calibrated in advance, and the evidence and posterior approximations improve as more compute is added and can be monitored with convergence diagnostics from the MCMC community. Snowballing nested sampling converges to a “perfect” nested sampling run as the number of MCMC steps goes to infinity.
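One widely used diagnostic of the kind the abstract alludes to is the Gelman-Rubin R-hat, which compares between-chain and within-chain variance. This is a minimal generic sketch, not part of the snowballing method itself:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for chains of shape (n_chains, n_draws).
    Values near 1 mean the between-chain variance is consistent with the
    within-chain variance, i.e. the independent runs agree."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)
```

Applied to repeated nested-sampling runs (or snowballing extensions), an R-hat far above 1 flags that more live points or MCMC steps are needed.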
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Quantum Measurement and Objective Classical Reality
by
Vishal Johnson, Philipp Frank and Torsten Enßlin
Phys. Sci. Forum 2023, 9(1), 16; https://doi.org/10.3390/psf2023009016 - 06 Dec 2023
Abstract
We explore quantum measurement in the context of Everettian unitary quantum mechanics and construct an explicit unitary measurement procedure. We propose the existence of prior correlated states that enable this procedure to work and therefore argue that correlation is a resource that is consumed when measurements take place. It is also argued that a network of such measurements establishes a stable objective classical reality.
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Three-Dimensional Visualization of Astronomy Data Using Virtual Reality
by
Gilles Ferrand
Phys. Sci. Forum 2023, 8(1), 71; https://doi.org/10.3390/psf2023008071 - 05 Dec 2023
Abstract
Visualization is an essential part of research, both to explore one’s data and to communicate one’s findings with others. Many data products in astronomy come in the form of multi-dimensional cubes, and since our brains are tuned for recognition in a 3D world, we ought to display and manipulate these in 3D space. This is possible with virtual reality (VR) devices. Drawing from our experiments developing immersive and interactive 3D experiences from actual science data at the Astrophysical Big Bang Laboratory (ABBL), this paper gives an overview of the opportunities and challenges that are awaiting astrophysicists in the burgeoning VR space. It covers both software and hardware matters, as well as practical aspects for successful delivery to the public.
Full article
Open Access Proceeding Paper
Searches for Dark Matter in the Galactic Halo and Extragalactic Sources with IceCube
by
Minjin Jeong
Phys. Sci. Forum 2023, 8(1), 70; https://doi.org/10.3390/psf2023008070 - 05 Dec 2023
Abstract
Although there is overwhelming evidence for the existence of dark matter, its nature remains largely unknown. Neutrino telescopes are powerful tools for indirect dark matter searches, through the detection of neutrinos produced in dark matter decay or annihilation processes. The IceCube Neutrino Observatory is a cubic-kilometer-scale neutrino telescope located under 1.5 km of ice near the Amundsen-Scott South Pole Station. Various dark matter searches have been performed with IceCube over the last decade, providing strong constraints on dark matter models. In this contribution, we present the latest results from IceCube as well as ongoing analyses using IceCube data, focusing on searches targeting the Galactic Halo, nearby galaxies, and galaxy clusters.
Full article
Open Access Proceeding Paper
Physics-Consistency Condition for Infinite Neural Networks and Experimental Characterization
by
Sascha Ranftl and Shaoheng Guan
Phys. Sci. Forum 2023, 9(1), 15; https://doi.org/10.3390/psf2023009015 - 04 Dec 2023
Abstract
It has previously been shown that prior physics knowledge can be incorporated into the structure of an artificial neural network via neural activation functions based on (i) the correspondence under the infinite-width limit between neural networks and Gaussian processes if the central limit theorem holds and (ii) the construction of physics-consistent Gaussian process kernels, i.e., specialized covariance functions that ensure that the Gaussian process fulfills a priori some linear (differential) equation. Such regression models can be useful in many-query problems, e.g., inverse problems, uncertainty quantification or optimization, when a single forward solution or likelihood evaluation is costly. Based on a small set of training data, the learned model or “surrogate” can then be used as a fast approximator. The bottleneck is then for the surrogate to also learn efficiently and effectively from small data sets while at the same time ensuring physically consistent predictions. Based on this, we will further explore the properties of so-constructed neural networks. In particular, we will characterize (i) generalization behavior and (ii) the approximation quality or Gaussianity as a function of network width and discuss (iii) extensions from shallow to deep NNs.
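The infinite-width correspondence in point (i) can be checked empirically: for a one-hidden-layer ReLU network with i.i.d. N(0,1) weights, the limiting GP covariance is the first-order arc-cosine kernel of Cho and Saul. The sketch below is our own generic check, not the physics-consistent kernels of the paper:

```python
import numpy as np

def relu_nngp_kernel(x, xp):
    """Arc-cosine kernel: infinite-width covariance of a one-hidden-layer
    ReLU network f(x) = v . relu(W x) / sqrt(width) with N(0,1) weights."""
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    theta = np.arccos(np.clip(x @ xp / (nx * nxp), -1.0, 1.0))
    return nx * nxp / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def random_net_outputs(X, width, n_nets, rng):
    """Outputs of n_nets independent random ReLU networks at the rows of X."""
    W = rng.standard_normal((n_nets, width, X.shape[1]))
    V = rng.standard_normal((n_nets, width))
    H = np.maximum(W @ X.T, 0.0)               # hidden activations (n, w, p)
    return np.einsum('nw,nwp->np', V, H) / np.sqrt(width)
```

The empirical covariance across many random wide networks should approach the analytic kernel, which is the Gaussianity/approximation-quality question the paper characterizes as a function of width.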
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Bayesian Inference and Deep Learning for Inverse Problems
by
Ali Mohammad-Djafari, Ning Chu, Li Wang and Liang Yu
Phys. Sci. Forum 2023, 9(1), 14; https://doi.org/10.3390/psf2023009014 - 01 Dec 2023
Abstract
Inverse problems arise wherever we have an indirect measurement. In general, they are ill-posed, and obtaining satisfactory solutions requires prior knowledge. Classically, different regularization methods and Bayesian inference-based methods have been proposed. As these methods need a great number of forward and backward computations, they become computationally costly, particularly when the forward or generative models are complex and the evaluation of the likelihood becomes very expensive. Using deep neural network surrogate models and approximate computation can then be very helpful. However, to account for the uncertainties, we first need to understand Bayesian deep learning before we can use it for inverse problems. In this work, we focus on neural networks (NNs), deep learning (DL) and, more specifically, Bayesian DL particularly adapted to inverse problems. We first give details of Bayesian DL approximate computation with exponential families; then, we see how we can use these tools for inverse problems. We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods.
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Proximal Nested Sampling with Data-Driven Priors for Physical Scientists
by
Jason D. McEwen, Tobías I. Liaudat, Matthew A. Price, Xiaohao Cai and Marcelo Pereyra
Phys. Sci. Forum 2023, 9(1), 13; https://doi.org/10.3390/psf2023009013 - 01 Dec 2023
Abstract
Proximal nested sampling was introduced recently to open up Bayesian model selection for high-dimensional problems such as computational imaging. The framework is suitable for models with a log-convex likelihood, which are ubiquitous in the imaging sciences. The purpose of this article is two-fold. First, we review proximal nested sampling in a pedagogical manner in an attempt to elucidate the framework for physical scientists. Second, we show how proximal nested sampling can be extended in an empirical Bayes setting to support data-driven priors, such as deep neural networks learned from training data.
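Proximal nested sampling replaces gradient evaluations with proximal-operator evaluations, which remain available for non-smooth terms. As a standard illustration (not the data-driven priors of the article), the prox of the sparsity-promoting l1 norm is soft-thresholding:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||x||_1, i.e. the soft-thresholding map
    argmin_u  lam*||u||_1 + ||u - x||^2 / 2.  Non-smooth log-priors like
    this are usable because only prox evaluations, never gradients,
    are required."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Entries within lam of zero are clipped to exactly zero, which is what makes l1 priors sparsity-promoting in imaging problems.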
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Variational Bayesian Approximation (VBA) with Exponential Families and Covariance Estimation
by
Seyedeh Azadeh Fallah Mortezanejad and Ali Mohammad-Djafari
Phys. Sci. Forum 2023, 9(1), 12; https://doi.org/10.3390/psf2023009012 - 30 Nov 2023
Abstract
Variational Bayesian Approximation (VBA) is a fast technique for approximate Bayesian computation. The main idea is to approximate the joint posterior distribution of all the unknown variables with a simple expression. Mean-Field Variational Bayesian Approximation (MFVBA) is a particular case developed for large-scale problems, where the approximating probability law is separable in all variables. A well-known drawback of MFVBA is that it tends to underestimate the variances of the variables, even though it estimates the means well, which can lead to poor inference results. A fixed-point algorithm can be obtained to evaluate the means in exponential families for the approximating distribution, but this does not solve the problem of underestimated variances. In this paper, we propose a modified VBA method with exponential families that first estimates the posterior mean and then improves the estimation of the posterior covariance. We demonstrate the performance of the procedure with an example.
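The variance underestimation of mean-field approximations has a textbook closed form for a correlated Gaussian target: the factorized VB solution recovers the exact means but returns variances equal to the reciprocal of the precision diagonal. A minimal check (illustrative, not the paper's corrected estimator):

```python
import numpy as np

# For a Gaussian target N(mu, Sigma), mean-field VB with a fully factorized
# q(x) = q(x1) q(x2) ... recovers the exact means but gives each coordinate
# the variance 1 / Lambda_ii, where Lambda = inv(Sigma), which is smaller
# than the true marginal variance Sigma_ii whenever correlations exist.
def mean_field_gaussian_variances(Sigma):
    Lambda = np.linalg.inv(Sigma)      # precision matrix of the target
    return 1.0 / np.diag(Lambda)       # MFVB variances (means stay exact)

rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])
vb_var = mean_field_gaussian_variances(Sigma)   # each equals 1 - rho**2
```

With rho = 0.9 the true marginal variance is 1 but MFVB reports 0.19, exactly the pathology the paper's covariance-correction step targets.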
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Quantification of Endothelial Cell Migration Dynamics Using Bayesian Data Analysis
by
Anselm Hohlstamm, Andreas Deussen, Stephan Speier and Peter Dieterich
Phys. Sci. Forum 2023, 9(1), 11; https://doi.org/10.3390/psf2023009011 - 30 Nov 2023
Abstract
Endothelial cells form a tight and adaptive inner cell layer in blood vessels. In doing so, the cells develop complex dynamics by integrating active individual and collective cell migration, cell-cell interactions, and interactions with external stimuli. The aim of this study is to quantify and model these underlying dynamics. To this end, we seeded and stained human umbilical vein endothelial cells (HUVECs) and recorded their positions every 10 min for 48 h via live-cell imaging. After image segmentation and tracking of several tens of thousands of cells, we applied Bayesian data analysis to fit models to the experimentally obtained cell trajectories. By analyzing the mean squared velocities, we found a dependence on the local cell density. Based on this connection, we developed a model that approximates the time-dependent frequency of cell divisions. Furthermore, we determined two different phases of velocity deceleration, which are influenced by the emergence of correlated cell movements and time-dependent aging in this non-stationary system. By integrating the findings of correlation functions, we aim to develop a comprehensive model that improves the understanding of endothelial cell migration in the future.
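Summary statistics such as mean squared displacements follow directly from tracked positions. This is a generic time-averaged MSD sketch of ours; the paper's analysis is Bayesian model fitting on trajectories, not this particular estimator:

```python
import numpy as np

def mean_squared_displacement(tracks, max_lag):
    """Time- and ensemble-averaged MSD.  tracks: (n_cells, n_frames, 2)
    positions.  For a free 2D random walk with unit-variance steps,
    MSD(lag) grows linearly as 2 * lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = tracks[:, lag:, :] - tracks[:, :-lag, :]   # lagged displacements
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
    return msd
```

Deviations from the linear diffusive law at short or long lags are the kind of signature that density dependence, correlated motion, or aging would leave in such curves.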
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Learned Harmonic Mean Estimation of the Marginal Likelihood with Normalizing Flows
by
Alicja Polanska, Matthew A. Price, Alessio Spurio Mancini and Jason D. McEwen
Phys. Sci. Forum 2023, 9(1), 10; https://doi.org/10.3390/psf2023009010 - 29 Nov 2023
Cited by 1
Abstract
Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding variance problem of the original harmonic mean estimation of the marginal likelihood. The learned harmonic mean estimator learns an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding variance problem. In previous work, a bespoke optimization problem is introduced when training models in order to ensure this property is satisfied. In the current article, we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. Then, the probability density of the flow is concentrated by lowering the variance of the base distribution, i.e., by lowering its “temperature”, ensuring that its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimization problem and careful fine tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high dimensional settings. We present preliminary experiments demonstrating the effectiveness of the use of flows for the learned harmonic mean estimator. The harmonic code implementing the learned harmonic mean, which is publicly available, has been updated to now support normalizing flows.
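The re-targeted harmonic mean identity behind the estimator can be demonstrated in one dimension, where the evidence is analytic. In this sketch (ours, with a narrowed Gaussian "cooled posterior" standing in for the concentrated normalizing flow):

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log-density of N(mu, var), vectorized."""
    return -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var))

def learned_harmonic_mean(theta_post, log_target, log_phi):
    """Re-targeted harmonic mean estimate of log Z.  Uses the identity
    1/Z = E_post[ phi(theta) / (L(theta) pi(theta)) ], valid for any
    normalized phi; the variance stays finite when phi's mass is
    contained within the posterior (here: a 'cooled' approximation)."""
    log_ratios = log_phi(theta_post) - log_target(theta_post)
    return -(np.logaddexp.reduce(log_ratios) - np.log(theta_post.size))
```

Choosing phi broader than the posterior reintroduces the exploding-variance problem; concentrating it (lowering the flow's base-distribution temperature, in the paper's terms) keeps the ratios bounded.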
Full article
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
ESS Neutrino Super Beam ESSνSB Design and Performance for Precision Measurements of the Leptonic CP Violating Phase δCP
by
Tord Ekelöf
Phys. Sci. Forum 2023, 8(1), 69; https://doi.org/10.3390/psf2023008069 - 28 Nov 2023
Abstract
A design study, ESSνSB, was carried out during the years 2018–2021 concerning how the 5 MW linear proton accelerator of the European Spallation Source, currently under construction in Lund, Sweden, can be used to generate a world-unique, intense neutrino Super Beam for precision measurements of the leptonic CP-violating phase δCP. There are definite limits, related to uncertainties in neutrino–nucleus interaction modeling, to how far the systematic errors in such measurements can be reduced. The method chosen in this project is therefore to make the measurements at the second oscillation maximum, where the CP violation signal is close to three times larger than at the first, whereas the systematic errors are approximately the same at the two maxima. As the second maximum is located three times further from the neutrino source than the first, a higher neutrino beam intensity, and thus a higher proton driver power, is required when measuring there. The uniquely high power of the ESS proton linac will allow the measurements to be made at the second maximum and thereby enable the most precise measurement of the leptonic CP-violating phase δCP. This paper describes the results of the work done on the conceptual design of the ESSνSB layout, infrastructure, and components, as well as the evaluation of the physics performance for leptonic CP violation discovery and, in particular, the precision of the measurement of δCP.
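The factor-of-three relation between the two maxima follows directly from the two-flavour oscillation phase, 1.267 Δm²[eV²] L[km]/E[GeV], which reaches its n-th maximum at (2n − 1)π/2. A short illustrative calculation (the mean neutrino energy and Δm²₃₁ value below are assumed for illustration, not taken from the paper):

```python
import numpy as np

def baseline_at_maximum(n, E_GeV, dm2_eV2=2.5e-3):
    """Baseline L (km) of the n-th oscillation maximum for the two-flavour
    phase 1.267 * dm2[eV^2] * L[km] / E[GeV] = (2n - 1) * pi / 2."""
    return (2 * n - 1) * np.pi / 2 * E_GeV / (1.267 * dm2_eV2)

E = 0.36  # GeV, roughly the ESSnuSB mean neutrino energy (assumed here)
L1 = baseline_at_maximum(1, E)  # first oscillation maximum
L2 = baseline_at_maximum(2, E)  # second oscillation maximum
print(f"first maximum:  {L1:.0f} km")
print(f"second maximum: {L2:.0f} km (ratio {L2 / L1:.1f})")
```

Since the phase at the second maximum is 3π/2 versus π/2 at the first, the ratio L2/L1 = 3 holds exactly for fixed energy, independently of the assumed Δm².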
Open Access Proceeding Paper
A Bayesian Data Analysis Method for an Experiment to Measure the Gravitational Acceleration of Antihydrogen
by
Danielle Hodgkinson, Joel Fajans and Jonathan S. Wurtele
Phys. Sci. Forum 2023, 9(1), 9; https://doi.org/10.3390/psf2023009009 - 28 Nov 2023
Abstract
The ALPHA-g experiment at CERN intends to observe the effect of gravity on antihydrogen. In ALPHA-g, antihydrogen is confined to a magnetic trap whose axis is aligned parallel to the Earth’s gravitational field. An imposed difference in the magnetic field of the confining coils above and below the trapping region, known as a bias, can be delicately adjusted to compensate for the gravitational potential experienced by the trapped anti-atoms. With the bias maintained, the magnetic fields of the coils can be ramped down slowly compared to the anti-atom motion; this releases the antihydrogen and leads to annihilations on the walls of the apparatus, which are detected by a position-sensitive detector. If the bias cancels out the gravitational potential, antihydrogen will escape the trap upwards or downwards with equal probability. Determining the downward (or upward) escape probability, p, from observed annihilations is non-trivial because the annihilation detection efficiency may be up–down asymmetric; some small fraction of antihydrogen escaping downwards may be detected in the upper region (and vice versa), and the precise number of trapped antihydrogen atoms is unknown. In addition, cosmic rays passing through the apparatus lead to a background annihilation rate, which may also be up–down asymmetric. We present a Bayesian method to determine p by assuming that annihilations detected in the upper and lower regions are independently Poisson distributed, with the Poisson means expressed in terms of experimental quantities. We solve for the posterior on p using the Markov chain Monte Carlo package Stan. Further, we present a method to determine the gravitational acceleration of antihydrogen by modifying the analysis described above to include simulation results. In the modified analysis, p is replaced by the simulated probability of downward escape, which is a function of the gravitational acceleration.
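The counting model can be illustrated with a deliberately simplified version of the analysis: fix the number of released atoms, the efficiencies, and the backgrounds (all numbers below are invented; the paper marginalises such quantities with Stan), and compute a grid posterior for p under a flat prior:

```python
import numpy as np
from scipy.stats import poisson

# Invented numbers, for illustration only.
N = 100                      # released anti-atoms (treated as known here)
eff_dn, eff_up = 0.8, 0.7    # detection efficiencies for the two regions
bkg_dn, bkg_up = 5.0, 8.0    # expected cosmic-ray background counts
n_dn, n_up = 52, 38          # "observed" annihilation counts

# Poisson means as functions of the downward escape probability p.
p_grid = np.linspace(0.0, 1.0, 1001)
mu_dn = N * eff_dn * p_grid + bkg_dn
mu_up = N * eff_up * (1.0 - p_grid) + bkg_up

# Flat prior on p; posterior proportional to the product of the two
# independent Poisson likelihoods.
log_post = poisson.logpmf(n_dn, mu_dn) + poisson.logpmf(n_up, mu_up)
post = np.exp(log_post - log_post.max())
dp = p_grid[1] - p_grid[0]
post /= post.sum() * dp      # normalize on the grid

p_mean = (p_grid * post).sum() * dp
print(f"posterior mean p = {p_mean:.3f}")
```

A value of p consistent with 0.5 would indicate that the bias exactly compensates gravity; in the full analysis, p is replaced by a simulated escape probability so the same machinery constrains the gravitational acceleration itself.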
(This article belongs to the Proceedings of The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)