Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

21 pages, 544 KiB  
Article
PT Symmetry, Non-Gaussian Path Integrals, and the Quantum Black–Scholes Equation
by Will Hicks
Entropy 2019, 21(2), 105; https://doi.org/10.3390/e21020105 - 23 Jan 2019
Cited by 6 | Viewed by 3898
Abstract
The Accardi–Boukas quantum Black–Scholes framework provides a means by which one can apply the Hudson–Parthasarathy quantum stochastic calculus to problems in finance. Solutions to these equations can be modelled using nonlocal diffusion processes, via a Kramers–Moyal expansion, and this provides useful tools to understand their behaviour. In this paper we develop further links between quantum stochastic processes and nonlocal diffusions by inverting the question, and showing how certain nonlocal diffusions can be written as quantum stochastic processes. We then go on to show how one can use path integral formalism, and PT symmetric quantum mechanics, to build a non-Gaussian kernel function for the Accardi–Boukas quantum Black–Scholes equation. Behaviours observed in the real market are a natural model output, rather than something that must be deliberately included. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)

20 pages, 11617 KiB  
Article
Quantifying Data Dependencies with Rényi Mutual Information and Minimum Spanning Trees
by Anne Eggels and Daan Crommelin
Entropy 2019, 21(2), 100; https://doi.org/10.3390/e21020100 - 22 Jan 2019
Cited by 3 | Viewed by 3279
Abstract
In this study, we present a novel method for quantifying dependencies in multivariate datasets, based on estimating the Rényi mutual information by minimum spanning trees (MSTs). The extent to which random variables are dependent is an important question, e.g., for uncertainty quantification and sensitivity analysis. The latter is closely related to the question of how strongly the output of, e.g., a computer simulation depends on the individual random input variables. To estimate the Rényi mutual information from data, we use a method due to Hero et al. that relies on computing minimum spanning trees (MSTs) of the data and uses the length of the MST in an estimator for the entropy. To reduce the computational cost of constructing the exact MST for large datasets, we explore methods to compute approximations to the exact MST, and find the multilevel approach introduced recently by Zhong et al. (2015) to be the most accurate. Because the MST computation does not require knowledge (or estimation) of the distributions, our methodology is well-suited for situations where only data are available. Furthermore, we show that, in the case where only the ranking of several dependencies is required rather than their exact values, it is not necessary to compute the Rényi divergence, but only an estimator derived from it. The main contributions of this paper are the introduction of this quantifier of dependency, as well as the novel combination of using approximate methods for MSTs with estimating the Rényi mutual information via MSTs. We applied our proposed method to an artificial test case based on the Ishigami function, as well as to a real-world test case involving an El Niño dataset. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
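
The MST-based entropy estimator at the heart of this method is compact enough to sketch. The following minimal illustration of the Hero et al. idea weights each edge by the pairwise distance raised to the power d(1−α) and reads the entropy off the total length of the exact MST; the choice α = 0.5 and the omission of the additive bias constant are simplifying assumptions here, not choices taken from the paper.

```python
# A minimal sketch of the MST-based Renyi entropy estimator (after Hero et
# al.), up to an additive bias constant that is omitted. The exact O(n^2)
# MST is used; the paper replaces it with cheaper approximate MSTs.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def renyi_entropy_mst(x, alpha=0.5):
    """Estimate the order-alpha Renyi entropy from samples x of shape (n, d)."""
    n, d = x.shape
    gamma = d * (1.0 - alpha)              # edge-weight exponent
    w = squareform(pdist(x)) ** gamma      # pairwise |xi - xj|^gamma
    L = minimum_spanning_tree(w).sum()     # total weighted MST length
    return np.log(L / n**alpha) / (1.0 - alpha)

rng = np.random.default_rng(0)
print(renyi_entropy_mst(rng.normal(size=(2000, 2))))      # unit spread
print(renyi_entropy_mst(3 * rng.normal(size=(2000, 2))))  # larger spread, larger entropy
```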

13 pages, 6061 KiB  
Article
Parallel Lives: A Local-Realistic Interpretation of “Nonlocal” Boxes
by Gilles Brassard and Paul Raymond-Robichaud
Entropy 2019, 21(1), 87; https://doi.org/10.3390/e21010087 - 18 Jan 2019
Cited by 19 | Viewed by 8395
Abstract
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk the myth that equates local realism with local hidden variables in the simplest possible manner. Along the way, we reinterpret the celebrated 1935 argument of Einstein, Podolsky and Rosen, and come to the conclusion that they were right to question the completeness of the Copenhagen version of quantum theory, provided one believes in a local-realistic universe. Throughout our journey, we strive to explain our views from first principles, without expecting mathematical sophistication or specialized prior knowledge from the reader. Full article
(This article belongs to the Special Issue Quantum Nonlocality)

19 pages, 2236 KiB  
Article
Configurational Entropy in Multicomponent Alloys: Matrix Formulation from Ab Initio Based Hamiltonian and Application to the FCC Cr-Fe-Mn-Ni System
by Antonio Fernández-Caballero, Mark Fedorov, Jan S. Wróbel, Paul M. Mummery and Duc Nguyen-Manh
Entropy 2019, 21(1), 68; https://doi.org/10.3390/e21010068 - 15 Jan 2019
Cited by 24 | Viewed by 9829
Abstract
Configuration entropy is believed to stabilize disordered solid solution phases over intermetallic compounds in multicomponent systems at elevated temperatures by lowering the Gibbs free energy. Traditionally, the increment of configuration entropy with temperature was computed by time-consuming thermodynamic integration methods. In this work, a new formalism based on a hybrid combination of the Cluster Expansion (CE) Hamiltonian and Monte Carlo simulations is developed to predict the configuration entropy as a function of temperature from multi-body cluster probabilities in a multi-component system with arbitrary average composition. The multi-body probabilities are worked out by explicit inversion and direct product of a matrix formulation within orthonormal sets of point functions in the clusters obtained from symmetry-independent correlation functions. The matrix quantities are determined from semi-canonical Monte Carlo simulations with Effective Cluster Interactions (ECIs) derived from Density Functional Theory (DFT) calculations. The formalism is applied to analyze the 4-body cluster probabilities for the quaternary system Cr-Fe-Mn-Ni as a function of temperature and alloy concentration. It is shown that, for two specific compositions (Cr₂₅Fe₂₅Mn₂₅Ni₂₅ and Cr₁₈Fe₂₇Mn₂₇Ni₂₈), the high values of the probabilities for Cr-Fe-Fe-Fe and Mn-Mn-Ni-Ni are strongly correlated with the presence of the ordered phases L1₂-CrFe₃ and L1₀-MnNi, respectively. These results are in excellent agreement with predictions of these ground-state structures by ab initio calculations. The general formalism is used to investigate the configuration entropy as a function of temperature for 285 different alloy compositions. It is found that our matrix formulation of cluster probabilities provides an efficient tool to compute configuration entropy in multi-component alloys compared with the thermodynamic integration method. At high temperatures, it is shown that many-body cluster correlations still play an important role in understanding the configuration entropy before the solid solution limit of high-entropy alloys (HEAs) is reached. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
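
For orientation, the simplest limit of the quantity being computed is the ideal (point-probability) configurational entropy, S = −k_B Σᵢ cᵢ ln cᵢ, which the many-body cluster corrections described above reduce as short-range order develops. A minimal sketch, in units of k_B per atom:

```python
# Ideal (point-probability) configurational entropy, S = -kB sum_i c_i ln c_i,
# in units of kB per atom; the paper's cluster-probability formalism captures
# the many-body corrections to this ideal-mixing limit.
import numpy as np

def ideal_config_entropy(concentrations):
    c = np.asarray(concentrations, dtype=float)
    c = c / c.sum()                  # normalise to mole fractions
    return -np.sum(c * np.log(c))

print(ideal_config_entropy([0.25, 0.25, 0.25, 0.25]))  # equiatomic: ln 4 ~ 1.386
print(ideal_config_entropy([0.18, 0.27, 0.27, 0.28]))  # slightly below ln 4
```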

13 pages, 1460 KiB  
Article
The Effect of Cognitive Resource Competition Due to Dual-Tasking on the Irregularity and Control of Postural Movement Components
by Thomas Haid and Peter Federolf
Entropy 2019, 21(1), 70; https://doi.org/10.3390/e21010070 - 15 Jan 2019
Cited by 13 | Viewed by 4757
Abstract
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to study the control of movement components (PM) and n-shapes have been found in measures of sway irregularity, we hypothesized (H1) that the irregularity of PMs, their respective control, and the control tightness would display the n-shape. Furthermore, according to the minimal intervention principle (H2), different PMs should be affected differently. Finally, (H3) we expected stronger dual-tasking effects in the older population, due to limited cognitive resources. We measured the kinematics of forty-one healthy volunteers (23 aged 26 ± 3; 18 aged 59 ± 4) performing 80 s tandem stances in five conditions (single-task and auditory n-back task; n = 1–4), and computed sample entropies on PM time-series and two novel measures of control tightness. In the PM most critical for stability, the control tightness decreased steadily, and in contrast to H3, decreased further for the younger group. Nevertheless, we found n-shapes in most variables with differing magnitudes, supporting H1 and H2. These results suggest that the control tightness might deteriorate steadily with increased cognitive load in critical movements despite the otherwise evident n-shaped relationship. Full article
(This article belongs to the Section Complexity)

13 pages, 4392 KiB  
Article
Double Entropy Joint Distribution Function and Its Application in Calculation of Design Wave Height
by Guilin Liu, Baiyu Chen, Song Jiang, Hanliang Fu, Liping Wang and Wei Jiang
Entropy 2019, 21(1), 64; https://doi.org/10.3390/e21010064 - 14 Jan 2019
Cited by 34 | Viewed by 3595
Abstract
Wave height and wave period are important oceanic environmental factors that are used to describe the randomness of a wave. Within the field of ocean engineering, the calculation of design wave height is of great significance. In this paper, a periodic maximum entropy distribution function with four undetermined parameters is derived by means of coordinate transformation and solving conditional variational problems. A double entropy joint distribution function of wave height and wave period is also derived. The function is derived from the maximum entropy wave height function and the maximum entropy periodic function, with the help of the structure of the Copula function. The double entropy joint distribution function of wave height and wave period is not limited by weak nonlinearity, nor by the assumptions of a normal stochastic process and a narrow spectrum. Besides, it can fit the observed data more closely and be more widely applicable to nonlinear waves in various cases, owing to the many undetermined parameters it contains. The engineering cases show that the recurrence level derived from the double entropy joint distribution function is higher than that from the extreme value distribution using the single variables of wave height or wave period. It is also higher than that from the traditional joint distribution function of wave height and wave period. Full article
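
To make the construction concrete, the sketch below joins two marginal distributions into a joint CDF through a copula. The Gumbel copula and the Weibull/lognormal marginals are illustrative stand-ins, not the paper's four-parameter maximum entropy distributions:

```python
# A minimal sketch of how a copula joins two marginal CDFs into a joint
# distribution, in the spirit of the double entropy joint function. The
# Gumbel copula and the placeholder marginals below are assumptions.
import numpy as np
from scipy import stats

def gumbel_copula(u, v, theta=1.5):
    """Joint CDF C(u, v) for a Gumbel copula with dependence theta >= 1."""
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0/theta))

def joint_cdf(h, t):
    """P(H <= h, T <= t) for wave height H and wave period T."""
    u = stats.weibull_min.cdf(h, c=2.0, scale=2.5)  # placeholder H marginal
    v = stats.lognorm.cdf(t, s=0.3, scale=7.0)      # placeholder T marginal
    return gumbel_copula(u, v)

print(joint_cdf(3.0, 8.0))  # P(H <= 3 m and T <= 8 s) under the toy model
```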

14 pages, 2837 KiB  
Article
Unfolding the Complexity of the Global Value Chain: Strength and Entropy in the Single-Layer, Multiplex, and Multi-Layer International Trade Networks
by Luiz G. A. Alves, Giuseppe Mangioni, Francisco A. Rodrigues, Pietro Panzarasa and Yamir Moreno
Entropy 2018, 20(12), 909; https://doi.org/10.3390/e20120909 - 28 Nov 2018
Cited by 31 | Viewed by 7047
Abstract
The worldwide trade network has been widely studied through different data sets and network representations with a view to better understanding interactions among countries and products. Here we investigate international trade through the lenses of the single-layer, multiplex, and multi-layer networks. We discuss differences among the three network frameworks in terms of their relative advantages in capturing salient topological features of trade. We draw on the World Input-Output Database to build the three networks. We then uncover sources of heterogeneity in the way strength is allocated among countries and transactions by computing the strength distribution and entropy in each network. Additionally, we trace how entropy evolved, and show how the observed peaks can be associated with the onset of the global economic downturn. Findings suggest how more complex representations of trade, such as the multi-layer network, enable us to disambiguate the distinct roles of intra- and cross-industry transactions in driving the evolution of entropy at a more aggregate level. We discuss our results and the implications of our comparative analysis of networks for research on international trade and other empirical domains across the natural and social sciences. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
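
As a small illustration of the entropy diagnostic used here, the sketch below computes the Shannon entropy of a node's normalised link weights: evenly spread trade maximises it, while concentration on a single partner drives it towards zero. The toy weights are, of course, not World Input-Output data:

```python
# Shannon entropy of how a node's strength is spread over its links -- a
# toy version of the heterogeneity diagnostic; the weights are invented.
import numpy as np

def link_entropy(weights):
    w = np.asarray(weights, dtype=float)
    p = w[w > 0] / w.sum()           # normalised link weights
    return -np.sum(p * np.log(p))

print(link_entropy([1, 1, 1, 1]))    # ln 4: trade evenly spread
print(link_entropy([97, 1, 1, 1]))   # near 0: trade concentrated on one partner
```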

27 pages, 568 KiB  
Article
Entropic Steering Criteria: Applications to Bipartite and Tripartite Systems
by Ana C. S. Costa, Roope Uola and Otfried Gühne
Entropy 2018, 20(10), 763; https://doi.org/10.3390/e20100763 - 05 Oct 2018
Cited by 21 | Viewed by 4220
Abstract
The effect of quantum steering describes a possible action at a distance via local measurements. Whereas many attempts at characterizing steerability have been pursued, answering the question as to whether a given state is steerable or not remains a difficult task. Here, we investigate the applicability of a recently proposed method for building steering criteria from generalized entropic uncertainty relations. This method works for any entropy that satisfies the properties of (i) (pseudo-) additivity for independent distributions; (ii) a state-independent entropic uncertainty relation (EUR); and (iii) joint convexity of a corresponding relative entropy. Our study extends the former analysis to Tsallis and Rényi entropies on bipartite and tripartite systems. As examples, we investigate the steerability of the three-qubit GHZ and W states. Full article
(This article belongs to the Special Issue Quantum Nonlocality)

43 pages, 543 KiB  
Article
Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
by Chao Tian
Entropy 2018, 20(8), 603; https://doi.org/10.3390/e20080603 - 13 Aug 2018
Cited by 41 | Viewed by 4361
Abstract
We illustrate how computer-aided methods can be used to investigate the fundamental limits of caching systems, which are significantly different from the conventional analytical approach usually seen in the information theory literature. The linear programming (LP) outer bound of the entropy space serves as the starting point of this approach; however, our effort goes significantly beyond using it to prove information inequalities. We first identify and formalize the symmetry structure in the problem, which enables us to show the existence of optimal symmetric solutions. A symmetry-reduced linear program is then used to identify the boundary of the memory-transmission-rate tradeoff for several small cases, for which we obtain a set of tight outer bounds. General hypotheses on the optimal tradeoff region are formed from these computed data, which are then analytically proven. This leads to a complete characterization of the optimal tradeoff for systems with only two users, and a certain partial characterization for systems with only two files. Next, we show that by carefully analyzing the joint entropy structure of the outer bounds for certain cases, a novel code construction can be reverse-engineered, which eventually leads to a general class of codes. Finally, we show that outer bounds can be computed through strategically relaxing the LP in different ways, which can be used to explore the problem computationally. This allows us firstly to deduce generic characteristics of the converse proof, and secondly to compute outer bounds for larger problem cases, despite the seemingly impossible computation scale. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)

80 pages, 11447 KiB  
Article
Conditional Gaussian Systems for Multiscale Nonlinear Stochastic Systems: Prediction, State Estimation and Uncertainty Quantification
by Nan Chen and Andrew J. Majda
Entropy 2018, 20(7), 509; https://doi.org/10.3390/e20070509 - 04 Jul 2018
Cited by 39 | Viewed by 5489
Abstract
A conditional Gaussian framework for understanding and predicting complex multiscale nonlinear stochastic systems is developed. Despite the conditional Gaussianity, such systems are nevertheless highly nonlinear and are able to capture the non-Gaussian features of nature. The special structure of the system allows closed analytical formulae for solving the conditional statistics and is thus computationally efficient. A rich gallery of examples of conditional Gaussian systems is illustrated here, which includes data-driven physics-constrained nonlinear stochastic models, stochastically coupled reaction–diffusion models in neuroscience and ecology, and large-scale dynamical models in turbulence, fluids and geophysical flows. Making use of the conditional Gaussian structure, efficient statistically accurate algorithms involving a novel hybrid strategy for different subspaces, a judicious block decomposition and statistical symmetry are developed for solving the Fokker–Planck equation in large dimensions. The conditional Gaussian framework is also applied to develop extremely cheap multiscale data assimilation schemes, such as the stochastic superparameterization, which use particle filters to capture the non-Gaussian statistics on the large-scale part, whose dimension is small, whereas the statistics of the small-scale part are conditionally Gaussian given the large-scale part. Other topics of the conditional Gaussian systems studied here include designing new parameter estimation schemes and understanding model errors. Full article
(This article belongs to the Special Issue Information Theory and Stochastics for Multiscale Nonlinear Systems)

54 pages, 1965 KiB  
Article
The Gibbs Paradox: Early History and Solutions
by Olivier Darrigol
Entropy 2018, 20(6), 443; https://doi.org/10.3390/e20060443 - 06 Jun 2018
Cited by 12 | Viewed by 7683
Abstract
This article is a detailed history of the Gibbs paradox, with philosophical morals. It purports to explain the origins of the paradox, to describe and criticize solutions of the paradox from the early times to the present, to use the history of statistical mechanics as a reservoir of ideas for clarifying foundations and removing prejudices, and to relate the paradox to broad misunderstandings of the nature of physical theory. Full article
(This article belongs to the Special Issue Gibbs Paradox 2018)

24 pages, 377 KiB  
Article
Criterion of Existence of Power-Law Memory for Economic Processes
by Vasily E. Tarasov and Valentina V. Tarasova
Entropy 2018, 20(6), 414; https://doi.org/10.3390/e20060414 - 29 May 2018
Cited by 23 | Viewed by 3524
Abstract
In this paper, we propose criteria for the existence of power-law-type (PLT) memory in economic processes. We give a criterion for the existence of power-law long-range dependence in time by using the analogy with the concept of the long-range alpha-interaction. We also suggest a criterion for the existence of PLT memory in the frequency domain by using the concept of non-integer dimensions. For an economic process in which it is known that an endogenous variable depends on an exogenous variable, the proposed criteria make it possible to identify the presence of PLT memory. The suggested criteria are illustrated in various examples. The use of the proposed criteria allows fractional calculus to be applied to construct dynamic models of economic processes. These criteria can also be used to identify the linear integro-differential operators that can be considered as fractional derivatives and integrals of non-integer orders. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
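
One practical reading of such a criterion can be sketched numerically: a memory (response) function M(t) of power-law type appears as a straight line in log-log coordinates, and its exponent can be read off by regression. The synthetic M(t) below is purely illustrative, not an example from the paper:

```python
# A minimal numerical sketch: detect power-law-type decay of a memory
# function by a straight-line fit in log-log coordinates. M(t) is synthetic.
import numpy as np

t = np.arange(1, 200, dtype=float)
M = t ** -0.7 * (1 + 0.02 * np.random.default_rng(1).normal(size=t.size))

slope, intercept = np.polyfit(np.log(t), np.log(np.abs(M)), 1)
print(f"fitted exponent: {slope:.2f}")  # close to -0.7 => PLT memory
```
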
18 pages, 8649 KiB  
Article
Quantum Trajectories: Real or Surreal?
by Basil J. Hiley and Peter Van Reeth
Entropy 2018, 20(5), 353; https://doi.org/10.3390/e20050353 - 08 May 2018
Cited by 11 | Viewed by 7006
Abstract
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that the Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment which shows clearly the role of the spin within the Bohm formalism and discuss situations where the appearance of the quantum potential is open to direct experimental exploration. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

23 pages, 5735 KiB  
Review
Levitated Nanoparticles for Microscopic Thermodynamics—A Review
by Jan Gieseler and James Millen
Entropy 2018, 20(5), 326; https://doi.org/10.3390/e20050326 - 28 Apr 2018
Cited by 68 | Viewed by 9401
Abstract
Levitated nanoparticles have received much attention for their potential to perform quantum mechanical experiments even at room temperature. However, even in the regime where the particle dynamics are purely classical, there is a lot of interesting physics that can be explored. Here we review the application of levitated nanoparticles as a new experimental platform to explore stochastic thermodynamics in small systems. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)

12 pages, 791 KiB  
Article
Password Security as a Game of Entropies
by Stefan Rass and Sandra König
Entropy 2018, 20(5), 312; https://doi.org/10.3390/e20050312 - 25 Apr 2018
Cited by 15 | Viewed by 5871
Abstract
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, optimizing matters of memorability of the password (measured by Shannon entropy), opposed to the difficulty for player 2 of guessing it (measured by min-entropy), and the cognitive efforts of player 1 tied to changing the password (measured by relative entropy, i.e., Kullback–Leibler divergence). The model and contribution are thus twofold: (i) it applies multi-objective game theory to the password security problem; and (ii) it introduces different concepts of entropy to measure the quality of a password choice process under different angles (and not a given password itself, since this cannot be quality-assessed in terms of entropy). We illustrate our approach with an example from everyday life, namely we analyze the password choices of employees. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
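
The three entropies entering the game are straightforward to compute for a toy password-choice policy, i.e., a distribution over candidate passwords. The two policies below are illustrative, not the paper's employee case study:

```python
# The three entropies of the game, computed for toy choice policies over
# four candidate passwords (the policies themselves are invented).
import numpy as np

def shannon(p):      # memorability proxy: H(p)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def min_entropy(p):  # guessing difficulty: -log2 of the most likely choice
    return -np.log2(p.max())

def kl(p, q):        # effort of switching from policy p to policy q
    m = p > 0
    return np.sum(p[m] * np.log2(p[m] / q[m]))

old = np.array([0.7, 0.1, 0.1, 0.1])      # habitual, easy-to-guess policy
new = np.array([0.25, 0.25, 0.25, 0.25])  # uniform policy

print(shannon(new), min_entropy(new))  # both 2 bits: best for defence
print(kl(old, new))                    # the cognitive cost of changing
```
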
36 pages, 529 KiB  
Article
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
by Conor Finn and Joseph T. Lizier
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297 - 18 Apr 2018
Cited by 53 | Viewed by 7828
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
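
On one reading of this decomposition (stated here as an assumption, loosely following the authors' notation), the pointwise mutual information i(s;t), which can be negative, splits into two non-negative entropic parts: the specificity h(s) = −log p(s) and the ambiguity h(s|t) = −log p(s|t), with i(s;t) = h(s) − h(s|t). A minimal sketch for one realisation of a toy joint distribution:

```python
# Pointwise mutual information split into specificity and ambiguity for one
# realisation (s, t) of a toy joint distribution; the distribution is invented.
import numpy as np

p_st = np.array([[0.4, 0.1],   # p(s, t): rows index s, columns index t
                 [0.1, 0.4]])
p_s = p_st.sum(axis=1)
p_t = p_st.sum(axis=0)

s, t = 0, 1                                 # one realisation
specificity = -np.log2(p_s[s])              # surprise of s by itself
ambiguity = -np.log2(p_st[s, t] / p_t[t])   # remaining surprise in s given t
print(specificity - ambiguity)  # i(s;t); negative here: s misinforms about t
```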

15 pages, 1150 KiB  
Article
Polynomial-Time Algorithm for Learning Optimal BFS-Consistent Dynamic Bayesian Networks
by Margarida Sousa and Alexandra M. Carvalho
Entropy 2018, 20(4), 274; https://doi.org/10.3390/e20040274 - 12 Apr 2018
Cited by 4 | Viewed by 4467
Abstract
Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm for learning optimal DBNs consistent with a breadth-first search (BFS) order, named bcDBN. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach increases the search space of the state-of-the-art tDBN algorithm exponentially in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well throughout different experiments. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

12 pages, 809 KiB  
Article
Distance Entropy Cartography Characterises Centrality in Complex Networks
by Massimo Stella and Manlio De Domenico
Entropy 2018, 20(4), 268; https://doi.org/10.3390/e20040268 - 11 Apr 2018
Cited by 27 | Viewed by 5625
Abstract
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network models. By coupling distance entropy information with closeness centrality, we introduce a network cartography which allows one to reduce the degeneracy of ranking based on closeness alone. We apply this methodology to the empirical multiplex lexical network encoding the linguistic relationships known to English-speaking toddlers. We show that the distance entropy cartography better predicts how children learn words compared to closeness centrality. Our results highlight the importance of distance entropy for gaining insights from distance patterns in complex networks. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
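
A minimal sketch of the measure itself: for each node, take the distribution of shortest-path lengths to all other nodes and compute its Shannon entropy. The networkx library and the path-graph example are assumptions for illustration, not the paper's multiplex lexical network:

```python
# Shannon entropy of a node's shortest-path-length distribution -- a minimal
# sketch of distance entropy on a toy path graph; networkx is assumed.
import numpy as np
import networkx as nx

def distance_entropy(G, node):
    lengths = nx.single_source_shortest_path_length(G, node)
    d = np.array([l for n, l in lengths.items() if n != node])
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

G = nx.path_graph(9)
print(distance_entropy(G, 0))  # chain end: all distances distinct, high entropy
print(distance_entropy(G, 4))  # chain centre: distances repeat, lower entropy
```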

23 pages, 1755 KiB  
Article
Transductive Feature Selection Using Clustering-Based Sample Entropy for Temperature Prediction in Weather Forecasting
by Zahra Karevan and Johan A. K. Suykens
Entropy 2018, 20(4), 264; https://doi.org/10.3390/e20040264 - 10 Apr 2018
Cited by 12 | Viewed by 4464
Abstract
Entropy measures have long been of major interest to researchers as a way to measure the information content of a dynamical system. One of the well-known methodologies is sample entropy, which is a model-free approach and can be deployed to measure the information transfer in time series. Sample entropy is based on conditional entropy, where a major concern is the number of past delays in the conditional term. In this study, we deploy a lag-specific conditional entropy to identify the informative past values. Moreover, considering the seasonality structure of the data, we propose a clustering-based sample entropy to exploit the temporal information. Clustering-based sample entropy is based on the sample entropy definition while considering the clustering information of the training data and the membership of the test point to the clusters. In this study, we utilize the proposed method for transductive feature selection in black-box weather forecasting and conduct the experiments on minimum and maximum temperature prediction in Brussels for 1–6 days ahead. The results reveal that considering the local structure of the data can improve the feature selection performance. In addition, despite the large reduction in the number of features, the performance is competitive with the case of using all features. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

14 pages, 317 KiB  
Article
Leggett-Garg Inequalities for Quantum Fluctuating Work
by Harry J. D. Miller and Janet Anders
Entropy 2018, 20(3), 200; https://doi.org/10.3390/e20030200 - 16 Mar 2018
Cited by 10 | Viewed by 4948
Abstract
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a quantum system unitarily driven in time. It is shown that these inequalities can be violated in a driven two-level system, thereby demonstrating that there exists no general macrorealistic description of quantum work. These violations are shown to emerge within the standard Two-Projective-Measurement scheme as well as for alternative definitions of fluctuating work that are based on weak measurement. Our results elucidate the influences of temporal correlations on work extraction in the quantum regime and highlight a key difference between quantum and classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)

22 pages, 708 KiB  
Article
Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
by Jun Kitazono, Ryota Kanai and Masafumi Oizumi
Entropy 2018, 20(3), 173; https://doi.org/10.3390/e20030173 - 06 Mar 2018
Cited by 25 | Viewed by 8522
Abstract
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
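
The search problem itself is easy to state in code. The brute-force sketch below enumerates all bipartitions of a small system and picks the cut that loses the least information; Φ is stood in for by a Gaussian multi-information loss, which is an assumption here, and it is exactly this exponential enumeration that the submodularity-based algorithm avoids:

```python
# Exhaustive MIP search over bipartitions of a small system. The "Phi" used
# here is a Gaussian information-loss stand-in, not the IIT measures.
import numpy as np
from itertools import combinations

def gaussian_info_loss(cov, part):
    """Information lost by cutting cov into `part` and its complement."""
    idx = range(cov.shape[0])
    a = list(part); b = [i for i in idx if i not in part]
    logdet = lambda s: np.linalg.slogdet(s)[1]
    return logdet(cov[np.ix_(a, a)]) + logdet(cov[np.ix_(b, b)]) - logdet(cov)

def find_mip(cov):
    n = cov.shape[0]
    cuts = [c for r in range(1, n // 2 + 1) for c in combinations(range(n), r)]
    return min(cuts, key=lambda p: gaussian_info_loss(cov, p))  # ~2^(n-1) cuts

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 4)); x[:, 1] += x[:, 0]; x[:, 3] += x[:, 2]
print(find_mip(np.cov(x.T)))  # separates the two coupled pairs, e.g. (0, 1)
```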

26 pages, 2338 KiB  
Article
A Variational Formulation of Nonequilibrium Thermodynamics for Discrete Open Systems with Mass and Heat Transfer
by François Gay-Balmaz and Hiroaki Yoshimura
Entropy 2018, 20(3), 163; https://doi.org/10.3390/e20030163 - 04 Mar 2018
Cited by 25 | Viewed by 5364
Abstract
We propose a variational formulation for the nonequilibrium thermodynamics of discrete open systems, i.e., discrete systems which can exchange mass and heat with the exterior. Our approach is based on a general variational formulation for systems with time-dependent nonlinear nonholonomic constraints and time-dependent Lagrangian. For discrete open systems, the time-dependent nonlinear constraint is associated with the rate of internal entropy production of the system. We show that this constraint on the solution curve systematically yields a constraint on the variations to be used in the action functional. The proposed variational formulation is intrinsic and provides the same structure for a wide class of discrete open systems. We illustrate our theory by presenting examples of open systems experiencing mechanical interactions, as well as internal diffusion, internal heat transfer, and their cross-effects. Our approach yields a systematic way to derive the complete evolution equations for the open systems, including the expression of the internal entropy production of the system, independently of its complexity. It might be especially useful for the study of the nonequilibrium thermodynamics of biophysical systems. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

28 pages, 1430 KiB  
Article
Minimising the Kullback–Leibler Divergence for Model Selection in Distributed Nonlinear Systems
by Oliver M. Cliff, Mikhail Prokopenko and Robert Fitch
Entropy 2018, 20(2), 51; https://doi.org/10.3390/e20020051 - 23 Jan 2018
Cited by 22 | Viewed by 7807
Abstract
The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)

15 pages, 464 KiB  
Article
Low Computational Cost for Sample Entropy
by George Manis, Md Aktaruzzaman and Roberto Sassi
Entropy 2018, 20(1), 61; https://doi.org/10.3390/e20010061 - 13 Jan 2018
Cited by 47 | Viewed by 6125
Abstract
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used in long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the two last suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogous large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy. Full article
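
The early-rejection idea is simple to show in code: a template comparison under the maximum norm can be abandoned at the first coordinate whose distance already exceeds r. The sketch below is the straightforward O(n²) algorithm with that early abandon only; the paper's tree- and bucket-based algorithms go further and avoid starting most comparisons at all:

```python
# Straightforward Sample Entropy under the maximum norm, with early
# abandoning of failed template comparisons. Illustration only; the paper's
# algorithms organise points so most comparisons are never started.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r *= x.std()                 # r as a fraction of the signal's std
    n = len(x)
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            for k in range(m):                    # m-length template match
                if abs(x[i + k] - x[j + k]) > r:
                    break                         # early abandon
            else:
                B += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    A += 1                        # (m+1)-length match too
    return -np.log(A / B) if A and B else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=300)))      # white noise: high
print(sample_entropy(np.sin(np.arange(300)/5)))  # regular signal: low
```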

24 pages, 4193 KiB  
Article
Transfer Entropy as a Tool for Hydrodynamic Model Validation
by Alicia Sendrowski, Kazi Sadid, Ehab Meselhe, Wayne Wagner, David Mohrig and Paola Passalacqua
Entropy 2018, 20(1), 58; https://doi.org/10.3390/e20010058 - 12 Jan 2018
Cited by 17 | Viewed by 5554
Abstract
The validation of numerical models is an important component of modeling to ensure reliability of model outputs under prescribed conditions. In river deltas, robust validation of models is paramount given that models are used to forecast land change and to track water, solid, and solute transport through the deltaic network. We propose using transfer entropy (TE) to validate model results. TE quantifies the information transferred between variables in terms of strength, timescale, and direction. Using water level data collected in the distributary channels and inter-channel islands of Wax Lake Delta, Louisiana, USA, along with modeled water level data generated for the same locations using Delft3D, we assess how well couplings between external drivers (river discharge, tides, wind) and modeled water levels reproduce the observed data couplings. We perform this operation through time using ten-day windows. Modeled and observed couplings compare well; their differences reflect the spatial parameterization of wind and roughness in the model, which prevents the model from capturing high frequency fluctuations of water level. The model captures couplings better in channels than on islands, suggesting that mechanisms of channel-island connectivity are not fully represented in the model. Overall, TE serves as an additional validation tool to quantify the couplings of the system of interest at multiple spatial and temporal scales. Full article
(This article belongs to the Special Issue Transfer Entropy II)
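
For readers unfamiliar with the measure, the sketch below estimates a lag-1 transfer entropy between two coarsely binned series, TE(X→Y) = I(Y_t ; X_{t−1} | Y_{t−1}); the binning, the lag of one step, and the synthetic driver-response pair are simplifying assumptions, not the Delft3D setup:

```python
# A minimal binned estimate of lag-1 transfer entropy between two series;
# the synthetic data stand in for driver/water-level records.
import numpy as np

def transfer_entropy(x, y, bins=4):
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    trip = np.stack([yd[1:], xd[:-1], yd[:-1]])   # (Y_t, X_{t-1}, Y_{t-1})
    def H(rows):                                  # joint entropy of rows
        _, c = np.unique(trip[rows], axis=1, return_counts=True)
        p = c / c.sum()
        return -np.sum(p * np.log(p))
    # TE = H(Y_t,Y_{t-1}) + H(X_{t-1},Y_{t-1}) - H(all three) - H(Y_{t-1})
    return H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)  # y follows x by one step
print(transfer_entropy(x, y))  # clearly positive: x drives y
print(transfer_entropy(y, x))  # near zero: no feedback
```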

16 pages, 5464 KiB  
Article
Information Entropy Suggests Stronger Nonlinear Associations between Hydro-Meteorological Variables and ENSO
by Tue M. Vu, Ashok K. Mishra and Goutam Konapala
Entropy 2018, 20(1), 38; https://doi.org/10.3390/e20010038 - 09 Jan 2018
Cited by 16 | Viewed by 6058
Abstract
Understanding the teleconnections between hydro-meteorological data and the El Niño–Southern Oscillation cycle (ENSO) is an important step towards developing flood early warning systems. In this study, the concept of mutual information (MI) was applied using marginal and joint information entropy to quantify the linear and non-linear relationship between annual streamflow, extreme precipitation indices over the Mekong river basin, and ENSO. We primarily used Pearson correlation as a linear association metric for comparison with mutual information. The analysis was performed at four hydro-meteorological stations located on the mainstream of the Mekong river basin. It was observed that the nonlinear correlation information between the large-scale climate index and local hydro-meteorology data is comparatively higher than the traditional linear correlation information. The spatial analysis was carried out using all the grid points in the river basin, which suggests a spatial dependence structure between precipitation extremes and ENSO. Overall, this study suggests that the mutual information approach can detect more meaningful connections between large-scale climate indices and hydro-meteorological variables at different spatio-temporal scales. Application of the nonlinear mutual information metric can be an efficient tool to better understand the dynamics of hydro-climatic variables, resulting in improved climate-informed adaptation strategies. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
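
The contrast drawn here between the two metrics is easy to reproduce on synthetic data: a purely quadratic dependence is invisible to Pearson correlation but not to a binned mutual information estimate. The toy ENSO-streamflow relationship below is an assumption for illustration:

```python
# Pearson correlation vs binned mutual information on a purely nonlinear
# (quadratic) dependence; the toy relationship is invented.
import numpy as np

def mutual_information(x, y, bins=10):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(0)
enso = rng.normal(size=5000)
flow = enso**2 + 0.3 * rng.normal(size=5000)  # nonlinear teleconnection

print(np.corrcoef(enso, flow)[0, 1])  # ~0: the linear metric sees nothing
print(mutual_information(enso, flow)) # clearly positive
```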

20 pages, 2652 KiB  
Article
Searching for Chaos Evidence in Eye Movement Signals
by Katarzyna Harezlak and Pawel Kasprowski
Entropy 2018, 20(1), 32; https://doi.org/10.3390/e20010032 - 07 Jan 2018
Cited by 24 | Viewed by 4913
Abstract
Most naturally-occurring physical phenomena are examples of nonlinear dynamic systems, the functioning of which attracts many researchers seeking to unveil their nature. The research presented in this paper is aimed at exploring eye movement dynamic features in terms of the existence of chaotic nature. Nonlinear time series analysis methods were used for this purpose. Two time series features were studied, fractal dimension and entropy, by utilising the embedding theory. The methods were applied to the data collected during an experiment with a “jumping point” stimulus. Eye movements were registered by means of the Jazz-novo eye tracker. One thousand three hundred and ninety-two (1392) time series were defined, based on the horizontal velocity of eye movements registered during imposed, prolonged fixations. In order to conduct a detailed analysis of the signal and identify differences contributing to the observed patterns of behaviour on the time scale, fractal dimension and entropy were evaluated in various time series intervals. The influence of the noise contained in the data and the impact of the utilized filter on the obtained results were also studied. A low-pass filter with a 50 Hz cut-off frequency, estimated by means of the Fourier transform, was used for noise reduction, and all of the methods concerned were applied to the time series both before and after noise reduction. These studies provided some premises which allow eye movements to be perceived as chaotic data: the characteristic shape of the space-time separation plot, a low and non-integer time series dimension, and time series entropy characteristic for chaotic systems. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)

8 pages, 245 KiB  
Article
Exact Renormalization Groups As a Form of Entropic Dynamics
by Pedro Pessoa and Ariel Caticha
Entropy 2018, 20(1), 25; https://doi.org/10.3390/e20010025 - 04 Jan 2018
Cited by 13 | Viewed by 4641
Abstract
The Renormalization Group (RG) is a set of methods that have been instrumental in tackling problems involving an infinite number of degrees of freedom, such as, for example, in quantum field theory and critical phenomena. What all these methods have in common—which is what explains their success—is that they allow a systematic search for those degrees of freedom that happen to be relevant to the phenomena in question. In the standard approaches the RG transformations are implemented by either coarse graining or through a change of variables. When these transformations are infinitesimal, the formalism can be described as a continuous dynamical flow in a fictitious time parameter. It is generally the case that these exact RG equations are functional diffusion equations. In this paper we show that the exact RG equations can be derived using entropic methods. The RG flow is then described as a form of entropic dynamics of field configurations. Although equivalent to other versions of the RG, in this approach the RG transformations receive a purely inferential interpretation that establishes a clear link to information theory. Full article
29 pages, 1005 KiB  
Review
Information Theoretic Approaches for Motor-Imagery BCI Systems: Review and Experimental Comparison
by Rubén Martín-Clemente, Javier Olias, Deepa Beeta Thiyam, Andrzej Cichocki and Sergio Cruces
Entropy 2018, 20(1), 7; https://doi.org/10.3390/e20010007 - 02 Jan 2018
Cited by 25 | Viewed by 6427
Abstract
Brain computer interfaces (BCIs) have been attracting great interest in recent years. The common spatial patterns (CSP) technique is a well-established approach to the spatial filtering of electroencephalogram (EEG) data in BCI applications. Even though CSP was originally proposed from a heuristic viewpoint, it can also be built on very strong foundations using information theory. This paper reviews the relationship between CSP and several information-theoretic approaches, including the Kullback–Leibler divergence, the Beta divergence and the Alpha-Beta log-det (AB-LD) divergence. We also review other approaches based on the idea of selecting those features that are maximally informative about the class labels. The performance of all the methods is also compared via experiments. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
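
CSP itself reduces to a generalised eigenvalue problem on the two classes' average covariance matrices, which the sketch below solves for synthetic epochs; the data shapes are assumptions, and real pipelines add band-pass filtering and regularisation:

```python
# CSP as a generalised eigenvalue problem: filters w maximising the variance
# ratio w'C1w / w'(C1+C2)w between the two classes. The epochs are synthetic.
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs1, epochs2):
    """epochs*: arrays of shape (trials, channels, samples)."""
    C1 = np.mean([np.cov(e) for e in epochs1], axis=0)
    C2 = np.mean([np.cov(e) for e in epochs2], axis=0)
    evals, evecs = eigh(C1, C1 + C2)   # generalised eigendecomposition
    return evecs[:, ::-1]              # columns sorted: best for class 1 first

rng = np.random.default_rng(0)
e1 = rng.normal(size=(30, 8, 256)); e1[:, 0] *= 3.0  # class 1: channel 0 strong
e2 = rng.normal(size=(30, 8, 256)); e2[:, 3] *= 3.0  # class 2: channel 3 strong
W = csp_filters(e1, e2)
print(np.abs(W[:, 0]).argmax())  # first filter points at channel 0
```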

709 KiB  
Article
Multiscale Information Theory and the Marginal Utility of Information
by Benjamin Allen, Blake C. Stacey and Yaneer Bar-Yam
Entropy 2017, 19(6), 273; https://doi.org/10.3390/e19060273 - 13 Jun 2017
Cited by 25 | Viewed by 12169
Abstract
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior among system components results in overlapping or shared information. A system’s structure is revealed in the sharing of information across the system’s dependencies, each of which has an associated scale. Counting information according to its scale yields the quantity of scale-weighted information, which is conserved when a system is reorganized. In the interest of flexibility we allow information to be quantified using any function that satisfies two basic axioms. Shannon information and vector space dimension are examples. We discuss two quantitative indices that summarize system structure: an existing index, the complexity profile, and a new index, the marginal utility of information. Using simple examples, we show how these indices capture the multiscale structure of complex systems in a quantitative way. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
