Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Article
On Using the BMCSL Equation of State to Renormalize the Onsager Theory Approach to Modeling Hard Prolate Spheroidal Liquid Crystal Mixtures
Entropy 2021, 23(7), 846; https://doi.org/10.3390/e23070846 - 30 Jun 2021
Cited by 2 | Viewed by 1635
Abstract
Modifications to the traditional Onsager theory for modeling isotropic–nematic phase transitions in hard prolate spheroidal systems are presented. Pure component systems are used to identify the need to update the Lee–Parsons resummation term. The Lee–Parsons resummation term uses the Carnahan–Starling equation of state to approximate higher-order virial coefficients beyond the second virial coefficient employed in Onsager’s original theoretical approach. As more exact ways of calculating the excluded volume of two hard prolate spheroids of a given orientation are used, the division of the excluded volume by eight, an empirical correction used in the original Lee–Parsons resummation term, must be replaced by a division by six to yield a better match between theoretical and simulation results. These modifications are also extended to binary mixtures of hard prolate spheroids using the Boublík–Mansoori–Carnahan–Starling–Leland (BMCSL) equation of state.
(This article belongs to the Special Issue Entropic Control of Soft Materials)
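For reference, the Carnahan–Starling equation of state used in the Lee–Parsons resummation has the standard hard-sphere closed form

$$ Z_{\mathrm{CS}} = \frac{PV}{Nk_{B}T} = \frac{1+\eta+\eta^{2}-\eta^{3}}{(1-\eta)^{3}}, $$

where $\eta$ is the packing fraction; the BMCSL equation extends this expression to hard-sphere mixtures.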

Article
Ground-State Properties and Phase Separation of Binary Mixtures in Mesoscopic Ring Lattices
Entropy 2021, 23(7), 821; https://doi.org/10.3390/e23070821 - 28 Jun 2021
Viewed by 1970
Abstract
We investigated the spatial phase separation of the two components forming a bosonic mixture distributed in a four-well lattice with a ring geometry. We studied the ground state of this system, described by means of a binary Bose–Hubbard Hamiltonian, by implementing a well-known coherent-state picture which allowed us to find the semi-classical equations determining the distribution of boson components in the ring lattice. Their fully analytic solutions, in the limit of large boson numbers, provide the boson populations at each well as a function of the interspecies interaction and of other significant model parameters, while allowing us to reconstruct the non-trivial architecture of the ground-state four-well phase diagram. The comparison with the L-well (L=2,3) phase diagrams highlights how increasing the number of wells considerably modifies the phase diagram structure and the transition mechanism from the full-mixing to the full-demixing phase controlled by the interspecies interaction. Despite the fact that the phase diagrams for L=2,3,4 share various general properties, we show that, unlike attractive binary mixtures, repulsive mixtures do not feature a transition mechanism which can be extended to an arbitrary lattice of size L.
(This article belongs to the Special Issue The Ubiquity of Entropy II)
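For orientation, a binary Bose–Hubbard Hamiltonian on an $L$-site ring is commonly written in the generic form (standard textbook notation; the article's parametrization may differ in detail)

$$ \hat{H} = \sum_{c=a,b}\sum_{i=1}^{L}\left[-J_{c}\left(\hat{c}_{i}^{\dagger}\hat{c}_{i+1}+\mathrm{h.c.}\right)+\frac{U_{c}}{2}\,\hat{n}_{i}^{(c)}\left(\hat{n}_{i}^{(c)}-1\right)\right]+W\sum_{i=1}^{L}\hat{n}_{i}^{(a)}\hat{n}_{i}^{(b)}, $$

with periodic boundary conditions $\hat{c}_{L+1}\equiv\hat{c}_{1}$, hopping amplitudes $J_{c}$, intra-species interactions $U_{c}$, and interspecies interaction $W$; here $L=4$.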

Article
Data-Driven Analysis of Nonlinear Heterogeneous Reactions through Sparse Modeling and Bayesian Statistical Approaches
Entropy 2021, 23(7), 824; https://doi.org/10.3390/e23070824 - 28 Jun 2021
Cited by 2 | Viewed by 2335
Abstract
Heterogeneous reactions are chemical reactions that occur at the interfaces of multiple phases, and they often show nonlinear dynamical behavior due to the effect of the time-variant surface area with complex reaction mechanisms. It is important to specify the kinetics of heterogeneous reactions in order to elucidate the microscopic elementary processes and predict the macroscopic future evolution of the system. In this study, we propose a data-driven method based on a sparse modeling algorithm and a sequential Monte Carlo algorithm for simultaneously extracting the substantial reaction terms and surface models from a number of candidates by using partial observation data. We introduce a sparse modeling approach with non-uniform sparsity levels in order to accurately estimate rate constants, and the sequential Monte Carlo algorithm is employed to estimate the time courses of multi-dimensional hidden variables. The results show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the necessary surface models and reaction terms underlying the observable data, were successfully estimated from only the observable temporal changes in the concentration of the dissolved intermediate products.
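The authors' method combines non-uniform sparsity levels with sequential Monte Carlo; as a minimal illustration of the sparse-selection idea only, here is a generic sequentially thresholded least-squares sketch (the function name and toy data are ours, not from the paper):

```python
import numpy as np

def threshold_lstsq(library, dydt, lam=0.1, n_iter=10):
    """Generic sparse regression: repeatedly fit by least squares and
    zero out candidate reaction terms with small coefficients."""
    xi = np.linalg.lstsq(library, dydt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < lam          # candidates considered inactive
        xi[small] = 0.0
        active = ~small
        if active.any():
            xi[active] = np.linalg.lstsq(library[:, active], dydt, rcond=None)[0]
    return xi

# toy data: a rate law built from two of four candidate terms, plus noise
rng = np.random.default_rng(0)
c = rng.uniform(0.1, 1.0, 200)                       # observed concentration
library = np.column_stack([np.ones_like(c), c, c**2, c**3])
dcdt = 0.8 * c - 0.3 * c**2 + 0.01 * rng.normal(size=c.size)
print(threshold_lstsq(library, dcdt))                # ~[0, 0.8, -0.3, 0]
```

This sketch uses a single threshold for every term; the paper's point is precisely that heterogeneous-reaction libraries benefit from non-uniform sparsity levels across terms.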

Article
A Semi-Deterministic Random Walk with Resetting
Entropy 2021, 23(7), 825; https://doi.org/10.3390/e23070825 - 28 Jun 2021
Cited by 4 | Viewed by 1527
Abstract
We consider a discrete-time random walk $(x_t)$ which, at random times, is reset to the starting position and performs a deterministic motion between resets. We show that the quantity $\Pr(x_{t+1}=n+1 \mid x_t=n)$, as a function of $n$, determines whether the system is averse, neutral, or inclined towards resetting. It also classifies the stationary distribution. Double barrier probabilities, first passage times, and the distribution of the escape time from intervals are determined.
(This article belongs to the Special Issue New Trends in Random Walks)
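A minimal simulation of this flavour of model, assuming for illustration a constant reset probability r (in the paper the transition probability may depend on the current position n):

```python
import numpy as np

def reset_walk(n_steps, r, rng):
    """Deterministic motion x -> x + 1, interrupted by resets to the
    starting position 0 with probability r at each step, so that
    Pr(x_{t+1} = n + 1 | x_t = n) = 1 - r for every level n."""
    x = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        x[t] = 0 if rng.random() < r else x[t - 1] + 1
    return x

rng = np.random.default_rng(1)
traj = reset_walk(100_000, r=0.05, rng=rng)
occupation = np.bincount(traj) / traj.size
print(occupation[:5])   # stationary occupation decays geometrically in n
```

For this constant-rate special case the stationary distribution is geometric, $P(n) = r(1-r)^{n}$; the article's classification concerns what happens when the reset propensity varies with $n$.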

Article
Josephson Currents and Gap Enhancement in Graph Arrays of Superconductive Islands
Entropy 2021, 23(7), 811; https://doi.org/10.3390/e23070811 - 25 Jun 2021
Cited by 4 | Viewed by 1274
Abstract
Evidence is reported that topological effects in graph-shaped arrays of superconducting islands can condition the superconducting energy gap and transition temperature. The carriers giving rise to the new phase are pairs of electrons (Cooper pairs) which, in the superconducting state, behave as predicted for bosons in our structures. The presented results have been obtained on both star-shaped and double-comb-shaped arrays, and the coupling between the islands is provided by Josephson junctions whose potential can be tuned by an external magnetic field or by temperature. Our technique for probing the boson distribution on the islands relies on the fact that hopping of bosons between the different islands occurs because their thermal energy is of the same order as the Josephson coupling energy between the islands. For both the star and double-comb graph topologies, the results are in qualitative and quantitative agreement with theoretical predictions.
(This article belongs to the Special Issue Thermodynamics and Superconducting Devices)

Article
The Carnot Cycle, Reversibility and Entropy
Entropy 2021, 23(7), 810; https://doi.org/10.3390/e23070810 - 25 Jun 2021
Cited by 2 | Viewed by 2387
Abstract
The Carnot cycle and the attendant notions of reversibility and entropy are examined. It is shown how the modern view of these concepts still corresponds to the ideas Clausius laid down in the nineteenth century. As such, they reflect the outmoded idea, current at the time, that heat is motion. It is shown how this view of heat led Clausius to develop the entropy of a body based on the work that could be performed in a reversible process rather than the work that is actually performed in an irreversible process. In consequence, Clausius built into entropy a conflict with energy conservation, which is concerned with actual changes in energy. In this paper, reversibility and irreversibility are investigated by means of a macroscopic formulation of internal mechanisms of damping based on rate equations for the distribution of energy within a gas. It is shown that work processes involving a step change in external pressure, however small, are intrinsically irreversible. However, under idealised conditions of zero damping, the gas inside a piston expands and traces out a trajectory through the space of equilibrium states. Therefore, the entropy change due to heat flow from the reservoir matches the entropy change of the equilibrium states. This trajectory can be traced out in reverse as the piston reverses direction, and if the external conditions are adjusted appropriately, the gas can be made to trace out a Carnot cycle in P-V space. The cycle is dynamic, as opposed to quasi-static, as the piston has kinetic energy equal to the difference between the work performed internally and externally.
(This article belongs to the Special Issue The Foundations of Thermodynamics)

Article
Accuracy-Risk Trade-Off Due to Social Learning in Crowd-Sourced Financial Predictions
Entropy 2021, 23(7), 801; https://doi.org/10.3390/e23070801 - 24 Jun 2021
Cited by 1 | Viewed by 3968
Abstract
A critical question raised by the increasing importance of crowd-sourced finance is how to optimize collective information processing and decision-making. Here, we investigate an often under-studied aspect of the performance of online traders: beyond focusing on accuracy alone, what gives rise to the trade-off between risk and accuracy at the collective level? Answers to this question will help in designing and deploying more effective crowd-sourced financial platforms and in minimizing issues stemming from risk, such as implied volatility. To investigate this trade-off, we conducted a large online Wisdom of the Crowd study in which 2037 participants predicted the prices of real financial assets (S&P 500, WTI Oil, and Gold). Using the data collected, we modeled the belief update process of participants using models inspired by Bayesian models of cognition. We show that subsets of predictions chosen based on their belief update strategies lie on a Pareto frontier between accuracy and risk, mediated by social learning. We also observe that social learning led to superior accuracy during one of our rounds, which occurred during the high market uncertainty of the Brexit vote.
(This article belongs to the Special Issue Swarms and Network Intelligence)

Article
Psychomotor Predictive Processing
Entropy 2021, 23(7), 806; https://doi.org/10.3390/e23070806 - 24 Jun 2021
Cited by 3 | Viewed by 2319
Abstract
Psychomotor experience can be based on what people predict they will experience, rather than on sensory inputs. It has been argued that disconnects between human experience and sensory inputs can be addressed better through further development of predictive processing theory. In this paper, the scope of predictive processing theory is extended through three developments. First, by going beyond previous studies that have encompassed embodied cognition but have not addressed some fundamental aspects of psychomotor functioning. Second, by proposing a scientific basis for explaining predictive processing that spans objective neuroscience and subjective experience. Third, by providing an explanation of predictive processing that can be incorporated into the planning and operation of systems involving robots and other new technologies. This is necessary because such systems are becoming increasingly common and move us farther away from the hunter-gatherer lifestyles within which our psychomotor functioning evolved. For example, beliefs that workplace robots are threatening can generate anxiety, while wearing hardware, such as augmented reality headsets and exoskeletons, can impede the natural functioning of psychomotor systems. The primary contribution of the paper is the introduction of a new formulation of hierarchical predictive processing that is focused on psychomotor functioning.
(This article belongs to the Section Entropy and Biology)

Article
Field Theoretical Approach for Signal Detection in Nearly Continuous Positive Spectra II: Tensorial Data
Entropy 2021, 23(7), 795; https://doi.org/10.3390/e23070795 - 23 Jun 2021
Cited by 7 | Viewed by 1405
Abstract
Tensorial principal component analysis is a generalization of ordinary principal component analysis, focusing on data which are suitably described by tensors rather than matrices. This paper aims at giving a nonperturbative renormalization group formalism, based on a slight generalization of the covariance matrix, to investigate signal detection for the difficult issue of nearly continuous spectra. The renormalization group allows one to construct an effective description that keeps only the relevant features in the low "energy" (i.e., large eigenvalues) limit, thus providing universal descriptions that associate the presence of the signal with objective, computable quantities. Among these, in this paper, we focus on the vacuum expectation value. We exhibit experimental evidence in favor of a connection between symmetry breaking and the existence of an intrinsic detection threshold, in agreement with our conclusions for matrices, providing a new step in the direction of a universal statement.
(This article belongs to the Section Signal and Data Analysis)

Article
The Impact of Linear Filter Preprocessing in the Interpretation of Permutation Entropy
Entropy 2021, 23(7), 787; https://doi.org/10.3390/e23070787 - 22 Jun 2021
Cited by 5 | Viewed by 1342
Abstract
Permutation Entropy (PE) is a powerful tool for measuring the amount of information contained within a time series. However, this technique is rarely applied directly to raw signals. Instead, a preprocessing step, such as linear filtering, is applied in order to remove noise or to isolate specific frequency bands. In the current work, we aimed at outlining the effect of linear filter preprocessing on the final PE values. By means of the Wiener–Khinchin theorem, we theoretically characterized the linear filter's intrinsic PE and separated its contribution from the signal's ordinal information. We tested these results by means of simulated signals subjected to a variety of linear filters, such as the moving average, Butterworth, and Chebyshev type I filters. The PE results from simulations closely resembled our predicted results for all tested filters, which validated our theoretical propositions. More importantly, when we applied linear filters to signals with inner correlations, we were able to theoretically decouple the signal-specific contribution from that induced by the linear filter. Therefore, by providing a proper framework for characterizing the PE of linear filters, we improve the interpretation of PE by identifying possible artifact information introduced by the preprocessing steps.
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications II)
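A compact sketch of the quantity under discussion: permutation entropy computed from ordinal patterns, evaluated on white noise before and after a simple moving-average filter (the function name is ours, for illustration):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=3, tau=1):
    """Normalised Bandt-Pompe permutation entropy of order m and lag tau."""
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / log(factorial(m)))

rng = np.random.default_rng(0)
noise = rng.normal(size=20_000)
filtered = np.convolve(noise, np.ones(5) / 5, mode="valid")
print(permutation_entropy(noise))     # ~1.0: white noise, patterns equiprobable
print(permutation_entropy(filtered))  # lower: ordinal structure added by filter
```

The drop in PE for the filtered series is exactly the kind of filter-induced contribution the article teaches one to separate from the signal's own ordinal information.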

Article
Intelligent Online Monitoring of Rolling Bearing: Diagnosis and Prognosis
Entropy 2021, 23(7), 791; https://doi.org/10.3390/e23070791 - 22 Jun 2021
Cited by 6 | Viewed by 1794
Abstract
This paper suggests a new method to predict the Remaining Useful Life (RUL) of rolling bearings based on Long Short-Term Memory (LSTM) networks, in order to obtain the degradation condition of the rolling bearings and realize predictive maintenance. The approach is divided into three parts. The first part is clustering, which detects the damage state using density-based spatial clustering of applications with noise (DBSCAN). The second is the construction of a health indicator that better reflects the bearing degradation tendency and is selected as the input for the prediction model. In the third part, the RUL prediction, the LSTM approach is employed to improve the accuracy of the prediction. The rationale of this work is to combine the two methods, DBSCAN and LSTM, to identify the abnormal state in rolling bearings and then estimate the RUL. The suggested method is validated on experimental bearing life-cycle data, and the RUL predictions of the LSTM model are compared with those of the nonlinear autoregressive model with exogenous inputs (NARX). In addition, the constructed health indicator is compared with the spectral kurtosis feature. The results demonstrate that the suggested method is more appropriate than the NARX model for the prediction of bearing RUL.

Article
Global Sensitivity Analysis Based on Entropy: From Differential Entropy to Alternative Measures
Entropy 2021, 23(6), 778; https://doi.org/10.3390/e23060778 - 19 Jun 2021
Cited by 7 | Viewed by 5062
Abstract
Differential entropy can be negative, while discrete entropy is always non-negative. This article shows that negative entropy is a significant flaw when entropy is used as a sensitivity measure in global sensitivity analysis. A global sensitivity analysis based on entropy must not operate with negative entropy, just as Sobol sensitivity analysis does not operate with negative variance. Entropy is similar to variance but does not have the same properties. An alternative sensitivity measure, based on approximating the differential entropy using dome-shaped functionals with non-negative values, is proposed in the article. Case studies show that the new sensitivity measures lead to a rational structure of sensitivity indices, with a significantly lower proportion of higher-order sensitivity indices compared to other types of distributional sensitivity analysis. In terms of the concept of sensitivity analysis, a decrease in variance to zero means a transition from differential to discrete entropy. The form of this transition is an open question, which can be studied using other scientific disciplines. The search for new functionals for distributional sensitivity analysis is not closed, and other suitable sensitivity measures may be found.
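A one-line illustration of the flaw being addressed: for a uniform random variable on $[a,b]$,

$$ h(X) = -\int_{a}^{b}\frac{1}{b-a}\ln\frac{1}{b-a}\,dx = \ln(b-a), $$

which is negative whenever $b-a<1$, so a sensitivity index built directly on differential entropy can change sign under a mere rescaling of the input variable.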

Article
Robust Universal Inference
Entropy 2021, 23(6), 773; https://doi.org/10.3390/e23060773 - 18 Jun 2021
Cited by 2 | Viewed by 1850
Abstract
Learning and making inferences from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples are available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the training set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of "reasonable" models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives.
(This article belongs to the Special Issue Applications of Information Theory in Statistics)

Article
Computing Accurate Probabilistic Estimates of One-D Entropy from Equiprobable Random Samples
Entropy 2021, 23(6), 740; https://doi.org/10.3390/e23060740 - 11 Jun 2021
Cited by 3 | Viewed by 2311
Abstract
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. Whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential, and Bimodal Gaussian Mixture), the expected estimation bias is less than 1% and the uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small-sample bias can be as large as 10%, and for BC as large as 50%. We speculate that estimating quantile locations, rather than bin probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf.
(This article belongs to the Section Information Theory, Probability and Statistics)
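A minimal sketch of the QS idea (ours, not the authors' code): treat the pdf as piecewise-uniform between sample quantiles that carry equal probability mass, which turns the entropy integral into a mean of log spacings:

```python
import numpy as np

def qs_entropy(samples, frac=0.3):
    """Quantile Spacing sketch: n_q equal-mass bins implies each bin has
    density 1/(n_q * width), so h ~ mean(log(n_q * width))."""
    n_q = max(2, int(frac * len(samples)))       # ~0.25-0.35 of sample size
    quantiles = np.quantile(samples, np.linspace(0.0, 1.0, n_q + 1))
    widths = np.diff(quantiles)
    return float(np.mean(np.log(n_q * widths)))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(qs_entropy(x))     # ~1.42 = 0.5 * ln(2*pi*e) for a standard Gaussian
```

The paper adds bootstrapping on top of this basic estimate to quantify sampling variability; that step is omitted here.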

Article
On Representations of Divergence Measures and Related Quantities in Exponential Families
Entropy 2021, 23(6), 726; https://doi.org/10.3390/e23060726 - 08 Jun 2021
Cited by 1 | Viewed by 1585
Abstract
Within exponential families, which may consist of multi-parameter and multivariate distributions, a variety of divergence measures, such as the Kullback–Leibler divergence, the Cressie–Read divergence, the Rényi divergence, and the Hellinger metric, can be explicitly expressed in terms of the respective cumulant function and mean value function. Moreover, the same applies to related entropy and affinity measures. We compile representations scattered in the literature and present a unified approach to the derivation in exponential families. As a statistical application, we highlight their use in the construction of confidence regions in a multi-sample setup.
(This article belongs to the Special Issue Measures of Information)
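As a concrete instance of such representations: for an exponential family $p_{\theta}(x)=h(x)\exp\{\theta^{\top}T(x)-\psi(\theta)\}$ with cumulant function $\psi$ and mean value function $\nabla\psi$, the Kullback–Leibler divergence reduces to a Bregman divergence of $\psi$ (a standard result):

$$ D_{\mathrm{KL}}\left(p_{\theta_{1}}\,\|\,p_{\theta_{2}}\right) = \psi(\theta_{2}) - \psi(\theta_{1}) - (\theta_{2}-\theta_{1})^{\top}\nabla\psi(\theta_{1}). $$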

Article
Switching and Swapping of Quantum Information: Entropy and Entanglement Level
Entropy 2021, 23(6), 717; https://doi.org/10.3390/e23060717 - 04 Jun 2021
Cited by 1 | Viewed by 1905
Abstract
Information switching and swapping seem to be fundamental elements of quantum communication protocols. Another crucial issue is the presence of entanglement, and its level, in the inspected quantum systems. In this article, a formal definition of the operation of swapping local quantum information, together with an existence proof and some elementary properties analysed through the prism of entropy, is presented. As an example of the use of local information swapping, we demonstrate a realisation of the quantum switch. Entanglement levels during the operation of the switch are calculated with the Negativity measure and a separability criterion based on the von Neumann entropy, spectral decomposition, and Schmidt decomposition. Results of numerical experiments, during which the entanglement levels are estimated for the systems under consideration with and without distortions, are presented. The noise is generated by the Dzyaloshinskii–Moriya interaction, and the intrinsic decoherence is modelled by the Milburn equation. This work contains a switch realisation in circuit form, built out of elementary quantum gates, and a scheme of the circuit which estimates the levels of entanglement during the switch's operation.
(This article belongs to the Special Issue Methods and Applications of Quantum Data Processing)

Article
A Geometric Perspective on Information Plane Analysis
Entropy 2021, 23(6), 711; https://doi.org/10.3390/e23060711 - 03 Jun 2021
Cited by 1 | Viewed by 2156
Abstract
Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
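For orientation, the binning estimator discussed here can be sketched as follows: for a deterministic network, discretising the layer activations T makes I(X;T) equal to the entropy of the bin occupancy (a simplified sketch under our own naming, not the paper's code):

```python
import numpy as np

def binned_information(activations, n_bins=30):
    """I(X;T) ~ H(bin(T)) for a deterministic layer: discretise each neuron's
    activation, then count how many inputs share each joint bin pattern."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)[1:-1]
    ids = np.digitize(activations, edges)            # (n_samples, n_neurons)
    _, counts = np.unique(ids, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())            # in bits

# toy layer: 1000 inputs, 3 hidden activations
rng = np.random.default_rng(0)
acts = np.tanh(rng.normal(size=(1000, 3)))
print(binned_information(acts))
```

The estimate depends strongly on n_bins, which is precisely the sensitivity that the paper's geometric interpretation is meant to explain.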

Article
Information and Self-Organization II: Steady State and Phase Transition
Entropy 2021, 23(6), 707; https://doi.org/10.3390/e23060707 - 02 Jun 2021
Cited by 9 | Viewed by 2541
Abstract
This paper starts from Schrödinger's famous question "What is life?" and elucidates answers that invoke, in particular, Friston's free energy principle and its relation to the method of Bayesian inference and to Synergetics' second foundation, which utilizes Jaynes' maximum entropy principle. Our presentation reflects the shift from the emphasis on physical principles to principles of information theory and Synergetics. In view of the expected general audience of this issue, we have chosen a somewhat tutorial style that does not require special knowledge of physics but familiarizes the reader with concepts rooted in information theory and Synergetics.
(This article belongs to the Special Issue Information and Self-Organization II)

Article
Disentangling the Information in Species Interaction Networks
Entropy 2021, 23(6), 703; https://doi.org/10.3390/e23060703 - 02 Jun 2021
Viewed by 1977
Abstract
Shannon’s entropy measure is a popular means for quantifying ecological diversity. We explore how one can use information-theoretic measures (that are often called indices in ecology) on joint ensembles to study the diversity of species interaction networks. We leverage the little-known balance equation to decompose the network information into three components describing the species abundance, specificity, and redundancy. This balance reveals that there exists a fundamental trade-off between these components. The decomposition can be straightforwardly extended to analyse networks through time as well as space, leading to the corresponding notions for alpha, beta, and gamma diversity. Our work aims to provide an accessible introduction for ecologists. To this end, we illustrate the interpretation of the components on numerous real networks. The corresponding code is made available to the community in the specialised Julia package EcologicalNetworks.jl.
(This article belongs to the Special Issue Information Theory-Based Approach to Assessing Ecosystem)
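The decomposition rests on standard joint-ensemble identities; here is a small sketch of those identities (our own minimal Python version rather than the article's balance equation or its Julia package) for a normalised interaction matrix P:

```python
import numpy as np

def network_information(P):
    """For a joint distribution P[i, j] over interacting species pairs,
    split H(X,Y) using I(X;Y) and the conditional entropies."""
    H = lambda p: float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    hxy = H(P.ravel())
    hx, hy = H(P.sum(axis=1)), H(P.sum(axis=0))
    return {"H(X,Y)": hxy,
            "I(X;Y)": hx + hy - hxy,      # specificity of interactions
            "H(X|Y)": hxy - hy,
            "H(Y|X)": hxy - hx}

# toy plant-pollinator visitation matrix, normalised to probabilities
counts = np.array([[8.0, 1.0, 0.0], [1.0, 6.0, 2.0], [0.0, 1.0, 5.0]])
print(network_information(counts / counts.sum()))
```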

Article
Information Geometric Theory in the Prediction of Abrupt Changes in System Dynamics
Entropy 2021, 23(6), 694; https://doi.org/10.3390/e23060694 - 31 May 2021
Cited by 10 | Viewed by 2042
Abstract
Detection and measurement of abrupt changes in a process can provide us with important tools for decision making in systems management. In particular, they can be utilised to predict the onset of a sudden event, such as a rare, extreme event which causes an abrupt dynamical change in the system. Here, we investigate the prediction capability of information theory by focusing on how sensitive the information-geometric theory (information length diagnostics) and the entropy-based information-theoretical method (information flow) are to abrupt changes. To this end, we utilise a non-autonomous Kramers equation, including a sudden perturbation to the system to mimic the onset of a sudden event, and calculate time-dependent probability density functions (PDFs) and various statistical quantities with the help of numerical simulations. We show that the information length diagnostics predict the onset of a sudden event better than the information flow. Furthermore, it is explicitly shown that the information flow, like any other entropy-based measure, has limitations in measuring perturbations which do not affect entropy.
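For reference, the information length diagnostic referred to here is typically defined from the time-dependent PDF $p(x,t)$ as

$$ \mathcal{L}(t) = \int_{0}^{t} \sqrt{\int \frac{1}{p(x,t_{1})}\left[\frac{\partial p(x,t_{1})}{\partial t_{1}}\right]^{2} dx}\;\, dt_{1}, $$

which measures the cumulative number of statistically distinguishable states the system passes through up to time $t$.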

Article
Medium Entropy Reduction and Instability in Stochastic Systems with Distributed Delay
Entropy 2021, 23(6), 696; https://doi.org/10.3390/e23060696 - 31 May 2021
Cited by 3 | Viewed by 2606
Abstract
Many natural and artificial systems are subject to some sort of delay, which can be in the form of a single discrete delay or distributed over a range of times. Here, we discuss the impact of this distribution on the (thermo-)dynamical properties of time-delayed stochastic systems. To this end, we study a simple classical model with white and colored noise, and focus on the class of Gamma-distributed delays, which includes a variety of distinct delay distributions typical for feedback experiments and biological systems. A physical application is a colloid subject to time-delayed feedback control, which is, in principle, experimentally realizable by co-moving optical traps. We uncover several unexpected phenomena in regard to the system's linear stability and its thermodynamic properties. First, increasing the mean delay time can destabilize or stabilize the process, depending on the distribution of the delay. Second, for all considered distributions, the heat dissipated by the controlled system (e.g., the colloidal particle) can become negative, which implies that the delay force extracts energy and entropy from the bath. As we show here, this refrigerating effect is particularly pronounced for exponential delay. For a specific non-reciprocal realization of a control device, we find that the entropic costs, measured by the total entropy production of the system plus controller, are the lowest for exponential delay. The exponential delay further yields the largest stable parameter regions. In this sense, exponential delay represents the most effective and robust type of delayed feedback.
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)

Article
Stochastic Thermodynamics of a Piezoelectric Energy Harvester Model
Entropy 2021, 23(6), 677; https://doi.org/10.3390/e23060677 - 27 May 2021
Cited by 5 | Viewed by 2060
Abstract
We experimentally study a piezoelectric energy harvester driven by broadband random vibrations. We show that a linear model, consisting of an underdamped Langevin equation for the dynamics of the tip mass, electromechanically coupled with a capacitor and a load resistor, can accurately describe the experimental data. In particular, the theoretical model allows us to define fluctuating currents and to study the stochastic thermodynamics of the system, with a focus on the distribution of the extracted work over different time intervals. Our analytical and numerical analysis of the linear model is successfully compared to the experiments.

Article
Socio-Economic Impact of the Covid-19 Pandemic in the U.S.
Entropy 2021, 23(6), 673; https://doi.org/10.3390/e23060673 - 27 May 2021
Cited by 12 | Viewed by 3947
Abstract
This paper proposes a dynamic cascade model to investigate the systemic risk posed by sector-level industries within the U.S. inter-industry network. We then use this model to study the effect of the disruptions presented by Covid-19 on the U.S. economy. We construct a weighted digraph G = (V,E,W) using the industry-by-industry total requirements table for 2018, provided by the Bureau of Economic Analysis (BEA). We impose an initial shock that disrupts the production capacity of one or more industries, and we calculate the propagation of production shortages with a modified Cobb–Douglas production function. For the Covid-19 case, we model the initial shock based on the loss of labor between March and April 2020 as reported by the Bureau of Labor Statistics (BLS). The industries within the network are assigned a resilience that determines the ability of an industry to absorb input losses, such that if the rate of input loss exceeds the resilience, the industry fails, and its outputs go to zero. We observed a critical resilience, such that, below this critical value, the network experienced a catastrophic cascade resulting in total network collapse. Lastly, we model the economic recovery from June 2020 through March 2021 using BLS data.
(This article belongs to the Special Issue Structures and Dynamics of Economic Complex Networks)
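For context, the unmodified Cobb–Douglas production function underlying the shock-propagation step has the standard form

$$ y_{j} = A_{j} \prod_{i} x_{ij}^{\alpha_{ij}}, \qquad \sum_{i} \alpha_{ij} = 1, $$

where $y_{j}$ is the output of industry $j$, $x_{ij}$ its input from industry $i$, and $\alpha_{ij}$ the corresponding input share; the article modifies this form to handle input losses and the resilience threshold.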

Article
Gradient Profile Estimation Using Exponential Cubic Spline Smoothing in a Bayesian Framework
Entropy 2021, 23(6), 674; https://doi.org/10.3390/e23060674 - 27 May 2021
Viewed by 1777
Abstract
Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system and then compute the gradient profile by taking the subsequent analytic derivative of the estimated system. The underlying system is often estimated by fitting or smoothing the data using other techniques. Taking the subsequent analytic derivative of an estimated function can be ill-posed. This becomes worse as the noise in the system increases. As a result, the uncertainty generated in the gradient estimate increases. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise. The accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods.
(This article belongs to the Collection Advances in Applied Statistical Mechanics)

Article
A Maximum Entropy Model of Bounded Rational Decision-Making with Prior Beliefs and Market Feedback
Entropy 2021, 23(6), 669; https://doi.org/10.3390/e23060669 - 26 May 2021
Cited by 6 | Viewed by 3252
Abstract
Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian competition. The model explicitly captures the boundedness of agents (limited in their information-processing capacity) as the cost of information acquisition for expanding their prior beliefs. The expansion is measured as the Kullback–Leibler divergence between posterior decisions and prior beliefs. When information acquisition is free, the homo economicus agent is recovered, while when information acquisition becomes costly, agents instead revert to their prior beliefs. The maximum entropy principle is used to infer the least biased decisions, based upon the notion of Smithian competition formalised within the Quantal Response Statistical Equilibrium framework. The incorporation of prior beliefs into such a framework allows us to systematically explore the effects of prior beliefs on decision-making in the presence of market feedback, as well as, importantly, adding a temporal interpretation to the framework. We verified the proposed model using Australian housing market data, showing how the incorporation of prior knowledge alters the resulting agent decisions. Specifically, it allowed for the separation of the past beliefs and the utility-maximisation behaviour of the agent, as well as analysis of the evolution of agent beliefs.
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)

Article
Ordinal Pattern Dependence in the Context of Long-Range Dependence
Entropy 2021, 23(6), 670; https://doi.org/10.3390/e23060670 - 26 May 2021
Cited by 3 | Viewed by 2047
Abstract
Ordinal pattern dependence is a multivariate dependence measure based on the co-movement of two time series. In strong connection to ordinal time series analysis, the ordinal information is taken into account to derive robust results on the dependence between the two processes. This article deals with ordinal pattern dependence for long-range dependent time series, including mixed cases of short- and long-range dependence. We investigate the limit distributions of estimators of ordinal pattern dependence. In doing so, we point out the differences that arise when the underlying time series have different dependence structures. Depending on these assumptions, central and non-central limit theorems are proven. The limit distributions for the latter can be included in the class of multivariate Rosenblatt processes. Finally, a simulation study is provided to illustrate our theoretical findings.
(This article belongs to the Special Issue Time Series Modelling)

Article
Shall I Work with Them? A Knowledge Graph-Based Approach for Predicting Future Research Collaborations
Entropy 2021, 23(6), 664; https://doi.org/10.3390/e23060664 - 25 May 2021
Cited by 2 | Viewed by 2992
Abstract
We consider the prediction of future research collaborations as a link prediction problem applied to a scientific knowledge graph. To the best of our knowledge, this is the first work on the prediction of future research collaborations that combines structural and textual information of a scientific knowledge graph through a purposeful integration of graph algorithms and natural language processing techniques. Our work: (i) investigates whether the integration of unstructured textual data into a single knowledge graph affects the performance of a link prediction model; (ii) studies the effect of previously proposed graph-kernel-based approaches on the performance of an ML model, as far as the link prediction problem is concerned; and (iii) proposes a three-phase pipeline that enables the exploitation of structural and textual information, as well as of pre-trained word embeddings. We benchmark the proposed approach against classical link prediction algorithms using accuracy, recall, and precision as our performance metrics. Finally, we empirically test our approach through various feature combinations with respect to the link prediction problem. Our experiments with the new COVID-19 Open Research Dataset demonstrate a significant improvement of the abovementioned performance metrics in the prediction of future research collaborations.
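As a flavour of the "classical link prediction algorithms" used as baselines, here is a toy structural-similarity example with NetworkX (the graph is hypothetical; the paper's pipeline additionally exploits textual embeddings):

```python
import networkx as nx

# toy co-authorship graph: edges are past research collaborations
G = nx.Graph([("alice", "bob"), ("bob", "carol"),
              ("carol", "dave"), ("alice", "carol")])

candidates = [("alice", "dave"), ("bob", "dave")]
for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(f"Jaccard({u}, {v}) = {score:.2f}")
for u, v, score in nx.adamic_adar_index(G, candidates):
    print(f"Adamic-Adar({u}, {v}) = {score:.2f}")
```

Higher-scoring author pairs share more (or rarer) common collaborators and are therefore predicted as more likely future collaborations.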

Article
Menzerath’s Law in the Syntax of Languages Compared with Random Sentences
Entropy 2021, 23(6), 661; https://doi.org/10.3390/e23060661 - 25 May 2021
Cited by 3 | Viewed by 1607
Abstract
The Menzerath law is considered to show an aspect of the complexity underlying natural language. This law suggests that, for a linguistic unit, the size (y) of a linguistic construct decreases as the number (x) of constructs in the unit increases. This article investigates this property syntactically, with x as the number of constituents modifying the main predicate of a sentence and y as the size of those constituents in terms of the number of words. Following previous articles that demonstrated that the Menzerath property held for dependency corpora, such as in Czech and Ukrainian, this article first examines how well the property applies across languages by using the entire Universal Dependency dataset ver. 2.3, including 76 languages over 129 corpora and the Penn Treebank (PTB). The results show that the law holds reasonably well for x>2. Then, for comparison, the property is investigated with syntactically randomized sentences generated from the PTB. These results show that the property is almost reproducible even from simple random data. Further analysis of the property highlights more detailed characteristics of natural language.
(This article belongs to the Section Complexity)
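The property is usually quantified by fitting the Menzerath–Altmann law $y = a\,x^{b}e^{-cx}$; a small sketch with SciPy on hypothetical (x, y) pairs (the real study fits dependency treebank counts):

```python
import numpy as np
from scipy.optimize import curve_fit

def menzerath(x, a, b, c):
    """Menzerath-Altmann law: construct size y vs. number of constructs x."""
    return a * np.power(x, b) * np.exp(-c * x)

# hypothetical data: constituents per sentence vs. mean constituent length
x = np.arange(1.0, 9.0)
y = np.array([9.1, 7.4, 6.2, 5.5, 5.0, 4.7, 4.4, 4.2])
params, _ = curve_fit(menzerath, x, y, p0=(9.0, -0.3, 0.01))
print(dict(zip("abc", np.round(params, 3))))
```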

Article
A Thermodynamic Approach to Measuring Entropy in a Few-Electron Nanodevice
Entropy 2021, 23(6), 640; https://doi.org/10.3390/e23060640 - 21 May 2021
Cited by 9 | Viewed by 2201
Abstract
The entropy of a system gives a powerful insight into its microscopic degrees of freedom; however, standard experimental ways of measuring entropy through heat capacity are hard to apply to nanoscale systems, as they require the measurement of increasingly small amounts of heat. Two alternative entropy measurement methods have been recently proposed for nanodevices: through charge balance measurements and transport properties. We describe a self-consistent thermodynamic framework for applying thermodynamic relations to few-electron nanodevices—small systems, where fluctuations in particle number are significant, whilst highlighting several ongoing misconceptions. We derive a relation (a consequence of a Maxwell relation for small systems), which describes both existing entropy measurement methods as special cases, while also allowing the experimentalist to probe the intermediate regime between them. Finally, we independently prove the applicability of our framework in systems with complex microscopic dynamics—those with many excited states of various degeneracies—from microscopic considerations.
(This article belongs to the Special Issue Nature of Entropy and Its Direct Metrology)
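As a pointer to the kind of relation involved: from the free energy differential $dF = -S\,dT + \mu\,dN$ one obtains the textbook Maxwell relation

$$ \left(\frac{\partial S}{\partial N}\right)_{T} = -\left(\frac{\partial \mu}{\partial T}\right)_{N}, $$

which underlies charge-balance entropy measurements; the article derives the small-system counterpart that remains valid when particle-number fluctuations are significant.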

Article
Characterization of a Two-Photon Quantum Battery: Initial Conditions, Stability and Work Extraction
Entropy 2021, 23(5), 612; https://doi.org/10.3390/e23050612 - 14 May 2021
Cited by 16 | Viewed by 2312
Abstract
We consider a quantum battery based on a two-level system coupled with cavity radiation by means of a two-photon interaction. Various figures of merit, such as the stored energy, average charging power, energy fluctuations, and extractable work, are investigated, considering, as possible initial conditions for the cavity, a Fock state, a coherent state, and a squeezed state. We show that the first state leads to the best performance for the battery. However, a coherent state with the same average number of photons, even if it is affected by stronger fluctuations in the stored energy, results in quite interesting performance, in particular since it allows almost all of the stored energy to be extracted as usable work at short enough times.
(This article belongs to the Special Issue Non-equilibrium Thermodynamics in the Quantum Regime)

Article
Information Structures for Causally Explainable Decisions
Entropy 2021, 23(5), 601; https://doi.org/10.3390/e23050601 - 13 May 2021
Cited by 3 | Viewed by 2405
Abstract
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.

Article
EEG Fractal Analysis Reflects Brain Impairment after Stroke
Entropy 2021, 23(5), 592; https://doi.org/10.3390/e23050592 - 11 May 2021
Cited by 5 | Viewed by 2334
Abstract
Stroke is the commonest cause of disability. Novel treatments require an improved understanding of the underlying mechanisms of recovery. Fractal approaches have demonstrated that a single metric can describe the complexity of seemingly random fluctuations of physiological signals. We hypothesize that fractal algorithms applied to electroencephalographic (EEG) signals may track brain impairment after stroke. Sixteen stroke survivors were studied in the hyperacute (<48 h) and acute phases (∼1 week after stroke), and 35 stroke survivors were studied during the early subacute phase (from 8 days to 32 days, and again ∼2 months after stroke). We compared their resting-state EEG fractal measures (i.e., the Higuchi index and Tortuosity) with those of 11 healthy controls. Both the Higuchi index and Tortuosity values were significantly lower after a stroke throughout the acute and early subacute stages compared to healthy subjects, reflecting brain activity which is significantly less complex. These indices may be promising metrics for tracking behavioral changes in the very early stages after stroke. Our findings might contribute to the neurorehabilitation quest of identifying reliable biomarkers for a better tailoring of rehabilitation pathways.
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)
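For readers unfamiliar with the Higuchi index, the following sketch implements the standard Higuchi fractal-dimension estimator under common conventions; it is a generic illustration rather than the authors' code, and the maximum delay k_max is an arbitrary illustrative choice.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal.

    For each delay k, average the normalized lengths L(k) of the k
    down-sampled curves, then fit log L(k) against log(1/k); the slope
    is the fractal dimension (between 1 and 2 for a 1-D signal).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_L = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # m-th down-sampled curve
            norm = (n - 1) / ((len(idx) - 1) * k)    # length normalization
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        log_L.append(np.log(np.mean(lengths)))
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), log_L, 1)
    return slope

# Sanity check: white noise should give a dimension close to 2.
print(higuchi_fd(np.random.default_rng(0).standard_normal(2000)))
```

A lower value on resting-state EEG epochs would correspond, in the study's terms, to less complex brain activity.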

Article
Unveiling Operator Growth Using Spin Correlation Functions
Entropy 2021, 23(5), 587; https://doi.org/10.3390/e23050587 - 10 May 2021
Cited by 14 | Viewed by 1985
Abstract
In this paper, we study non-equilibrium dynamics induced by a sudden quench of strongly correlated Hamiltonians with all-to-all interactions. Relying on a Sachdev–Ye–Kitaev (SYK)-based quench protocol, we show that the time evolution of simple spin-spin correlation functions is highly sensitive to the degree of k-locality of the corresponding operators once an appropriate set of fundamental fields is identified. By tracking the time evolution of specific spin-spin correlation functions and their decay, we argue that it is possible to distinguish between operator-hopping and operator-growth dynamics, the latter being a hallmark of quantum chaos in many-body quantum systems. Such an observation could, in turn, constitute a promising tool for probing the emergence of chaotic behavior that is readily accessible in state-of-the-art quench setups. Full article
(This article belongs to the Special Issue Entropy and Complexity in Quantum Dynamics)
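To convey the flavor of a quench protocol and of spin-spin correlators (the paper's actual construction is SYK-based, with Majorana fermions and disorder averaging, which the toy model below does not reproduce), one can exactly diagonalize a small random all-to-all spin Hamiltonian and track a connected correlator after a sudden quench:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-spin system."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

n = 6
rng = np.random.default_rng(0)

# Toy all-to-all random two-body spin Hamiltonian (illustrative only)
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        J = rng.normal(scale=1.0 / np.sqrt(n))
        H += J * (site_op(sx, i, n) @ site_op(sx, j, n)
                  + site_op(sz, i, n) @ site_op(sz, j, n))

# Sudden quench: evolve the all-spins-up product state under H
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0
Z0, Z1 = site_op(sz, 0, n), site_op(sz, 1, n)

for t in np.linspace(0.0, 5.0, 6):
    psi = expm(-1j * H * t) @ psi0
    c = np.vdot(psi, Z0 @ Z1 @ psi) - np.vdot(psi, Z0 @ psi) * np.vdot(psi, Z1 @ psi)
    print(f"t = {t:3.1f}   connected <Z0 Z1> = {c.real:+.4f}")
```

The decay profile of such correlators is the kind of observable the authors use to discriminate operator hopping from operator growth.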

Review
The Phase Space Model of Nonrelativistic Quantum Mechanics
Entropy 2021, 23(5), 581; https://doi.org/10.3390/e23050581 - 08 May 2021
Viewed by 1636
Abstract
We focus on several questions arising in the modelling of quantum systems on a phase space. First, we discuss the choice of phase space and its structure, including the interesting case of a discrete phase space. Then, we introduce the respective algebras of functions containing quantum observables. We also consider the possibility of performing strict calculations and indicate cases where only formal considerations are possible. We analyse alternative realisations of strict and formal calculi, which are determined by different kernels. Finally, two classes of Wigner functions as representations of states are investigated. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
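As a concrete handle on the last point, the Wigner function of a pure state ψ(x) can be evaluated directly from its textbook integral definition; the snippet below is an illustration under standard conventions (ħ = m = ω = 1), not an excerpt from the review, and checks the numerical transform of the harmonic-oscillator ground state against the known Gaussian result.

```python
import numpy as np

# Wigner function of a pure state psi(x):
#   W(x, p) = (1 / (pi * hbar)) * ∫ psi*(x + y) psi(x - y) exp(2 i p y / hbar) dy
hbar = 1.0

def wigner_point(psi, x, p, y_grid):
    dy = y_grid[1] - y_grid[0]
    vals = np.conj(psi(x + y_grid)) * psi(x - y_grid) * np.exp(2j * p * y_grid / hbar)
    return (np.sum(vals) * dy / (np.pi * hbar)).real

# Harmonic-oscillator ground state; its exact Wigner function is
# W(x, p) = exp(-(x**2 + p**2)) / pi, a positive Gaussian on phase space.
psi0 = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)

y = np.linspace(-8.0, 8.0, 2001)
for x, p in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]:
    exact = np.exp(-(x ** 2 + p ** 2)) / np.pi
    print(f"W({x}, {p}) = {wigner_point(psi0, x, p, y):.6f}   (exact {exact:.6f})")
```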
Review
Advances in Atomtronics
Entropy 2021, 23(5), 534; https://doi.org/10.3390/e23050534 - 27 Apr 2021
Cited by 4 | Viewed by 1342
Abstract
Atomtronics is a relatively new subfield of atomic physics that aims to realize the device behavior of electronic components in ultracold atom-optical systems. The fact that these systems are coherent makes them particularly interesting since, in addition to current, one can impart quantum states onto the current carriers themselves or perhaps perform quantum computational operations on them. After reviewing the fundamental ideas of this subfield, we report on the theoretical and experimental progress made towards developing externally driven and closed-loop devices. The functionality and potential applications of these atom analogs of electronic and spintronic systems are also discussed. Full article
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))

Article
Unifying Large- and Small-Scale Theories of Coordination
Entropy 2021, 23(5), 537; https://doi.org/10.3390/e23050537 - 27 Apr 2021
Cited by 20 | Viewed by 4341
Abstract
Coordination is a ubiquitous feature of all living things. It occurs by virtue of informational coupling among component parts and processes and can be quite specific (as when cells in the brain resonate to signals in the environment) or nonspecific (as when simple diffusion creates a source–sink dynamic for gene networks). Existing theoretical models of coordination—from bacteria to brains to social groups—typically focus on systems with very large numbers of elements (N→∞) or systems with only a few elements coupled together (typically N = 2). Though sharing a common inspiration in Nature’s propensity to generate dynamic patterns, both approaches have proceeded largely independently of each other. Ideally, one would like a theory that applies to phenomena observed on all scales. Recent experimental research by Mengsen Zhang and colleagues on intermediate-sized ensembles (in between the few and the many) proves to be the key to uniting large- and small-scale theories of coordination. Disorder–order transitions, multistability, order–order phase transitions, and especially metastability are shown to figure prominently on multiple levels of description, suggestive of a basic Coordination Dynamics that operates on all scales. This unified coordination dynamics turns out to be a marriage of two well-known models of large- and small-scale coordination: the former based on statistical mechanics (Kuramoto) and the latter based on the concepts of Synergetics and nonlinear dynamics (extended Haken–Kelso–Bunz or HKB). We show that models of the many and the few, previously quite unconnected, are thereby unified in a single formulation. The research has led to novel topological methods to handle the higher-dimensional dynamics of coordination in complex systems and has implications not only for understanding coordination but also for the design of (biorhythm-inspired) computers. Full article
(This article belongs to the Special Issue Information and Self-Organization II)
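The flavor of such a unified formulation can be conveyed by a Kuramoto-type system whose coupling carries a second Fourier mode; with the extra mode switched off it reduces to Kuramoto, while for N = 2 it exhibits HKB-like in-phase/anti-phase bistability. The equation and parameters below are illustrative stand-ins, not the authors' exact model.

```python
import numpy as np

def unified_coordination(theta0, omega, a=1.0, b=0.2, dt=0.01, steps=5000):
    """Euler integration of
        dtheta_i/dt = omega_i
            - (1/N) * sum_j [a*sin(theta_i - theta_j) + 2b*sin(2*(theta_i - theta_j))].
    b = 0 recovers the Kuramoto model; the second harmonic supplies an
    HKB-style anti-phase attractor."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]
        coupling = a * np.sin(diff) + 2 * b * np.sin(2 * diff)
        theta += dt * (omega - coupling.mean(axis=1))
    return theta % (2 * np.pi)

rng = np.random.default_rng(1)
n = 8                                    # an intermediate-sized ensemble
omega = rng.normal(0.0, 0.1, n)          # nearly identical natural frequencies
theta = unified_coordination(rng.uniform(0, 2 * np.pi, n), omega)
r = abs(np.exp(1j * theta).mean())       # Kuramoto order parameter
print(f"order parameter r = {r:.3f}")
```

Sweeping a, b, and the frequency spread exposes the disorder–order transitions and metastable regimes discussed in the article.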

Article
α-Geodesical Skew Divergence
Entropy 2021, 23(5), 528; https://doi.org/10.3390/e23050528 - 25 Apr 2021
Cited by 3 | Viewed by 1939
Abstract
The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter λ, with the other distribution. This divergence approximates the KL divergence without requiring the target distribution to be absolutely continuous with respect to the source distribution. In this paper, an information-geometric generalization of the skew divergence, called the α-geodesical skew divergence, is proposed, and its properties are studied. Full article
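Under one common convention (assumed here; the paper's α-geodesical generalization is more involved), the skew divergence is KL(p ∥ (1 − λ)p + λq), which stays finite even where q vanishes on the support of p:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, lam=0.99):
    """Skew divergence KL(p || (1 - lam) * p + lam * q).

    Mixing a little of p into q keeps the second argument strictly
    positive wherever p is, so absolute continuity is not required;
    lam -> 1 recovers KL(p || q)."""
    return kl(p, (1.0 - lam) * p + lam * q)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])    # KL(p || q) itself would be infinite here
print(skew_divergence(p, q))     # finite: 0.5 * log(100) ≈ 2.303
```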

Article
Novel Features for Binary Time Series Based on Branch Length Similarity Entropy
Entropy 2021, 23(4), 480; https://doi.org/10.3390/e23040480 - 18 Apr 2021
Cited by 1 | Viewed by 1837
Abstract
Branch length similarity (BLS) entropy is defined on a network consisting of a single node and its branches. In this study, we mapped a binary time-series signal onto the circumference of a time circle so that BLS entropy can be calculated for the binary time series. We obtained the BLS entropy values for the “1” signals on the time circle; this set of values is the BLS entropy profile. We selected the local maximum (minimum) points, slopes, and inflection points of the entropy profile as characteristic features of the binary time series and investigated their significance. A local maximum (minimum) point indicates a time at which the rate of change in the signal density becomes zero, while the slopes and inflection points correspond to the degree of change in the signal density and to the times at which those changes occur, respectively. Moreover, we show that these features can be widely used in binary time-series analysis by applying them to characterize the movement trajectory of Caenorhabditis elegans. We also note problems related to the features that remain to be explored mathematically and propose candidates for additional features based on the BLS entropy profile. Full article
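A minimal sketch of how such a profile might be computed follows; the chord-length definition of branch lengths on the time circle and the normalized entropy form are our assumptions for illustration and may differ in detail from the authors' construction.

```python
import numpy as np

def bls_entropy(lengths):
    """BLS entropy of one node: Shannon entropy of the normalized branch
    lengths, divided by log(n) so that it lies in [0, 1]."""
    L = np.asarray(lengths, dtype=float)
    if len(L) < 2:
        return 0.0
    p = L / L.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

def bls_profile(binary_series):
    """Map a binary series onto a unit circle (the "time circle") and
    compute the BLS entropy of each '1', taking the chord lengths from
    that '1' to every other '1' as the branch lengths."""
    x = np.asarray(binary_series)
    ones = np.flatnonzero(x)
    angles = 2 * np.pi * ones / len(x)
    profile = []
    for a in angles:
        chords = 2 * np.sin(np.abs(angles - a) / 2)   # unit-circle chords
        profile.append(bls_entropy(chords[chords > 0]))
    return np.array(profile)

signal = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
print(np.round(bls_profile(signal), 3))
```

The extrema, slopes, and inflection points of such a profile are then the candidate features described in the abstract.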