Entropy, Volume 22, Issue 11 (November 2020) – 143 articles

Cover Story: The uncertainty principle is unquestionably one of the foundational pillars of quantum mechanics, marking the indeterministic nature of the microscopic world. Following Heisenberg's seminal exposition and its standardization by Kennard and Robertson as a relation for the indeterminacy of quantum states, a variety of other formulations appeared that address the diverse manifestations of quantum uncertainty. A novel uncertainty relation for errors, which also covers the standard indeterminacy relation as a special case, is reported with a focus on general POVM measurements. The use of the relation is demonstrated in familiar two-state quantum systems, in which it is found to offer virtually the tightest lower bound possible for both the errors and the state indeterminacy regarding a pair of observables.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
30 pages, 460 KiB  
Article
Differential Geometric Aspects of Parametric Estimation Theory for States on Finite-Dimensional C*-Algebras
by Florio M. Ciaglia, Jürgen Jost and Lorenz Schwachhöfer
Entropy 2020, 22(11), 1332; https://doi.org/10.3390/e22111332 - 23 Nov 2020
Cited by 7 | Viewed by 2197
Abstract
A geometrical formulation of estimation theory for finite-dimensional C*-algebras is presented. This formulation allows one to deal with the classical and quantum cases in a single, unifying mathematical framework. The derivation of the Cramér–Rao and Helstrom bounds for parametric statistical models with discrete and finite outcome spaces is presented. Full article
(This article belongs to the Special Issue Quantum Statistical Decision and Estimation Theory)
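For orientation, the two bounds named in the abstract above have the following textbook forms, stated here for a one-parameter model rather than in the C*-algebraic language of the paper:

```latex
% Classical Cramér–Rao bound for an unbiased estimator \hat{\theta}
% based on n i.i.d. samples from p_\theta:
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\,I(\theta)},
\qquad
I(\theta) = \mathbb{E}_{p_\theta}\!\left[\bigl(\partial_\theta \log p_\theta(X)\bigr)^2\right].

% Quantum (Helstrom) bound: L_\theta is the symmetric logarithmic derivative,
% defined implicitly by \partial_\theta\rho_\theta = \tfrac{1}{2}(L_\theta\rho_\theta + \rho_\theta L_\theta):
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\,F_Q(\theta)},
\qquad
F_Q(\theta) = \operatorname{Tr}\!\left[\rho_\theta L_\theta^{2}\right].
```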
19 pages, 540 KiB  
Article
Information Network Modeling for U.S. Banking Systemic Risk
by Giancarlo Nicola, Paola Cerchiello and Tomaso Aste
Entropy 2020, 22(11), 1331; https://doi.org/10.3390/e22111331 - 23 Nov 2020
Cited by 11 | Viewed by 3034
Abstract
In this work, we investigate whether information theory measures like mutual information and transfer entropy, extracted from a bank network, Granger-cause financial stress indexes such as the LIBOR-OIS (London Interbank Offered Rate–Overnight Index Swap) spread, the STLFSI (St. Louis Fed Financial Stress Index), and the USD/CHF (US Dollar/Swiss Franc) exchange rate. The information theory measures are extracted from a Gaussian Graphical Model constructed from daily stock time series of the top 74 listed US banks. The graphical model is calculated with a recently developed algorithm (LoGo), which provides a very fast inference model and allows us to update the graphical model each market day. We can therefore generate daily time series of mutual information and transfer entropy for each bank in the network. The Granger causality between the bank-related measures and the financial stress indexes is investigated with both standard Granger causality and partial Granger causality conditioned on control measures representative of general economic conditions. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
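As a rough illustration of the statistical machinery involved, the sketch below runs a standard Granger-causality test between a synthetic daily information-theoretic series and a synthetic stress index; the variable names and toy data are placeholders, not the authors' dataset or pipeline.

```python
# Sketch: does a bank-level information measure Granger-cause a stress index?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
transfer_entropy = rng.normal(size=500)   # placeholder daily TE series for one bank
# Toy stress index that lags the TE series by two days (wraparound is harmless here).
stress_index = 0.4 * np.roll(transfer_entropy, 2) + rng.normal(size=500)

# Column order matters: the test asks whether the SECOND column helps predict the FIRST.
data = np.column_stack([stress_index, transfer_entropy])
results = grangercausalitytests(data, maxlag=5)
# `results` is a dict keyed by lag; small F-test p-values suggest Granger causality.
```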
42 pages, 1618 KiB  
Review
Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics
by Rodrigo Cofré, Cesar Maldonado and Bruno Cessac
Entropy 2020, 22(11), 1330; https://doi.org/10.3390/e22111330 - 23 Nov 2020
Cited by 6 | Viewed by 3573
Abstract
The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism. Full article
(This article belongs to the Special Issue Generalized Statistical Thermodynamics)
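As a pointer to the variational principle mentioned in the abstract, its simplest finite, memoryless instance is the familiar maximum-entropy (Gibbs) form, and the general thermodynamic-formalism statement reads as follows (a standard formulation, not specific to this review):

```latex
% Maximum-entropy form: among distributions p reproducing measured averages
% \langle f_k \rangle of spike-train observables, the entropy-maximizing one is
p(\omega) \;=\; \frac{1}{Z}\exp\!\Big(\sum_k \lambda_k f_k(\omega)\Big),
\qquad
Z = \sum_{\omega} \exp\!\Big(\sum_k \lambda_k f_k(\omega)\Big).

% Thermodynamic-formalism variational principle: h(\mu) is the
% Kolmogorov–Sinai entropy and P(\phi) the topological pressure,
P(\phi) \;=\; \sup_{\mu}\Big( h(\mu) + \int \phi \, d\mu \Big),
% with the supremum taken over invariant probability measures.
```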
15 pages, 412 KiB  
Article
Load-Sharing Model under Lindley Distribution and Its Parameter Estimation Using the Expectation-Maximization Algorithm
by Chanseok Park, Min Wang, Refah Mohammed Alotaibi and Hoda Rezk
Entropy 2020, 22(11), 1329; https://doi.org/10.3390/e22111329 - 22 Nov 2020
Viewed by 1994
Abstract
A load-sharing system is defined as a parallel system whose load will be redistributed to its surviving components as each of the components fails in the system. Our focus is on making statistical inference on the parameters associated with the lifetime distribution of each component in the system. In this paper, we introduce a methodology that integrates the conventional procedure under the assumption that the load-sharing system is made up of fundamental hypothetical latent random variables. We then develop an expectation–maximization algorithm for performing the maximum likelihood estimation of the system with Lindley-distributed component lifetimes. We adopt several standard simulation techniques to compare the performance of the proposed methodology with that of the Newton–Raphson-type algorithm for the maximum likelihood estimate of the parameter. Numerical results indicate that the proposed method is more effective, consistently reaching a global maximum. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
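For readers unfamiliar with the Lindley distribution, the sketch below shows two standard ingredients the paper builds on, namely sampling via its exponential/gamma mixture representation and the closed-form MLE for a plain i.i.d. sample; it is not the authors' load-sharing EM algorithm.

```python
# Basic Lindley(theta) ingredients: mixture sampling and closed-form MLE.
import numpy as np

rng = np.random.default_rng(1)

def rlindley(theta, size, rng):
    """Draw Lindley(theta): Exp(theta) w.p. theta/(1+theta), else Gamma(2, rate=theta)."""
    is_exp = rng.random(size) < theta / (1.0 + theta)
    exp_part = rng.exponential(scale=1.0 / theta, size=size)
    gamma_part = rng.gamma(shape=2.0, scale=1.0 / theta, size=size)
    return np.where(is_exp, exp_part, gamma_part)

def lindley_mle(x):
    """Closed-form MLE: positive root of xbar*theta^2 + (xbar - 1)*theta - 2 = 0."""
    xbar = np.mean(x)
    return (-(xbar - 1.0) + np.sqrt((xbar - 1.0) ** 2 + 8.0 * xbar)) / (2.0 * xbar)

sample = rlindley(theta=1.5, size=5000, rng=rng)
print(lindley_mle(sample))   # should be close to 1.5
```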
17 pages, 1223 KiB  
Article
A Discretization Approach for the Nonlinear Fractional Logistic Equation
by Mohammad Izadi and Hari M. Srivastava
Entropy 2020, 22(11), 1328; https://doi.org/10.3390/e22111328 - 21 Nov 2020
Cited by 29 | Viewed by 2441
Abstract
The present study aimed to develop and investigate the local discontinuous Galerkin method for the numerical solution of the fractional logistic differential equation, which occurs in many biological and social science phenomena. The fractional derivative is described in the Liouville–Caputo sense. Using the upwind numerical fluxes, the numerical stability of the method is proved in the L norm. With the aid of the shifted Legendre polynomials, the weak form is reduced to a system of algebraic equations to be solved in each subinterval. Furthermore, to handle the nonlinear term, the technique of product approximation is utilized. The utility of the present discretization technique and of some well-known standard schemes is checked through numerical calculations on a range of linear and nonlinear problems with analytical solutions. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)
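For reference, the model equation and the Liouville–Caputo derivative referred to above take the following standard form for 0 < α ≤ 1 (the growth-rate symbol ρ is generic notation here, not taken from the paper):

```latex
% Fractional logistic equation in the Liouville–Caputo sense, 0 < \alpha \le 1:
{}^{C}\!D^{\alpha}_{t}\,u(t) \;=\; \rho\, u(t)\bigl(1 - u(t)\bigr),
\qquad u(0) = u_0,

% with the Caputo fractional derivative defined by
{}^{C}\!D^{\alpha}_{t}\,u(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
\int_{0}^{t} \frac{u'(s)}{(t-s)^{\alpha}}\, ds .
```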
20 pages, 328 KiB  
Article
Strongly Convex Divergences
by James Melbourne
Entropy 2020, 22(11), 1327; https://doi.org/10.3390/e22111327 - 21 Nov 2020
Cited by 5 | Viewed by 1935
Abstract
We consider a sub-class of the f-divergences satisfying a stronger convexity property, which we refer to as strongly convex, or κ-convex divergences. We derive new and old relationships, based on convexity arguments, between popular f-divergences. Full article
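For context, a standard statement of the objects involved is given below; the exact strong-convexity normalization used in the paper may differ, so the second display should be read as one natural interpretation rather than the paper's definition.

```latex
% An f-divergence between probability measures P \ll Q:
D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ,
\qquad f \ \text{convex},\ f(1) = 0 .

% One natural reading of "\kappa-convex": f is \kappa-strongly convex, i.e.
x \mapsto f(x) - \tfrac{\kappa}{2}\,x^{2} \ \text{is convex};
% e.g. KL corresponds to f(x) = x\log x, which is 1-strongly convex on (0,1]
% but not on all of (0,\infty).
```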
25 pages, 1361 KiB  
Article
Diffusion Limitations and Translocation Barriers in Atomically Thin Biomimetic Pores
by Subin Sahu and Michael Zwolak
Entropy 2020, 22(11), 1326; https://doi.org/10.3390/e22111326 - 20 Nov 2020
Cited by 3 | Viewed by 2748
Abstract
Ionic transport in nano- to sub-nano-scale pores is highly dependent on translocation barriers and potential wells. These features in the free-energy landscape are primarily the result of ion dehydration and electrostatic interactions. For pores in atomically thin membranes, such as graphene, other factors come into play. Ion dynamics both inside and outside the geometric volume of the pore can be critical in determining the transport properties of the channel due to several commensurate length scales, such as the effective membrane thickness, radii of the first and the second hydration layers, pore radius, and Debye length. In particular, for biomimetic pores, such as the graphene crown ether we examine here, there are regimes where transport is highly sensitive to the pore size due to the interplay of dehydration and interaction with pore charge. Picometer changes in the size, e.g., due to a minute strain, can lead to a large change in conductance. Outside of these regimes, the small pore size itself gives a large resistance, even when electrostatic factors and dehydration compensate each other to give a relatively flat—e.g., near barrierless—free energy landscape. The permeability, though, can still be large and ions will translocate rapidly after they arrive within the capture radius of the pore. This, in turn, leads to diffusion and drift effects dominating the conductance. The current thus plateaus and becomes effectively independent of pore-free energy characteristics. Measurement of this effect will give an estimate of the magnitude of kinetically limiting features, and experimentally constrain the local electromechanical conditions. Full article
44 pages, 775 KiB  
Article
Bottleneck Problems: An Information and Estimation-Theoretic View
by Shahab Asoodeh and Flavio P. Calmon
Entropy 2020, 22(11), 1325; https://doi.org/10.3390/e22111325 - 20 Nov 2020
Cited by 9 | Viewed by 2556
Abstract
Information bottleneck (IB) and privacy funnel (PF) are two closely related optimization problems which have found applications in machine learning, design of privacy algorithms, capacity problems (e.g., Mrs. Gerber’s Lemma), and strong data processing inequalities, among others. In this work, we first investigate the functional properties of IB and PF through a unified theoretical framework. We then connect them to three information-theoretic coding problems, namely hypothesis testing against independence, noisy source coding, and dependence dilution. Leveraging these connections, we prove a new cardinality bound on the auxiliary variable in IB, making its computation more tractable for discrete random variables. In the second part, we introduce a general family of optimization problems, termed “bottleneck problems”, by replacing mutual information in IB and PF with other notions of mutual information, namely f-information and Arimoto’s mutual information. We then argue that, unlike IB and PF, these problems lead to easily interpretable guarantees in a variety of inference tasks with statistical constraints on accuracy and privacy. While the underlying optimization problems are non-convex, we develop a technique to evaluate bottleneck problems in closed form by equivalently expressing them in terms of lower convex or upper concave envelope of certain functions. By applying this technique to a binary case, we derive closed form expressions for several bottleneck problems. Full article
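For orientation, the two optimization problems contrasted in the abstract are usually written as follows (constrained forms; the paper also works with Lagrangian and generalized-information variants):

```latex
% T is a randomized function of X, so Y -- X -- T forms a Markov chain.
\text{IB:}\quad \max_{P_{T|X}} \; I(Y;T)
\quad \text{subject to} \quad I(X;T) \le R ,
\qquad
\text{PF:}\quad \min_{P_{T|X}} \; I(Y;T)
\quad \text{subject to} \quad I(X;T) \ge R .
% IB seeks a compressed T that is maximally informative about Y,
% while PF seeks a useful T that leaks as little as possible about Y.
```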
38 pages, 549 KiB  
Article
Extending Fibre Nonlinear Interference Power Modelling to Account for General Dual-Polarisation 4D Modulation Formats
by Gabriele Liga, Astrid Barreiro, Hami Rabbani and Alex Alvarado
Entropy 2020, 22(11), 1324; https://doi.org/10.3390/e22111324 - 20 Nov 2020
Cited by 20 | Viewed by 2350
Abstract
In optical communications, four-dimensional (4D) modulation formats encode information onto the quadrature components of two arbitrary orthogonal states of polarisation of the optical field. Many analytical models available in the optical communication literature allow, within a first-order perturbation framework, the computation of the average power of the nonlinear interference (NLI) accumulated in coherent fibre-optic transmission systems. However, all such models only operate under the assumption of transmitted polarisation-multiplexed two-dimensional (PM-2D) modulation formats, which represent only a limited subset of the possible dual-polarisation 4D (DP-4D) formats, namely those where the data transmitted on each polarisation channel are mutually independent and identically distributed. This paper presents a step-by-step mathematical derivation of the extension of existing NLI models to the class of arbitrary DP-4D modulation formats. In particular, the methodology adopted follows that of the popular enhanced Gaussian noise model, albeit dropping most assumptions on the geometry and statistics of the transmitted 4D modulation format. The resulting expressions show that, whilst in the PM-2D case the NLI power depends only on different statistical high-order moments of each polarisation component, for a general DP-4D constellation several other cross-polarisation correlations also need to be taken into account. Full article
(This article belongs to the Special Issue Information Theory of Optical Fiber)
14 pages, 2002 KiB  
Article
Optimization, Stability, and Entropy in Endoreversible Heat Engines
by Julian Gonzalez-Ayala, José Miguel Mateos Roco, Alejandro Medina and Antonio Calvo Hernández
Entropy 2020, 22(11), 1323; https://doi.org/10.3390/e22111323 - 20 Nov 2020
Cited by 17 | Viewed by 2444
Abstract
The stability of endoreversible heat engines has been extensively studied in the literature. In this paper, an alternative dynamic equations system was obtained by using restitution forces that bring the system back to the stationary state. The departing point is the assumption that the system has a stationary fixed point, along with a Taylor expansion in the first order of the input/output heat fluxes, without further specifications regarding the properties of the working fluid or the heat device specifications. Specific cases of the Newton and the phenomenological heat transfer laws in a Carnot-like heat engine model were analyzed. It was shown that the evolution of the trajectories toward the stationary state has relevant consequences on the performance of the system. A major role was played by the symmetries/asymmetries of the conductance ratio σ_hc of the heat transfer law associated with the input/output heat exchanges. Accordingly, three main behaviors were observed: (1) For small σ_hc values, the thermodynamic trajectories evolved near the endoreversible limit, improving the efficiency and power output values with a decrease in entropy generation; (2) for large σ_hc values, the thermodynamic trajectories evolved either near the Pareto front or near the endoreversible limit, and in both cases, they improved the efficiency and power values with a decrease in entropy generation; (3) for the symmetric case (σ_hc = 1), the trajectories evolved either with increasing entropy generation tending toward the Pareto front or with a decrease in entropy generation tending toward the endoreversible limit. Moreover, it was shown that the total entropy generation can define a time scale for both the operation cycle time and the relaxation characteristic time. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
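For context, the classical endoreversible benchmark behind this kind of model is the Curzon–Ahlborn setup under Newton's (linear) heat-transfer law; reading the conductance ratio as σ_hc = σ_h/σ_c is an assumption of this sketch.

```latex
% Endoreversible Carnot-like engine with Newtonian (linear) heat transfer:
\dot{Q}_h = \sigma_h\,(T_h - T_{hw}), \qquad
\dot{Q}_c = \sigma_c\,(T_{cw} - T_c),
% with an internally reversible working fluid operating between T_{hw} and T_{cw}.
% Maximizing the power output gives the Curzon–Ahlborn efficiency
\eta_{CA} \;=\; 1 - \sqrt{\frac{T_c}{T_h}} ,
% which is independent of \sigma_{hc} = \sigma_h/\sigma_c, although the ratio
% shapes the dynamics away from the optimum, as discussed above.
```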
16 pages, 2315 KiB  
Article
Fractional Dynamics Identification via Intelligent Unpacking of the Sample Autocovariance Function by Neural Networks
by Dawid Szarek, Grzegorz Sikora, Michał Balcerek, Ireneusz Jabłoński and Agnieszka Wyłomańska
Entropy 2020, 22(11), 1322; https://doi.org/10.3390/e22111322 - 20 Nov 2020
Cited by 5 | Viewed by 3047
Abstract
Many single-particle tracking data related to the motion in crowded environments exhibit anomalous diffusion behavior. This phenomenon can be described by different theoretical models. In this paper, fractional Brownian motion (FBM) was examined as the exemplary Gaussian process with fractional dynamics. The autocovariance function (ACVF) is a function that completely determines a centered Gaussian process. In the case of experimental data with anomalous dynamics, the main problem is first to recognize the type of anomaly and then to properly reconstruct the physical rules governing such a phenomenon. The challenge is to identify the process from short trajectory inputs. Various approaches to address this problem can be found in the literature, e.g., using theoretical properties of the sample ACVF for a given process. This method is effective; however, it does not utilize all of the information contained in the sample ACVF for a given trajectory, i.e., only values of statistics for selected lags are used for identification. An evolution of this approach is proposed in this paper, where the process is determined based on the knowledge extracted from the ACVF. The designed method is intuitive and uses the available information in a new fashion. Moreover, the knowledge retrieval from the sample ACVF vector is enhanced with a learning-based scheme operating on the most informative subset of available lags, which is proven to be an effective encoder of the properties inherent in complex data. Finally, the robustness of the proposed algorithm for FBM is demonstrated with the use of Monte Carlo simulations. Full article
(This article belongs to the Special Issue Recent Advances in Single-Particle Tracking: Experiment and Analysis)
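For reference, the covariance structure that ACVF-based identification relies on is the standard one for fractional Brownian motion and its increments:

```latex
% Fractional Brownian motion B_H with Hurst exponent H \in (0,1):
\operatorname{Cov}\bigl(B_H(t), B_H(s)\bigr)
 \;=\; \frac{\sigma^{2}}{2}\left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right),
% and for its stationary increments (fractional Gaussian noise) at integer lag k:
\gamma(k) \;=\; \frac{\sigma^{2}}{2}\left( |k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H} \right).
% Subdiffusion corresponds to H < 1/2, normal diffusion to H = 1/2,
% and superdiffusion to H > 1/2.
```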
13 pages, 2709 KiB  
Article
Scattering as a Quantum Metrology Problem: A Quantum Walk Approach
by Francesco Zatelli, Claudia Benedetti and Matteo G. A. Paris
Entropy 2020, 22(11), 1321; https://doi.org/10.3390/e22111321 - 19 Nov 2020
Cited by 8 | Viewed by 2670
Abstract
We address the scattering of a quantum particle by a one-dimensional barrier potential over a set of discrete positions. We formalize the problem as a continuous-time quantum walk on a lattice with an impurity and use the quantum Fisher information as a means to quantify the maximal possible accuracy in the estimation of the height of the barrier. We introduce suitable initial states of the walker and derive the reflection and transmission probabilities of the scattered state. We show that while the quantum Fisher information is affected by the width and central momentum of the initial wave packet, this dependency is weaker for the quantum signal-to-noise ratio. We also show that a dichotomic position measurement provides a nearly optimal detection scheme. Full article
(This article belongs to the Special Issue Transport and Diffusion in Quantum Complex Systems)
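For orientation, the quantum Fisher information and the bound it enters are, in their standard pure-state form (the definition of the signal-to-noise ratio below is the common convention, assumed rather than quoted from the paper):

```latex
% Quantum Fisher information of a pure-state family |\psi_\lambda\rangle and the
% quantum Cramér–Rao bound for M repetitions of the measurement:
F_Q(\lambda) \;=\; 4\Bigl( \langle \partial_\lambda \psi_\lambda | \partial_\lambda \psi_\lambda \rangle
 - \bigl| \langle \psi_\lambda | \partial_\lambda \psi_\lambda \rangle \bigr|^{2} \Bigr),
\qquad
\operatorname{Var}(\hat{\lambda}) \;\ge\; \frac{1}{M\,F_Q(\lambda)} .
% The quantum signal-to-noise ratio is commonly defined as R(\lambda) = \lambda^{2} F_Q(\lambda).
```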
15 pages, 1077 KiB  
Article
Excitation Dynamics in Chain-Mapped Environments
by Dario Tamascelli
Entropy 2020, 22(11), 1320; https://doi.org/10.3390/e22111320 - 19 Nov 2020
Cited by 8 | Viewed by 1805
Abstract
The chain mapping of structured environments is a most powerful tool for the simulation of open quantum system dynamics. Once the environmental bosonic or fermionic degrees of freedom are unitarily rearranged into a one-dimensional structure, the full power of the Density Matrix Renormalization Group (DMRG) can be exploited. Besides resulting in efficient and numerically exact simulations of open quantum system dynamics, chain mapping provides a unique perspective on the environment: the interaction between the system and the environment creates perturbations that travel along the one-dimensional environment at a finite speed, thus providing a natural notion of a light cone, or causal cone. In this work, we investigate the transport of excitations in a chain-mapped bosonic environment. In particular, we explore the relation between the environmental spectral density shape, parameters and temperature, and the dynamics of excitations along the corresponding linear chains of quantum harmonic oscillators. Our analysis unveils fundamental features of the environment evolution, such as localization, percolation and the onset of stationary currents. Full article
(This article belongs to the Special Issue Transport and Diffusion in Quantum Complex Systems)
11 pages, 892 KiB  
Article
Cluster Structure of Optimal Solutions in Bipartitioning of Small Worlds
by Adam Lipowski, António L. Ferreira and Dorota Lipowska
Entropy 2020, 22(11), 1319; https://doi.org/10.3390/e22111319 - 19 Nov 2020
Cited by 1 | Viewed by 1599
Abstract
Using simulated annealing, we examine a bipartitioning of small worlds obtained by adding a fraction of randomly chosen links to a one-dimensional chain or a square lattice. Models defined on small worlds typically exhibit a mean-field behavior, regardless of the underlying lattice. Our work demonstrates that the bipartitioning of small worlds does depend on the underlying lattice. Simulations show that for one-dimensional small worlds, optimal partitions are finite size clusters for any fraction of additional links. In the two-dimensional case, we observe two regimes: when the fraction of additional links is sufficiently small, the optimal partitions have a stripe-like shape, which is lost for a larger number of additional links as optimal partitions become disordered. Some arguments, which interpret additional links as thermal excitations and refer to the thermodynamics of Ising models, suggest a qualitative explanation of such a behavior. The histogram of overlaps suggests that a replica symmetry is broken in a one-dimensional small world. In the two-dimensional case, the replica symmetry seems to hold, but with some additional degeneracy of stripe-like partitions. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex Systems)
12 pages, 17605 KiB  
Article
Effect of Enhanced Gravity on the Microstructure and Mechanical Properties of Al0.9CoCrFeNi High-Entropy Alloy
by Anjun Shi, Ruixuan Li, Yong Zhang, Zhe Wang and Zhancheng Guo
Entropy 2020, 22(11), 1318; https://doi.org/10.3390/e22111318 - 19 Nov 2020
Cited by 2 | Viewed by 2042
Abstract
The influence of enhanced gravity on the microstructure and mechanical properties of the Al0.9CoCrFeNi high-entropy alloy, solidified under normal gravity (1 g) and enhanced gravity (140 g, 210 g, and 360 g) conditions, is reported in this paper. Solidification under enhanced gravity fields resulted in refinement of the columnar nondendritic grain structure and an increase in the area fraction of the body-centered cubic (BCC) phases. The mass transfer strengthened by enhanced gravity promoted element diffusion and enrichment, which caused changes in the composition and microstructure that, in turn, affected the mechanical properties of the alloy. The compressive strength and plasticity of the sample solidified at 360 g were 2845 MPa and 36.4%, respectively, which are the highest values reported to date for the Al0.9CoCrFeNi alloy. Full article
16 pages, 858 KiB  
Article
Look at Tempered Subdiffusion in a Conjugate Map: Desire for the Confinement
by Aleksander Stanislavsky and Aleksander Weron
Entropy 2020, 22(11), 1317; https://doi.org/10.3390/e22111317 - 18 Nov 2020
Cited by 3 | Viewed by 2134
Abstract
The Laplace distribution of random processes has been observed in numerous situations that include glasses, colloidal suspensions, live cells, and firm growth. Its origin is not as trivial as that of the Gaussian distribution, which is supported by the central limit theorem; sums of Laplace-distributed random variables are not Laplace distributed. We discovered a new mechanism leading to the Laplace distribution of observable values. This mechanism changes the contribution ratio between the jump and continuous parts of random processes. Our concept uses properties of Bernstein functions and the subordinators connected with them. Full article
(This article belongs to the Special Issue Recent Advances in Single-Particle Tracking: Experiment and Analysis)
26 pages, 4969 KiB  
Article
Thermodynamic Evaluation and Sensitivity Analysis of a Novel Compressed Air Energy Storage System Incorporated with a Coal-Fired Power Plant
by Peiyuan Pan, Meiyan Zhang, Weike Peng, Heng Chen, Gang Xu and Tong Liu
Entropy 2020, 22(11), 1316; https://doi.org/10.3390/e22111316 - 18 Nov 2020
Cited by 19 | Viewed by 2439
Abstract
A novel compressed air energy storage (CAES) system has been developed, which is innovatively integrated with a coal-fired power plant based on its feedwater heating system. In the hybrid design, the compression heat of the CAES system is transferred to the feedwater of the coal power plant, and the compressed air before the expanders is heated by the feedwater taken from the coal power plant. Furthermore, the exhaust air of the expanders is employed to warm partial feedwater of the coal power plant. Via the suggested integration, the thermal energy storage equipment for a regular CAES system can be eliminated and the performance of the CAES system can be improved. Based on a 350 MW supercritical coal power plant, the proposed concept was thermodynamically evaluated, and the results indicate that the round-trip efficiency and exergy efficiency of the new CAES system can reach 64.08% and 70.01%, respectively. Besides, a sensitivity analysis was conducted to examine the effects of ambient temperature, air storage pressure, expander inlet temperature, and coal power load on the performance of the CAES system. The above work proves that the novel design is efficient under various conditions, providing important insights into the development of CAES technology. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)
15 pages, 1035 KiB  
Article
Entropy Ratio and Entropy Concentration Coefficient, with Application to the COVID-19 Pandemic
by Christoph Bandt
Entropy 2020, 22(11), 1315; https://doi.org/10.3390/e22111315 - 18 Nov 2020
Cited by 12 | Viewed by 2836
Abstract
In order to study the spread of an epidemic over a region as a function of time, we introduce an entropy ratio U describing the uniformity of infections over various states and their districts, and an entropy concentration coefficient C = 1 − U. The latter is a multiplicative version of the Kullback–Leibler distance, with values between 0 and 1. For product measures and self-similar phenomena, it does not depend on the measurement level. Hence, C is an alternative to Gini’s concentration coefficient for measures with variation on different levels. Simple examples concern population density and gross domestic product. Application to time series patterns is indicated with a Markov chain. For the COVID-19 pandemic, entropy ratios indicate a homogeneous distribution of infections and the potential of local action when compared to measures for a whole region. Full article
(This article belongs to the Special Issue Information theory and Symbolic Analysis: Theory and Applications)
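One consistent way to read the two quantities, under the assumption that U is the entropy of the infection shares normalized by its maximum (the abstract itself does not spell out the normalization), is:

```latex
% p_i = share of infections in district i out of N districts, u = uniform distribution:
U \;=\; \frac{H(p)}{\log N}, \qquad H(p) = -\sum_{i=1}^{N} p_i \log p_i ,
\qquad
C \;=\; 1 - U \;=\; \frac{D_{\mathrm{KL}}(p \,\|\, u)}{\log N} ,
% so C = 0 for perfectly uniform spread and C \to 1 when infections
% concentrate in a single district.
```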
26 pages, 5037 KiB  
Article
Personalized Image Classification by Semantic Embedding and Active Learning
by Mofei Song
Entropy 2020, 22(11), 1314; https://doi.org/10.3390/e22111314 - 18 Nov 2020
Cited by 2 | Viewed by 1890
Abstract
Currently, deep learning has shown state-of-the-art performance in image classification with pre-defined taxonomy. However, in a more real-world scenario, different users usually have different classification intents given an image collection. To satisfactorily personalize the requirement, we propose an interactive image classification system with an offline representation learning stage and an online classification stage. During the offline stage, we learn a deep model to extract the feature with higher flexibility and scalability for different users’ preferences. Instead of training the model only with the inter-class discrimination, we also encode the similarity between the semantic-embedding vectors of the category labels into the model. This makes the extracted feature adapt to multiple taxonomies with different granularities. During the online session, an annotation task iteratively alternates with a high-throughput verification task. When performing the verification task, the users are only required to indicate the incorrect prediction without giving the exact category label. For each iteration, our system chooses the images to be annotated or verified based on interactive efficiency optimization. To provide a high interactive rate, a unified active learning algorithm is used to search the optimal annotation and verification set by minimizing the expected time cost. After interactive annotation and verification, the new classified images are used to train a customized classifier online, which reflects the user-adaptive intent of categorization. The learned classifier is then used for subsequent annotation and verification tasks. Experimental results under several public image datasets show that our method outperforms existing methods. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
21 pages, 686 KiB  
Article
Data-Driven Corrections of Partial Lotka–Volterra Models
by Rebecca E. Morrison
Entropy 2020, 22(11), 1313; https://doi.org/10.3390/e22111313 - 18 Nov 2020
Cited by 3 | Viewed by 2202
Abstract
In many applications of interacting systems, we are only interested in the dynamic behavior of a subset of all possible active species. For example, this is true in combustion models (many transient chemical species are not of interest in a given reaction) and in epidemiological models (only certain subpopulations are consequential). Thus, it is common to use greatly reduced or partial models in which only the interactions among the species of interest are known. In this work, we explore the use of an embedded, sparse, and data-driven discrepancy operator to augment these partial interaction models. Preliminary results show that the model error caused by severe reductions—e.g., elimination of hundreds of terms—can be captured with sparse operators, built with only a small fraction of that number. The operator is embedded within the differential equations of the model, which allows the action of the operator to be interpretable. Moreover, it is constrained by available physical information and calibrated over many scenarios. These qualities of the discrepancy model—interpretability, physical consistency, and robustness to different scenarios—are intended to support reliable predictions under extrapolative conditions. Full article
17 pages, 406 KiB  
Article
Monitoring Volatility Change for Time Series Based on Support Vector Regression
by Sangyeol Lee, Chang Kyeom Kim and Dongwuk Kim
Entropy 2020, 22(11), 1312; https://doi.org/10.3390/e22111312 - 17 Nov 2020
Cited by 11 | Viewed by 1895
Abstract
This paper considers monitoring an anomaly from sequentially observed time series with heteroscedastic conditional volatilities based on the cumulative sum (CUSUM) method combined with support vector regression (SVR). The proposed online monitoring process is designed to detect a significant change in volatility of financial time series. The tuning parameters are optimally chosen using particle swarm optimization (PSO). We conduct Monte Carlo simulation experiments to illustrate the validity of the proposed method. A real data analysis with the S&P 500 index, Korea Composite Stock Price Index (KOSPI), and the stock price of Microsoft Corporation is presented to demonstrate the versatility of our model. Full article
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)
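The sketch below combines the two ingredients named in the abstract, an SVR fit to a volatility proxy and a CUSUM over its residuals, on toy data; the proxy, window sizes, and threshold are illustrative choices, and the authors' procedure additionally tunes the SVR hyperparameters with PSO.

```python
# Toy SVR + CUSUM monitoring of a volatility change.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
returns = rng.normal(scale=0.01, size=1000)
returns[600:] *= 2.0                 # volatility doubles at t = 600
proxy = returns ** 2                 # squared returns as a crude volatility proxy

# One-step-ahead SVR on lagged proxies; first 500 rows are the training window.
lags = 5
X = np.column_stack([proxy[i:len(proxy) - lags + i] for i in range(lags)])
y = proxy[lags:]
model = SVR(kernel="rbf", C=1.0, epsilon=1e-5).fit(X[:500], y[:500])

residuals = y[500:] - model.predict(X[500:])
baseline = residuals[:50]                           # reference window
cusum = np.cumsum(residuals - baseline.mean())      # drift-corrected CUSUM
threshold = 30.0 * baseline.std()                   # illustrative threshold
alarms = np.flatnonzero(np.abs(cusum) > threshold)
print("first alarm index in monitoring period:", alarms[0] if alarms.size else None)
```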
12 pages, 1940 KiB  
Article
Giant Spin Current Rectification Due to the Interplay of Negative Differential Conductance and a Non-Uniform Magnetic Field
by Kang Hao Lee, Vinitha Balachandran, Ryan Tan, Chu Guo and Dario Poletti
Entropy 2020, 22(11), 1311; https://doi.org/10.3390/e22111311 - 17 Nov 2020
Cited by 7 | Viewed by 1640
Abstract
In XXZ chains with large enough interactions, spin transport can be significantly suppressed when the bias of the dissipative driving becomes large enough. This phenomenon of negative differential conductance is caused by the formation of two oppositely polarized ferromagnetic domains at the edges of the chain. Here, we show that this many-body effect, combined with a non-uniform magnetic field, can allow for a high degree of control of the spin current. In particular, by studying all of the possible shapes of local magnetic field potentials, we find that a configuration in which the magnetic field points up for half of the chain and down for the other half can result in giant spin-current rectification, for example, up to 10^8 for a system with only 8 spins. Our results show clear indications that the rectification can increase with the system size. Full article
(This article belongs to the Special Issue Dynamics of Many-Body Quantum Systems)
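For clarity, the rectification figure quoted above is usually defined as the ratio of the steady-state currents under forward and reversed driving (this definition is a common convention, assumed here rather than quoted from the paper):

```latex
% J_{\rightarrow}: steady-state spin current under the forward driving bias,
% J_{\leftarrow}: current when the bias is reversed.
\mathcal{R} \;=\; \left| \frac{J_{\rightarrow}}{J_{\leftarrow}} \right| ,
% so \mathcal{R} \sim 10^{8} means the reversed-bias current is suppressed by
% roughly eight orders of magnitude relative to the forward one.
```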
21 pages, 5379 KiB  
Article
How to Utilize My App Reviews? A Novel Topics Extraction Machine Learning Schema for Strategic Business Purposes
by Ioannis Triantafyllou, Ioannis C. Drivas and Georgios Giannakopoulos
Entropy 2020, 22(11), 1310; https://doi.org/10.3390/e22111310 - 17 Nov 2020
Cited by 5 | Viewed by 3134
Abstract
Acquiring knowledge about users’ opinions and what they say regarding specific features within an app constitutes a solid stepping stone for understanding their needs and concerns. App review utilization helps project management teams to identify threats and opportunities for app software maintenance, optimization and strategic marketing purposes. Nevertheless, app user review classification for identifying valuable gems of information for app software improvement is a complex and multidimensional issue. It requires foresight and multiple combinations of sophisticated text pre-processing, feature extraction and machine learning methods to efficiently classify app reviews into specific topics. Against this backdrop, we propose a novel feature engineering classification schema that is capable of identifying, more efficiently and earlier, the terms (words) within reviews that can be classified into specific topics. For this reason, we present a novel feature extraction method, DEVMAX.DF, combined with different machine learning algorithms to propose a solution to app review classification problems. One step further, a simulation of a real case scenario takes place to validate the effectiveness of the proposed classification schema on different apps. After multiple experiments, results indicate that the proposed schema outperforms other term extraction methods, such as TF.IDF and χ², in classifying app reviews into topics. To this end, the paper contributes to expanding the knowledge of researchers and practitioners, with the purpose of reinforcing their decision-making process within the realm of app review utilization. Full article
(This article belongs to the Special Issue Statistical Machine Learning for Multimodal Data Analysis)
19 pages, 941 KiB  
Article
Generating Artificial Reverberation via Genetic Algorithms for Real-Time Applications
by Edward Ly and Julián Villegas
Entropy 2020, 22(11), 1309; https://doi.org/10.3390/e22111309 - 17 Nov 2020
Cited by 2 | Viewed by 2828
Abstract
We introduce a Virtual Studio Technology (VST) 2 audio effect plugin that performs convolution reverb using synthetic Room Impulse Responses (RIRs) generated via a Genetic Algorithm (GA). The parameters of the plugin include some of those defined under the ISO 3382-1 standard (e.g., reverberation time, early decay time, and clarity), which are used to determine the fitness values of potential RIRs so that the user has some control over the shape of the resulting RIRs. In the GA, these RIRs are initially generated via a custom Gaussian noise method, and then evolve via truncation selection, random weighted average crossover, and mutation via Gaussian multiplication in order to produce RIRs that resemble real-world, recorded ones. Binaural Room Impulse Responses (BRIRs) can also be generated by assigning two different RIRs to the left and right stereo channels. With the proposed audio effect, new RIRs that represent virtual rooms, some of which may even be impossible to replicate in the physical world, can be generated and stored. Objective evaluation of the GA shows that contradictory combinations of parameter values will produce RIRs with low fitness. Additionally, through subjective evaluation, it was determined that RIRs generated by the GA were still perceptually distinguishable from similar real-world RIRs, but the perceptual differences were reduced when longer execution times were used for generating the RIRs or the unprocessed audio signals were comprised of only speech. Full article
21 pages, 1353 KiB  
Article
Kullback–Leibler Divergence of a Freely Cooling Granular Gas
by Alberto Megías and Andrés Santos
Entropy 2020, 22(11), 1308; https://doi.org/10.3390/e22111308 - 17 Nov 2020
Cited by 6 | Viewed by 2037
Abstract
Finding the proper entropy-like Lyapunov functional associated with the inelastic Boltzmann equation for an isolated freely cooling granular gas is a still unsolved challenge. The original H-theorem hypotheses do not fit here and the H-functional presents some additional measure problems that are solved by the Kullback–Leibler divergence (KLD) of a reference velocity distribution function from the actual distribution. The right choice of the reference distribution in the KLD is crucial for the latter to qualify or not as a Lyapunov functional, the asymptotic “homogeneous cooling state” (HCS) distribution being a potential candidate. Due to the lack of a formal proof far from the quasielastic limit, the aim of this work is to support this conjecture aided by molecular dynamics simulations of inelastic hard disks and spheres in a wide range of values for the coefficient of restitution (α) and for different initial conditions. Our results reject the Maxwellian distribution as a possible reference, whereas they reinforce the HCS one. Moreover, the KLD is used to measure the amount of information lost on using the former rather than the latter, revealing a non-monotonic dependence with α. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
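For reference, the functional under discussion is the standard Kullback–Leibler divergence evaluated on velocity distribution functions:

```latex
% KLD of a reference velocity distribution f_ref from the actual one f,
% the candidate Lyapunov functional discussed above:
D_{\mathrm{KL}}\bigl(f \,\|\, f_{\mathrm{ref}}\bigr)
 \;=\; \int f(\mathbf{v},t)\,
 \ln\!\frac{f(\mathbf{v},t)}{f_{\mathrm{ref}}(\mathbf{v})}\, d\mathbf{v}
 \;\ge\; 0 ,
% with equality iff f = f_ref; the conjecture supported by the simulations is
% that this quantity decays monotonically when f_ref is the (rescaled)
% homogeneous cooling state distribution.
```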
28 pages, 881 KiB  
Article
Estimation of the Reliability of a Stress–Strength System from Poisson Half Logistic Distribution
by Isyaku Muhammad, Xingang Wang, Changyou Li, Mingming Yan and Miaoxin Chang
Entropy 2020, 22(11), 1307; https://doi.org/10.3390/e22111307 - 17 Nov 2020
Cited by 12 | Viewed by 2087
Abstract
This paper discusses the estimation of the stress–strength reliability parameter R=P(Y<X) based on complete samples when the stress and strength are two independent Poisson half-logistic distributed (PHLD) random variables. We address the estimation of R in the general case and when the scale parameter is common. The classical and Bayesian estimation (BE) techniques for R are studied. The maximum likelihood estimator (MLE) and its asymptotic distribution are obtained; an approximate asymptotic confidence interval of R is computed using the asymptotic distribution. The non-parametric percentile bootstrap and Student’s bootstrap confidence intervals of R are discussed. The Bayes estimators of R are computed using a gamma prior and discussed under various loss functions, such as the squared error loss function (SEL), absolute error loss function (AEL), linear exponential error loss function (LINEX), generalized entropy error loss function (GEL) and maximum a posteriori (MAP). The Metropolis–Hastings algorithm is used to estimate the posterior distributions of the estimators of R. The highest posterior density (HPD) credible interval is constructed based on the SEL. Monte Carlo simulations are used to numerically analyze the performance of the MLE and Bayes estimators; the results were quite satisfactory in terms of their mean square error (MSE) and confidence intervals. Finally, we use two real data studies to demonstrate the performance of the proposed estimation techniques in practice and to illustrate how the PHLD is a good candidate in reliability studies. Full article
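For context, the quantity being estimated has a simple integral form for independent stress and strength, along with a direct Monte Carlo check; these are general facts, not specific to the PHLD model.

```latex
% For independent stress Y and strength X with densities f_Y, f_X and CDF F_Y:
R \;=\; P(Y < X) \;=\; \int_{0}^{\infty} F_Y(x)\, f_X(x)\, dx ,
% which can always be sanity-checked by simple Monte Carlo:
\widehat{R} \;=\; \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\{\, y_i < x_i \,\},
\qquad x_i \sim f_X,\ \ y_i \sim f_Y \ \text{independently}.
```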
17 pages, 11242 KiB  
Article
Noise Reduction in Spur Gear Systems
by Aurelio Liguori, Enrico Armentani, Alcide Bertocco, Andrea Formato, Arcangelo Pellegrino and Francesco Villecco
Entropy 2020, 22(11), 1306; https://doi.org/10.3390/e22111306 - 16 Nov 2020
Cited by 26 | Viewed by 3143
Abstract
This article lists some tips for reducing gear case noise. With this aim, a static analysis was carried out in order to describe how stresses resulting from meshing gears affect the acoustic emissions. Different parameters were taken into account, such as the friction, material, and lubrication, in order to validate ideas from the literature and to make several comparisons. Furthermore, a coupled Eulerian–Lagrangian (CEL) analysis was performed, which was an innovative way of evaluating the sound pressure level of the aforementioned gears. Different parameters were considered again, such as the friction, lubrication, material, and rotational speed, in order to make different research comparisons. The analytical results agreed with those in the literature, both for the static analysis and CEL analysis—for example, it was shown that changing the material from steel to ductile iron improved the gear noise, while increasing the rotational speed or the friction increased the acoustic emissions. Regarding the CEL analysis, air was considered a perfect gas, but its viscosity or another state equation could have also been taken into account. Therefore, the above allowed us to state that research into these scientific fields will bring about reliable results. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines II)
19 pages, 2293 KiB  
Article
Dissipative Structures, Organisms and Evolution
by Dilip K Kondepudi, Benjamin De Bari and James A. Dixon
Entropy 2020, 22(11), 1305; https://doi.org/10.3390/e22111305 - 16 Nov 2020
Cited by 23 | Viewed by 5422
Abstract
Self-organization in nonequilibrium systems has been known for over 50 years. Under nonequilibrium conditions, the state of a system can become unstable and a transition to an organized structure can occur. Such structures include oscillating chemical reactions and spatiotemporal patterns in chemical and other systems. Because entropy and free-energy dissipating irreversible processes generate and maintain these structures, these have been called dissipative structures. Our recent research revealed that some of these structures exhibit organism-like behavior, reinforcing the earlier expectation that the study of dissipative structures will provide insights into the nature of organisms and their origin. In this article, we summarize our study of organism-like behavior in electrically and chemically driven systems. The highly complex behavior of these systems shows the time evolution to states of higher entropy production. Using these systems as an example, we present some concepts that give us an understanding of biological organisms and their evolution. Full article
(This article belongs to the Special Issue Evolution and Thermodynamics)
17 pages, 354 KiB  
Article
Monitoring Parameter Change for Time Series Models of Counts Based on Minimum Density Power Divergence Estimator
by Sangyeol Lee and Dongwon Kim
Entropy 2020, 22(11), 1304; https://doi.org/10.3390/e22111304 - 16 Nov 2020
Cited by 6 | Viewed by 1909
Abstract
In this study, we consider an online monitoring procedure to detect a parameter change in integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models whose conditional density of present observations given past information follows a one-parameter exponential family distribution. For this purpose, we use the cumulative sum (CUSUM) of score functions deduced from the objective functions constructed for the minimum density power divergence estimator (MDPDE), which includes the maximum likelihood estimator (MLE), to diminish the influence of outliers. It is well known that, compared to the MLE, the MDPDE is robust against outliers with little loss of efficiency. This robustness property is properly inherited by the proposed monitoring procedure. A simulation study and real data analysis are conducted to affirm the validity of our method. Full article
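For reference, the divergence underlying the MDPDE is the density power divergence; the following is its standard form (Basu et al.), not a derivation specific to the INGARCH setting.

```latex
% Density power divergence between the data density g and the model density
% f_\theta, with tuning parameter \alpha > 0:
d_{\alpha}(g, f_\theta) \;=\; \int \Bigl\{ f_\theta^{1+\alpha}(z)
 - \Bigl(1 + \tfrac{1}{\alpha}\Bigr) g(z)\, f_\theta^{\alpha}(z)
 + \tfrac{1}{\alpha}\, g^{1+\alpha}(z) \Bigr\}\, dz ,
% which recovers the Kullback–Leibler divergence in the limit \alpha \to 0;
% the MDPDE minimizes the empirical version of d_\alpha, trading a little
% efficiency for robustness to outliers as \alpha grows.
```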
20 pages, 1704 KiB  
Article
Applying Text Analytics for Studying Research Trends in Dependability
by Miriam Louise Carnot, Jorge Bernardino, Nuno Laranjeiro and Hugo Gonçalo Oliveira
Entropy 2020, 22(11), 1303; https://doi.org/10.3390/e22111303 - 16 Nov 2020
Cited by 12 | Viewed by 2687
Abstract
The dependability of systems and networks has been the target of research for many years now. In the 1970s, what is now known as the top conference on dependability—the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)—emerged, gathering international researchers and sparking the interest of the scientific community. Although it started in niche systems, nowadays dependability is viewed as highly important in most computer systems. The goal of this work is to analyze the research published in the proceedings of well-established dependability conferences (i.e., DSN, the International Symposium on Software Reliability Engineering (ISSRE), the International Symposium on Reliable Distributed Systems (SRDS), the European Dependable Computing Conference (EDCC), the Latin-American Symposium on Dependable Computing (LADC), and the Pacific Rim International Symposium on Dependable Computing (PRDC)), using Natural Language Processing (NLP), namely the Latent Dirichlet Allocation (LDA) algorithm, to identify active, collapsing, ephemeral, and new lines of research in the dependability field. Results show a strong emphasis on terms like ‘security’, despite the general focus of the conferences on dependability, and new trends related to ‘machine learning’ and ‘blockchain’. We used the PRDC conference as a use case, which showed similarity with the overall set of conferences, although we also found specific terms, like ‘cyber-physical’, that are popular at PRDC but not in the overall dataset. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
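As a minimal illustration of the LDA step described above, the sketch below fits a two-topic model to a handful of placeholder documents with scikit-learn; the corpus, vocabulary handling, and number of topics in the actual study are of course different.

```python
# Minimal LDA topic-modeling sketch on placeholder documents.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "fault injection for dependable distributed systems",
    "security vulnerabilities in cyber-physical systems",
    "machine learning for software reliability prediction",
    "blockchain consensus under byzantine faults",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words per topic to label the emerging research lines.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```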