Entropy, Volume 17, Issue 11 (November 2015) – 31 articles, Pages 7298–7847

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Theoretical Search for RNA Folding Nuclei
by Leonid B. Pereyaslavets and Oxana V. Galzitskaya
Entropy 2015, 17(11), 7827-7847; https://doi.org/10.3390/e17117827 - 23 Nov 2015
Cited by 1 | Viewed by 4409
Abstract
The functions of RNA molecules are defined by their spatial structure, whose folding is regulated by numerous factors, making RNA very similar to proteins. Prediction of RNA folding nuclei gives the possibility to take a fresh look at the problems of the multiple folding pathways of RNA molecules and of RNA stability. The algorithm previously developed for prediction of protein folding nuclei has been successfully applied to ~150 RNA structures: hairpins, tRNAs, structures with pseudoknots, and the large structured P4-P6 domain of the Tetrahymena group I intron RNA. The calculated Φ-values for tRNA structures agree with the experimental data obtained earlier. According to the experiment, the nucleotides of the D and T hairpin loops are the last to be involved in the tRNA tertiary structure. Such agreement allowed us to make a prediction for an example of a large structured RNA, the P4-P6 RNA domain. One of the advantages of our method is that it allows us to make predictions about the folding nucleus for nontrivial RNA motifs: pseudoknots and tRNA. Full article
(This article belongs to the Special Issue Entropy and RNA Structure, Folding and Mechanics)

Article
Thermodynamics Analysis of Variable Viscosity Hydromagnetic Couette Flow in a Rotating System with Hall Effects
by Oluwole D. Makinde, Adetayo S. Eegunjobi and M. Samuel Tshehla
Entropy 2015, 17(11), 7811-7826; https://doi.org/10.3390/e17117811 - 20 Nov 2015
Cited by 21 | Viewed by 4579
Abstract
In this paper, we employed both the first and second laws of thermodynamics to analyze the flow and thermal decomposition in a variable viscosity Couette flow of a conducting fluid in a rotating system under the combined influence of a magnetic field and Hall current. The non-linear governing differential equations are obtained and solved numerically using a shooting method coupled with a fourth-order Runge–Kutta–Fehlberg integration technique. Numerical results obtained for the velocity and temperature profiles are utilized to determine the entropy generation rate, skin frictions, Nusselt number and Bejan number. By plotting graphs for various values of the thermophysical parameters, the features of the flow characteristics are analyzed in detail. It is found that fluid rotation increases the dominant effect of heat transfer irreversibility at the upper moving plate region, while entropy production is higher at the lower fixed plate region. Full article
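The numerical recipe named in this abstract (a shooting method wrapped around a Runge–Kutta–Fehlberg integrator) is standard and easy to sketch. Below is a minimal illustration on a classic nonlinear test BVP, y″ = 1.5y², y(0) = 4, y(1) = 1, whose exact solution y = 4/(1 + x)² gives y′(0) = −8; the paper's coupled flow equations are not reproduced here, and SciPy's RK45 (Dormand–Prince) stands in for Runge–Kutta–Fehlberg.

```python
# Minimal shooting-method sketch on a test BVP, not the paper's system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(x, z):
    # z = [y, y']; the BVP y'' = 1.5*y**2 as a first-order system
    return [z[1], 1.5 * z[0] ** 2]

def miss(slope):
    # integrate from x = 0 with a guessed initial slope y'(0) = slope
    sol = solve_ivp(rhs, (0.0, 1.0), [4.0, slope], method="RK45", rtol=1e-8)
    return sol.y[0, -1] - 1.0        # residual against the target y(1) = 1

slope = brentq(miss, -15.0, -1.0)    # root of the miss function
print(f"recovered y'(0) = {slope:.5f} (exact: -8)")
```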

Article
Word-Length Correlations and Memory in Large Texts: A Visibility Network Analysis
by Lev Guzmán-Vargas, Bibiana Obregón-Quintana, Daniel Aguilar-Velázquez, Ricardo Hernández-Pérez and Larry S. Liebovitch
Entropy 2015, 17(11), 7798-7810; https://doi.org/10.3390/e17117798 - 20 Nov 2015
Cited by 11 | Viewed by 6539
Abstract
We study the correlation properties of word lengths in large texts from 30 ebooks in the English language from the Gutenberg Project (www.gutenberg.org) using the natural visibility graph method (NVG). NVG converts a time series into a graph and then analyzes its graph properties. First, the original sequence of words is transformed into a sequence of values containing the length of each word, and then, it is integrated. Next, we apply the NVG to the integrated word-length series and construct the network. We show that the degree distribution of that network follows a power law, P(k) ∼ k^(−γ), with two regimes, which are characterized by the exponents γ_s ≈ 1.7 (at short degree scales) and γ_l ≈ 1.3 (at large degree scales). This suggests that word lengths are much more strongly correlated at large distances between words than at short distances between words. That finding is also supported by the detrended fluctuation analysis (DFA) and recurrence time distribution. These results provide new information about the universal characteristics of the structure of written texts beyond that given by word frequencies. Full article
(This article belongs to the Section Complexity)
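The visibility mapping used above is simple to state: two points of the series are linked when the straight line between them passes above every intermediate point. A minimal O(n²) sketch on a toy integrated word-length series (illustrative random data, not the Gutenberg corpus):

```python
# Natural visibility graph (NVG) of a 1-D series, naive O(n^2) version.
import numpy as np

def visibility_edges(y):
    n = len(y)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            # (i, j) are linked if every intermediate point lies strictly
            # below the straight line joining (i, y[i]) and (j, y[j])
            if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(0)
word_lengths = rng.integers(1, 12, size=200)             # toy word lengths
series = np.cumsum(word_lengths - word_lengths.mean())   # integrated series
edges = visibility_edges(series)
degrees = np.bincount(np.ravel(edges), minlength=len(series))
print(len(edges), "edges; max degree:", degrees.max())
```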

Article
The Second Law Today: Using Maximum-Minimum Entropy Generation
by Umberto Lucia and Giuseppe Grazzini
Entropy 2015, 17(11), 7786-7797; https://doi.org/10.3390/e17117786 - 20 Nov 2015
Cited by 9 | Viewed by 5473
Abstract
There are a great number of thermodynamic schools, independent of one another, without a powerful general approach, and split over non-equilibrium thermodynamics. In 1912, in relation to the stationary non-equilibrium states, Ehrenfest introduced the fundamental question on the existence of a functional that achieves its extreme value for stable states, as entropy does for the stationary states in equilibrium thermodynamics. Today, the new frontier branches of science and engineering, from power engineering to environmental sciences, from chaos to complex systems, from life sciences to nanosciences, etc., require a unified approach in order to optimize results and obtain a powerful approach to non-equilibrium thermodynamics and open systems. In this paper, a generalization of the Gouy–Stodola approach is suggested as a possible answer to the Ehrenfest question. Full article
Article
A Refined Multiscale Self-Entropy Approach for the Assessment of Cardiac Control Complexity: Application to Long QT Syndrome Type 1 Patients
by Vlasta Bari, Giulia Girardengo, Andrea Marchi, Beatrice De Maria, Paul A. Brink, Lia Crotti, Peter J. Schwartz and Alberto Porta
Entropy 2015, 17(11), 7768-7785; https://doi.org/10.3390/e17117768 - 19 Nov 2015
Cited by 4 | Viewed by 7296
Abstract
The study proposes the contemporaneous assessment of conditional entropy (CE) and self-entropy (sE), the two terms of the Shannon entropy (ShE) decomposition, as a function of the time scale via refined multiscale CE (RMSCE) and sE (RMSsE), with the aim of gaining insight into cardiac control in long QT syndrome type 1 (LQT1) patients featuring the KCNQ1-A341V mutation. CE was estimated via the corrected CE (CCE) and sE as the difference between the ShE and CCE. RMSCE and RMSsE were computed over the beat-to-beat series of heart period (HP) and QT interval derived from 24-hour Holter electrocardiographic recordings during daytime (DAY) and nighttime (NIGHT). LQT1 patients were subdivided into asymptomatic and symptomatic mutation carriers (AMCs and SMCs) according to the severity of symptoms and contrasted with non-mutation carriers (NMCs). We found that RMSCE and RMSsE carry non-redundant information, separate experimental conditions (i.e., DAY and NIGHT) within a given group and distinguish groups (i.e., NMC, AMC and SMC) given the experimental condition. Findings stress the importance of the joint evaluation of RMSCE and RMSsE over HP and QT variabilities to typify the state of the autonomic function and contribute to clarifying differences between AMCs and SMCs. Full article
(This article belongs to the Special Issue Multiscale Entropy and Its Applications in Medicine and Biology)
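As a hedged illustration of the "refined" multiscale step (assuming a Valencia-style refinement: low-pass filtering followed by decimation, instead of plain block averaging), the sketch below rescales a toy heart-period series; the CE/sE estimators would then be applied to each rescaled series:

```python
# Refined coarse-graining sketch; filter order and toy data are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def refined_coarse_grain(x, tau, order=6):
    if tau == 1:
        return np.asarray(x, dtype=float)
    b, a = butter(order, 0.5 / tau)     # cutoff at 0.5/tau of Nyquist
    return filtfilt(b, a, x)[::tau]     # zero-phase filter, then decimate

rng = np.random.default_rng(7)
hp = 800.0 + np.cumsum(rng.standard_normal(2000))   # toy HP series, ms
print([refined_coarse_grain(hp, tau).size for tau in (1, 2, 4, 8)])
```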

Article
Disentangling the Quantum World
by Huw Price and Ken Wharton
Entropy 2015, 17(11), 7752-7767; https://doi.org/10.3390/e17117752 - 16 Nov 2015
Cited by 42 | Viewed by 9511
Abstract
Correlations related to quantum entanglement have convinced many physicists that there must be some at-a-distance connection between separated events, at the quantum level. In the late 1940s, however, O. Costa de Beauregard proposed that such correlations can be explained without action at a distance, so long as the influence takes a zigzag path, via the intersecting past lightcones of the events in question. Costa de Beauregard’s proposal is related to what has come to be called the retrocausal loophole in Bell’s Theorem, but—like that loophole—it receives little attention, and remains poorly understood. Here we propose a new way to explain and motivate the idea. We exploit some simple symmetries to show how Costa de Beauregard’s zigzag needs to work, to explain the correlations at the core of Bell’s Theorem. As a bonus, the explanation shows how entanglement might be a much simpler matter than the orthodox view assumes—not a puzzling feature of quantum reality itself, but an entirely unpuzzling feature of our knowledge of reality, once zigzags are in play. Full article

Article
Payoffs and Coherence of a Quantum Two-Player Game in a Thermal Environment
by Jerzy Dajka, Marcin Łobejko and Jan Sładkowski
Entropy 2015, 17(11), 7736-7751; https://doi.org/10.3390/e17117736 - 13 Nov 2015
Cited by 5 | Viewed by 4050
Abstract
A two-player quantum game is considered in the presence of thermal decoherence, modeled in terms of a rigorous Davies approach. It is shown how energy dissipation and pure decoherence affect the payoffs of the players of the quantum version of the prisoner's dilemma. The impact of the thermal environment on the coherence of the game, as a quantum system, is also presented. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)

Article
From Lattice Boltzmann Method to Lattice Boltzmann Flux Solver
by Yan Wang, Liming Yang and Chang Shu
Entropy 2015, 17(11), 7713-7735; https://doi.org/10.3390/e17117713 - 13 Nov 2015
Cited by 44 | Viewed by 7550
Abstract
Based on the lattice Boltzmann method (LBM), the lattice Boltzmann flux solver (LBFS), which combines the advantages of conventional Navier–Stokes solvers and lattice Boltzmann solvers, was proposed recently. Specifically, LBFS applies the finite volume method to solve the macroscopic governing equations, which provide solutions for macroscopic flow variables at cell centers. In the meantime, numerical fluxes at each cell interface are evaluated by local reconstruction of the LBM solution. In other words, in LBFS, LBM is only locally applied at the cell interface for one streaming step. This is quite different from the conventional LBM, which is globally applied in the whole flow domain. This paper shows three different versions of LBFS, respectively for isothermal, thermal and compressible flows, and their relationships with the standard LBM. In particular, the performance of isothermal LBFS in terms of accuracy, efficiency and stability is investigated by comparing it with the standard LBM. The thermal LBFS is simplified by using the D2Q4 lattice velocity model, and its performance is examined by its application to simulate natural convection with high Rayleigh numbers. It is demonstrated that the compressible LBFS can be effectively used to simulate both inviscid and viscous flows by incorporating non-equilibrium effects into the process for inviscid flux reconstruction. Several numerical examples, including lid-driven cavity flow, natural convection in a square cavity at Rayleigh numbers of 10^7 and 10^8 and transonic flow around a staggered-biplane configuration, are tested on structured or unstructured grids to examine the performance of the three LBFS versions. Good agreement has been achieved with the published data, which validates the capability of LBFS in simulating a variety of flow problems. Full article
(This article belongs to the Special Issue Non-Linear Lattice)
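The one ingredient LBFS keeps from LBM, the local equilibrium distribution from which interface fluxes are reconstructed, is standard. A minimal D2Q9 sketch of that piece alone (textbook form, not the paper's full flux solver):

```python
# D2Q9 equilibrium populations and the moments used in flux reconstruction.
import numpy as np

e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # lattice weights

def f_eq(rho, u):
    """Equilibrium distribution for density rho and velocity u (c = 1)."""
    eu = e @ u
    return w * rho * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * (u @ u))

rho, u = 1.0, np.array([0.05, 0.02])
feq = f_eq(rho, u)
print(feq.sum())                                  # recovers rho
print(feq @ e)                                    # recovers rho * u
print(np.einsum("i,ia,ib->ab", feq, e, e))        # momentum-flux tensor
```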

Article
A Novel Method for PD Feature Extraction of Power Cable with Renyi Entropy
by Jikai Chen, Yanhui Dou, Zhenhao Wang and Guoqing Li
Entropy 2015, 17(11), 7698-7712; https://doi.org/10.3390/e17117698 - 13 Nov 2015
Cited by 11 | Viewed by 5361
Abstract
Partial discharge (PD) detection can effectively support the status-based maintenance of XLPE (cross-linked polyethylene) cable, so it is the direction of development for equipment maintenance in power systems. At present, a main method of PD detection is broadband electromagnetic coupling with a high-frequency current transformer (HFCT). Due to the strong electromagnetic interference (EMI) generated among the mass of cables in a tunnel and the impedance mismatch between the HFCT and the data acquisition equipment, the features of the pulse current generated by PD are often submerged in the background noise. Conventional methods for stationary signal analysis cannot analyze the PD signal, which is transient and non-stationary. Although the algorithm of Shannon wavelet singular entropy (SWSE) can be used to analyze the PD signal at some level, its precision and anti-interference capability for PD feature extraction are still insufficient. For the above problem, a novel method named Renyi wavelet packet singular entropy (RWPSE) is proposed and applied to PD feature extraction on power cables. Taking a three-level system as an example, we analyze the statistical properties of Renyi entropy and its intrinsic correlation with Shannon entropy under different values of α. At the same time, the discrete wavelet packet transform (DWPT) is taken instead of the discrete wavelet transform (DWT), and Renyi entropy is combined to construct the RWPSE algorithm. Taking the grounding current signal from the shielding layer of XLPE cable, which includes the current pulse feature of PD, as the research object, the effectiveness of the novel method is tested. The theoretical analysis and experimental results show that, compared to SWSE, RWPSE can not only improve the feature extraction accuracy for PD, but can also suppress EMI effectively. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory I)
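A hedged sketch of an RWPSE-style index, assuming the construction the name implies: wavelet packet decomposition, SVD of the coefficient matrix, then Renyi entropy of the normalized singular spectrum. The wavelet, level and α below are illustrative choices, not the paper's:

```python
# Renyi wavelet packet singular entropy, one plausible reading (assumed).
import numpy as np
import pywt

def rwpse(x, alpha=2.0, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    coeffs = np.array([node.data for node in wp.get_level(level, "natural")])
    s = np.linalg.svd(coeffs, compute_uv=False)
    p = s / s.sum()                    # normalized singular spectrum
    p = p[p > 0]
    if np.isclose(alpha, 1.0):         # alpha -> 1 recovers Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 50 * t)
noisy = clean + 0.5 * np.random.default_rng(1).standard_normal(t.size)
print(rwpse(clean), rwpse(noisy))      # noise raises the singular entropy
```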

Article
Using Expectation Maximization and Resource Overlap Techniques to Classify Species According to Their Niche Similarities in Mutualistic Networks
by Hugo Fort and Muhittin Mungan
Entropy 2015, 17(11), 7680-7697; https://doi.org/10.3390/e17117680 - 12 Nov 2015
Cited by 3 | Viewed by 4854
Abstract
Mutualistic networks in nature are widespread and play a key role in generating the diversity of life on Earth. They constitute an interdisciplinary field where physicists, biologists and computer scientists work together. Plant-pollinator mutualisms in particular form complex networks of interdependence between often hundreds of species. Understanding the architecture of these networks is of paramount importance for assessing the robustness of the corresponding communities to global change and management strategies. Advances in this problem are currently limited mainly due to the lack of methodological tools to deal with the intrinsic complexity of mutualisms, as well as the scarcity and incompleteness of available empirical data. One way to uncover the structure underlying complex networks is to employ information theoretical statistical inference methods, such as the expectation maximization (EM) algorithm. In particular, such an approach can be used to cluster the nodes of a network based on the similarity of their node neighborhoods. Here, we show how to connect network theory with the classical ecological niche theory for mutualistic plant-pollinator webs by using the EM algorithm. We apply EM to classify the nodes of an extensive collection of mutualistic plant-pollinator networks according to their connection similarity. We find that EM largely recovers the same clustering of the species as an alternative recently proposed method based on resource overlap, where one considers each party as a consumable resource for the other party (plants providing food to animals, while animals assist the reproduction of plants). Furthermore, using the EM algorithm, we can obtain a sequence of successively-refined classifications that enables us to identify the fine structure of the ecological network and better understand the niche distribution for both plants and animals. This is an example of how information theoretical methods help to systematize and unify work in ecology. Full article
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
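To illustrate the kind of inference described, the sketch below runs EM for a Bernoulli mixture over the rows of a binary plant-pollinator incidence matrix, grouping plants whose interaction neighborhoods look alike. This is generic mixture-model EM on planted toy data, not the authors' exact algorithm:

```python
# EM for a Bernoulli mixture over binary neighborhood vectors (illustrative).
import numpy as np

def em_bernoulli(X, k, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(k, d))   # per-cluster link probabilities
    for _ in range(n_iter):
        # E-step: responsibilities from Bernoulli log-likelihoods
        logp = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and cluster profiles
        nk = r.sum(axis=0)
        pi = nk / n
        mu = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return r.argmax(axis=1)

# two planted groups of plants visited by disjoint sets of pollinators
rng = np.random.default_rng(2)
X = np.vstack([rng.random((15, 12)) < [0.8] * 6 + [0.1] * 6,
               rng.random((15, 12)) < [0.1] * 6 + [0.8] * 6]).astype(float)
print(em_bernoulli(X, k=2))            # recovers the two planted clusters
```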

Article
Neighborhood Approximations for Non-Linear Voter Models
by Frank Schweitzer and Laxmidhar Behera
Entropy 2015, 17(11), 7658-7679; https://doi.org/10.3390/e17117658 - 10 Nov 2015
Cited by 7 | Viewed by 4254
Abstract
Non-linear voter models assume that the opinion of an agent depends on the opinions of its neighbors in a non-linear manner. This allows for voting rules different from majority voting. While the linear voter model is known to reach consensus, non-linear voter models can result in the coexistence of opposite opinions. Our aim is to derive approximations to correctly predict the time-dependent dynamics, or at least the asymptotic outcome, of such local interactions. Emphasis is on a probabilistic approach to decompose the opinion distribution in a second-order neighborhood into lower-order probability distributions. This is compared with an analytic pair approximation for the expected value of the global fraction of opinions and a mean-field approximation. Our reference case is averaged stochastic simulations of a one-dimensional cellular automaton. We find that the probabilistic second-order approach captures the dynamics of the reference case very well for different non-linearities, i.e., for both majority and minority voting rules, which only partly holds for the first-order pair approximation and not at all for the mean-field approximation. We further discuss the interesting phenomenon of a correlated coexistence, characterized by the formation of large domains of opinions that dominate for some time, but slowly change. Full article
(This article belongs to the Section Complexity)
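A minimal sketch of the kind of automaton being approximated, assuming a simple non-linear rule in which a cell adopts opinion 1 with probability f^q, where f is the local fraction of 1s and q tunes the non-linearity (q = 1 is the linear voter model; q > 1 favors majorities, q < 1 minorities). The paper's exact voting rules may differ:

```python
# One-dimensional non-linear voter automaton (illustrative rule).
import numpy as np

def step(state, q, rng):
    left, right = np.roll(state, 1), np.roll(state, -1)
    f = (left + state + right) / 3.0          # local fraction of opinion 1
    return (rng.random(state.size) < f ** q).astype(int)

rng = np.random.default_rng(3)
init = rng.integers(0, 2, size=400)
for q in (0.5, 1.0, 2.0):
    s = init.copy()
    for _ in range(2000):
        s = step(s, q, rng)
    print(f"q = {q}: final fraction of opinion 1 = {s.mean():.2f}")
```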

Article
Quantum Game Beats Classical Odds—Thermodynamics Implications
by George Levy
Entropy 2015, 17(11), 7645-7657; https://doi.org/10.3390/e17117645 - 09 Nov 2015
Cited by 5 | Viewed by 5443
Abstract
A quantum game is described making use of coins embodied as entangled Fermions in a potential energy well. It is shown that the odds are affected by the Pauli Exclusion Principle. They depend on the elevation in the energy well where the coins are selected, ranging from being a certainty of winning at the bottom of the well to being near classical at the top. These odds differ markedly from those in a classical game in which they are independent of elevation. The thermodynamics counterpart of the quantum game is discussed. It is shown that the temperature of a Maxwellian gas column in a potential energy gradient is independent of elevation. However, the temperature of a Fermion gas is shown to drop with elevation. The game and the gas column utilize the same components. When Fermions are used, a shifting of odds is produced in the game and a shifting of kinetic energy is produced in the thermodynamic experiment, leading to a spontaneous temperature gradient. Full article
(This article belongs to the Section Statistical Physics)

Article
A Memristor-Based Complex Lorenz System and Its Modified Projective Synchronization
by Shibing Wang, Xingyuan Wang and Yufei Zhou
Entropy 2015, 17(11), 7628-7644; https://doi.org/10.3390/e17117628 - 05 Nov 2015
Cited by 27 | Viewed by 7636
Abstract
The aim of this paper is to introduce and investigate a novel complex Lorenz system with a flux-controlled memristor, and to realize its synchronization. The system has an infinite number of stable and unstable equilibrium points, and can generate abundant dynamical behaviors with different parameters and initial conditions, such as limit cycle, torus, chaos, transient phenomena, etc., which are explored by means of time-domain waveforms, phase portraits, bifurcation diagrams, and Lyapunov exponents. Furthermore, an active controller is designed to achieve modified projective synchronization (MPS) of this system based on Lyapunov stability theory. The corresponding numerical simulations agree well with the theoretical analysis, and demonstrate that the response system is asymptotically synchronized with the drive system within a short time. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)

Article
Multi-Scale Entropy Analysis of Body Sway for Investigating Balance Ability During Exergame Play Under Different Parameter Settings
by Chia-Hsuan Lee and Tien-Lung Sun
Entropy 2015, 17(11), 7608-7627; https://doi.org/10.3390/e17117608 - 04 Nov 2015
Cited by 3 | Viewed by 5827
Abstract
The goal of this study was to investigate the parameters affecting exergame performance using multi-scale entropy analysis, with the aim of informing the design of exergames for personalized balance training. Test subjects’ center of pressure (COP) displacement data were recorded during exergame play to examine their balance ability at varying difficulty levels of a balance-based exergame; the results of a multi-scale entropy-based analysis were then compared to traditional COP indicators. For games involving static posture frames, variation in posture frame travel time was found to significantly affect the complexity of both the anterior-posterior (MSE-AP) and medio-lateral (MSE-ML) components of balancing movements. However, in games involving dynamic posture frames, only MSE-AP was found to be sensitive to the variation of parameters, namely foot-lifting speed. Findings were comparable to the COP data published by Sun et al., indicating that the use of complexity data is a feasible means of distinguishing between different parameter sets and of understanding how human design considerations must be taken into account in exergame development. Not only can this method be used as another assessment index in the future, it can also be used in the optimization of parameters within the virtual environments of exergames. Full article
(This article belongs to the Special Issue Multiscale Entropy and Its Applications in Medicine and Biology)
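For reference, the coarse-graining step behind a multi-scale entropy curve is one line: average the COP series over non-overlapping windows of length τ, then estimate an entropy (e.g., sample entropy) on each rescaled series. A sketch on toy data; only the coarse-graining is shown:

```python
# Coarse-graining for multi-scale entropy (entropy step omitted).
import numpy as np

def coarse_grain(x, tau):
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(4)
cop_ap = np.cumsum(rng.standard_normal(3000))   # toy AP displacement series
for tau in (1, 2, 5, 10):
    y = coarse_grain(cop_ap, tau)
    print(tau, y.size)    # sample entropy would be computed on each y
```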

Article
Single to Two Cluster State Transition of Primary Motor Cortex 4-posterior (MI-4p) Activities in Humans
by Kazunori Nakada, Kiyotaka Suzuki and Tsutomu Nakada
Entropy 2015, 17(11), 7596-7607; https://doi.org/10.3390/e17117596 - 03 Nov 2015
Cited by 1 | Viewed by 4402
Abstract
The human primary motor cortex has dual representation of the digits, namely, area 4 anterior (MI-4a) and area 4 posterior (MI-4p). We have previously demonstrated that activation of these two functional subunits can be identified independently by functional magnetic resonance imaging (fMRI) using independent component-cross correlation-sequential epoch (ICS) analysis. Subsequent studies in patients with hemiparesis due to subcortical lesions and monoparesis due to peripheral nerve injury demonstrated that MI-4p represents the initiation area of activation, whereas MI-4a is the secondarily activated motor cortex requiring a “long-loop” feedback input from secondary motor systems, likely the cerebellum. A dynamic model of hand motion based on the limit cycle oscillator predicts that a specific pattern of entrainment of neural firing may occur when appropriate periodic stimuli are applied. Under normal conditions, such entrainment introduces a single phase-cluster. Under pathological conditions where entrainment stimuli have insufficient strength, the phase cluster splits into two clusters. Observable physiological phenomena of this shift from a single cluster to two clusters are a doubling of the firing rate of output neurons, or a decay in the group firing density of the system due to dampening of odd harmonic components. While the former is not testable in humans, the latter can be tested by appropriately designed fMRI experiments, the quantitative index of which is believed to reflect the group behavior of functionally localized neurons, e.g., firing density in the dynamic theory. Accordingly, we performed dynamic analysis of MI-4p activation in normal volunteers and paretic patients. The results clearly indicated that MI-4p exhibits a transition from a single to a two phase-cluster state, which coincided with loss of MI-4a activation. The study demonstrated that motor dysfunction (hemiparesis) in patients with a subcortical infarct is not simply due to afferent fiber disruption. Maintaining proper afferent signals from MI-4p requires proper functionality of MI-4a and, hence, appropriate feedback signals from the secondary motor system. Full article
(This article belongs to the Special Issue Entropy in Human Brain Networks)

Article
Choice Overload and Height Ranking of Menus in Partially-Ordered Sets
by Marcello Basili and Stefano Vannucci
Entropy 2015, 17(11), 7584-7595; https://doi.org/10.3390/e17117584 - 30 Oct 2015
Cited by 3 | Viewed by 4462
Abstract
When agents face incomplete information and their knowledge about the objects of choice is vague and imprecise, they tend to consider fewer choices and to process a smaller portion of the available information regarding their choices. This phenomenon is well known as choice overload and is strictly related to the existence of a considerable number of option pairs that are not easily comparable. Thus, we use a finite partially-ordered set (poset) to model the subset of easily-comparable pairs within a set of options/items. The height ranking, a new ranking rule for menus (i.e., subposets of a finite poset), is then introduced and characterized. The height ranking rule ranks subsets of options in terms of the size of the longest chain that they include and is therefore meant to assess menus of available options in terms of the maximum number of distinct and easily-comparable alternative options that they offer. Full article
(This article belongs to the Special Issue Entropy, Utility, and Logical Reasoning)
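The rule itself is easy to make concrete: a menu is scored by the longest chain of mutually comparable options it contains. A sketch using divisibility on the integers as an illustrative partial order:

```python
# Height of a menu = size of its longest chain under a partial order.
from functools import lru_cache

def height(menu, leq):
    items = tuple(menu)
    @lru_cache(maxsize=None)
    def chain_ending_at(x):
        below = [y for y in items if y != x and leq(y, x)]
        return 1 + max((chain_ending_at(y) for y in below), default=0)
    return max(chain_ending_at(x) for x in items)

divides = lambda a, b: b % a == 0           # partial order: a divides b
print(height({2, 3, 5, 7}, divides))        # antichain -> height 1
print(height({2, 4, 8, 3}, divides))        # chain 2 | 4 | 8 -> height 3
```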
Article
Minimum Dissipation Principle in Nonlinear Transport
by Giorgio Sonnino, Jarah Evslin and Alberto Sonnino
Entropy 2015, 17(11), 7567-7583; https://doi.org/10.3390/e17117567 - 30 Oct 2015
Cited by 4 | Viewed by 4555
Abstract
We extend Onsager’s minimum dissipation principle to stationary states that are only subject to local equilibrium constraints, even when the transport coefficients depend on the thermodynamic forces. Crucial to this generalization is a decomposition of the thermodynamic forces into those that are held fixed by the boundary conditions and the subspace that is orthogonal with respect to the metric defined by the transport coefficients. We are then able to apply Onsager and Machlup’s proof to the second set of forces. As an example, we consider two-dimensional nonlinear diffusion coupled to two reservoirs at different temperatures. Our extension differs from that of Bertini et al. in that we assume microscopic irreversibility, and we allow a nonlinear dependence of the fluxes on the forces. Full article

Article
Entropy Generation of Desalination Powered by Variable Temperature Waste Heat
by David M. Warsinger, Karan H. Mistry, Kishor G. Nayar, Hyung Won Chung and John H. Lienhard V
Entropy 2015, 17(11), 7530-7566; https://doi.org/10.3390/e17117530 - 30 Oct 2015
Cited by 77 | Viewed by 16515
Abstract
Powering desalination by waste heat is often proposed to mitigate energy consumption and environmental impact; however, thorough technology comparisons are lacking in the literature. This work numerically models the efficiency of six representative desalination technologies powered by waste heat at 50, 70, 90, and 120 °C, where applicable. Entropy generation and Second Law efficiency analysis are applied for the systems and their components. The technologies considered are thermal desalination by multistage flash (MSF), multiple effect distillation (MED), multistage vacuum membrane distillation (MSVMD), humidification-dehumidification (HDH), and organic Rankine cycles (ORCs) paired with mechanical technologies of reverse osmosis (RO) and mechanical vapor compression (MVC). The most efficient technology was RO, followed by MED. Performances among MSF, MSVMD, and MVC were similar but the relative performance varied with waste heat temperature or system size. Entropy generation in thermal technologies increases at lower waste heat temperatures largely in the feed or brine portions of the various heat exchangers used. This occurs largely because lower temperatures reduce recovery, increasing the relative flow rates of feed and brine. However, HDH (without extractions) had the reverse trend, only being competitive at lower temperatures. For the mechanical technologies, the energy efficiency only varies with temperature because of the significant losses from the ORC. Full article
(This article belongs to the Section Thermodynamics)

Article
A Truncation Scheme for the BBGKY2 Equation
by Gregor Chliamovitch, Orestis Malaspinas and Bastien Chopard
Entropy 2015, 17(11), 7522-7529; https://doi.org/10.3390/e17117522 - 30 Oct 2015
Cited by 4 | Viewed by 4460
Abstract
In recent years, the maximum entropy principle has been applied to a wide range of different fields, often successfully. While these works are usually focussed on cross-disciplinary applications, the point of this letter is instead to reconsider a fundamental point of kinetic theory. Namely, we shall re-examine the Stosszahlansatz leading to the irreversible Boltzmann equation in the light of the MaxEnt principle. We assert that this way of thinking allows us to move one step further than the factorization hypothesis and provides a coherent—though implicit—closure scheme for the two-particle distribution function. Such higher-order dependences are believed to open the way to a deeper understanding of fluctuating phenomena. Full article
(This article belongs to the Special Issue Non-Linear Lattice)
Article
Analytical Solutions of the Black–Scholes Pricing Model for European Option Valuation via a Projected Differential Transformation Method
by Sunday O. Edeki, Olabisi O. Ugbebor and Enahoro A. Owoloko
Entropy 2015, 17(11), 7510-7521; https://doi.org/10.3390/e17117510 - 30 Oct 2015
Cited by 49 | Viewed by 6536
Abstract
In this paper, a proposed computational method referred to as the Projected Differential Transformation Method (PDTM), resulting from a modification of the classical Differential Transformation Method (DTM), is applied, for the first time, to the Black–Scholes equation for European option valuation. The results obtained converge faster to their associated exact solutions; these easily computed results represent the analytical values of the associated European call options, and the same algorithm can be followed for European put options. It is shown that PDTM is more efficient and reliable than the classical DTM and other semi-analytical methods, since less computational work is involved. Hence, it is strongly recommended for both linear and nonlinear stochastic differential equations (SDEs) encountered in financial mathematics. Full article
(This article belongs to the Special Issue Dynamical Equations and Causal Structures from Observations)
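For orientation, the classical DTM that PDTM modifies turns a differential equation into a recurrence on Taylor coefficients Y(k) and sums the series back. A toy sketch on y′ = y, y(0) = 1, whose transform is (k + 1)Y(k + 1) = Y(k); the paper's Black–Scholes treatment is more involved:

```python
# Classical DTM on a toy ODE (illustrative, not the paper's PDTM).
import math

N = 15
Y = [0.0] * (N + 1)
Y[0] = 1.0                        # initial condition y(0) = 1
for k in range(N):
    Y[k + 1] = Y[k] / (k + 1)     # DTM recurrence for y' = y

x = 0.5
approx = sum(Y[k] * x**k for k in range(N + 1))   # inverse transform
print(approx, math.exp(x))        # matches the exact solution e^x
```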

Article
Characterization of Complex Fractionated Atrial Electrograms by Sample Entropy: An International Multi-Center Study
by Eva Cirugeda–Roldán, Daniel Novak, Vaclav Kremen, David Cuesta–Frau, Matthias Keller, Armin Luik and Martina Srutova
Entropy 2015, 17(11), 7493-7509; https://doi.org/10.3390/e17117493 - 28 Oct 2015
Cited by 16 | Viewed by 6709
Abstract
Atrial fibrillation (AF) is the most commonly clinically-encountered arrhythmia. Catheter ablation of AF is mainly based on trigger elimination and modification of the AF substrate. Substrate mapping ablation of complex fractionated atrial electrograms (CFAEs) has emerged to be a promising technique. To improve substrate mapping based on CFAE analysis, automatic detection algorithms need to be developed in order to simplify and accelerate the ablation procedures. According to the latest studies, the level of fractionation has been shown to be promisingly well estimated from CFAE measured during radio frequency (RF) ablation of AF. The nature of CFAE is generally nonlinear and nonstationary, so the use of complexity measures is considered to be the appropriate technique for the analysis of AF records. This work proposes the use of sample entropy (SampEn), not only as a way to discern between non-fractionated and fractionated atrial electrograms (A-EGM), but also as a tool for characterizing the degree of A-EGM regularity, which is linked to changes in the AF substrate and to heart tissue damage. The use of SampEn combined with a blind parameter estimation optimization process enables the classification between CFAE and non-CFAE with statistical significance (p < 0.001), 0.89 area under the ROC, 86% specificity and 77% sensitivity over a mixed database of A-EGM combined from two independent CFAE signal databases, recorded during RF ablation of AF in two EU countries (542 signals in total). On the basis of the results obtained in this study, it can be suggested that the use of SampEn is suitable for real-time support during navigation of RF ablation of AF, as only 1.5 seconds of signal segments need to be analyzed. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
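Sample entropy as conventionally defined (Richman and Moorman) is compact enough to sketch: the negative log of the conditional probability that templates matching for m points within tolerance r (Chebyshev distance, self-matches excluded) also match for m + 1 points. The parameters below are illustrative, not the study's optimized values:

```python
# Plain SampEn(m, r) on toy signals (parameters are illustrative).
import numpy as np

def sampen(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        temps = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(temps[:, None] - temps[None, :]), axis=2)
        return (d <= tol).sum() - len(temps)   # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(5)
regular = np.sin(np.linspace(0, 40 * np.pi, 800))
irregular = rng.standard_normal(800)
print(sampen(regular), sampen(irregular))  # irregular signals score higher
```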

Article
Information Theoretic Measures to Infer Feedback Dynamics in Coupled Logistic Networks
by Allison Goodwell and Praveen Kumar
Entropy 2015, 17(11), 7468-7492; https://doi.org/10.3390/e17117468 - 28 Oct 2015
Cited by 14 | Viewed by 5058
Abstract
A process network is a collection of interacting time series nodes, in which interactions can range from weak dependencies to complete synchronization. Between these extremes, nodes may respond to each other or external forcing at certain time scales and strengths. Identification of such dependencies from time series can reveal the complex behavior of the system as a whole. Since observed time series datasets are often limited in length, robust measures are needed to quantify strengths and time scales of interactions and their unique contributions to the whole system behavior. We generate coupled chaotic logistic networks with a range of connectivity structures, time scales, noise, and forcing mechanisms, and compute variance and lagged mutual information measures to evaluate how detected time dependencies reveal system behavior. When a target node is detected to receive information from multiple sources, we compute conditional mutual information and total shared information between each source node pair to identify unique or redundant sources. While variance measures capture synchronization trends, combinations of information measures provide further distinctions regarding drivers, redundancies, and time dependencies within the network. We find that imposed network connectivity often leads to induced feedback that is identified as redundant links, and cannot be distinguished from imposed causal linkages. We find that random or external driving nodes are more likely to provide unique information than mutually dependent nodes in a highly connected network. In process networks constructed from observed data, the methods presented can be used to infer connectivity, dominant interactions, and systemic behavioral shift. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
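A minimal sketch of the building blocks: two coupled chaotic logistic maps and a histogram-based lagged mutual information estimate. Coupling strength, series length and bin count are illustrative:

```python
# Lagged mutual information between coupled logistic maps (illustrative).
import numpy as np

def logistic(z):
    return 4.0 * z * (1.0 - z)

def mutual_info(a, b, bins=16):
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(6)
eps = 0.4                          # one-way coupling from x into y
x, y = rng.random(2)
xs, ys = [], []
for _ in range(5000):
    x, y = logistic(x), (1 - eps) * logistic(y) + eps * logistic(x)
    xs.append(x); ys.append(y)
xs, ys = np.array(xs), np.array(ys)
for lag in range(4):               # MI peaks at the coupling delay
    print(lag, mutual_info(xs[:len(xs) - lag], ys[lag:]))
```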

Review
Estimating a Repeatable Statistical Law by Requiring Its Stability During Observation
by B. Roy Frieden
Entropy 2015, 17(11), 7453-7467; https://doi.org/10.3390/e17117453 - 28 Oct 2015
Viewed by 3897
Abstract
Consider a statistically-repeatable, shift-invariant system obeying an unknown probability law p(x) ≡ q²(x). Amplitude q(x) defines a source effect that is to be found. We show that q(x) may be found by considering the flow of Fisher information J → I from source effect to observer that occurs during macroscopic observation of the system. Such an observation is irreversible and, hence, incurs a general loss I − J of the information. By requiring stability of the law q(x), as well, it is found to obey a principle I − J = min. of “extreme physical information” (EPI). Information I is the same functional of q(x) for any shift-invariant system, and J is a functional defining a physical source effect that must be known at least approximately. The minimum of EPI implies that I ≈ J, or received information tends to well-approximate reality. Past applications of EPI to predicting laws of statistical physics, chemistry, biology, economics and social organization are briefly described. Full article
(This article belongs to the Special Issue Applications of Fisher Information in Sciences)
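For context, the information functional referred to here is the standard shift-invariant Fisher information, which for p(x) = q²(x) reads

```latex
I \;=\; \int \mathrm{d}x \,\frac{[p'(x)]^2}{p(x)} \;=\; 4 \int \mathrm{d}x \,[q'(x)]^2 ,
```

which is why I takes the same form in q(x) for any shift-invariant system.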
Review
Distribution Function of the Atoms of Spacetime and the Nature of Gravity
by Thanu Padmanabhan
Entropy 2015, 17(11), 7420-7452; https://doi.org/10.3390/e17117420 - 28 Oct 2015
Cited by 37 | Viewed by 4234
Abstract
The fact that the equations of motion for matter remain invariant when a constant is added to the Lagrangian suggests postulating that the field equations of gravity should also respect this symmetry. This principle implies that: (1) the metric cannot be varied in any extremum principle to obtain the field equations; and (2) the stress-tensor of matter should appear in the variational principle through the combination T_ab n^a n^b, where n^a is an auxiliary null vector field, which could be varied to get the field equations. This procedure uniquely selects the Lanczos–Lovelock models of gravity in D dimensions and Einstein’s theory in D = 4. Identifying n^a with the normals to the null surfaces in the spacetime in the macroscopic limit leads to a thermodynamic interpretation for gravity. Several geometrical variables and the equation describing the spacetime evolution acquire a thermodynamic interpretation. Extending these ideas one level deeper, we can obtain this variational principle from a distribution function for the “atoms of spacetime”, which counts the number of microscopic degrees of freedom of the geometry. This is based on the curious fact that the renormalized spacetime endows each event with zero volume, but finite area! Full article
(This article belongs to the Special Issue Entropy in Quantum Gravity and Quantum Cosmology)
Article
Extension of the Improved Bounce-Back Scheme for Electrokinetic Flow in the Lattice Boltzmann Method
by Qing Chen, Hongping Zhou, Xuesong Jiang, Linyun Xu, Qing Li and Yu Ru
Entropy 2015, 17(11), 7406-7419; https://doi.org/10.3390/e17117406 - 28 Oct 2015
Cited by 7 | Viewed by 5467
Abstract
In this paper, an improved bounce-back boundary treatment for fluid systems in the lattice Boltzmann method [Yin, X.; Zhang, J. J. Comput. Phys. 2012, 231, 4295–4303] is extended to handle electrokinetic flows with complex boundary shapes and conditions. Several numerical simulations are performed to validate the electric boundary treatment. Simulations are presented to demonstrate the accuracy and capability of this method in dealing with complex surface potential situations, and the simulated results are compared with analytical predictions, with excellent agreement. This method could be useful for electrokinetic simulations with complex boundaries, and can also be readily extended to other phenomena and processes. Full article
(This article belongs to the Special Issue Non-Linear Lattice)
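For orientation, the baseline that the improved scheme refines is the plain bounce-back rule: populations that stream into a solid node are sent back along the reversed direction, which enforces no-slip roughly halfway between fluid and solid nodes. A minimal D2Q9 sketch of that standard rule (not the paper's improved electrokinetic treatment):

```python
# Streaming step with plain bounce-back at solid nodes (baseline rule).
import numpy as np

e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 directions
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])          # opposite directions

def stream_with_bounce_back(f, solid):
    """f: (9, ny, nx) populations; solid: (ny, nx) boolean wall mask."""
    fs = np.empty_like(f)
    for i, (ex, ey) in enumerate(e):
        fs[i] = np.roll(np.roll(f[i], ey, axis=0), ex, axis=1)  # streaming
    fs[:, solid] = fs[opp][:, solid]   # reverse populations at solid nodes
    return fs

ny, nx = 32, 64
f = np.ones((9, ny, nx)) / 9.0
solid = np.zeros((ny, nx), dtype=bool)
solid[0, :] = solid[-1, :] = True      # top and bottom walls
f = stream_with_bounce_back(f, solid)
print(f.shape, f.sum())                # mass is conserved by the reflection
```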

Article
Performance of a Composite Thermoelectric Generator with Different Arrangements of SiGe, BiTe and PbTe under Different Configurations
by Alexander Vargas-Almeida, Miguel Angel Olivares-Robles and Federico Méndez Lavielle
Entropy 2015, 17(11), 7387-7405; https://doi.org/10.3390/e17117387 - 28 Oct 2015
Cited by 2 | Viewed by 4688
Abstract
In this study, we analyze the role of the thermoelectric (TE) properties, namely the Seebeck coefficient α, thermal conductivity κ and electrical resistivity ρ, of three different materials in a composite thermoelectric generator (CTEG) under different configurations. The CTEG is composed of three thermoelectric modules (TEMs): (1) two TEMs thermally and electrically connected in series (SC); (2) two branches of TEMs thermally and electrically connected in parallel (PSC); and (3) three TEMs thermally and electrically connected in parallel (TEP). In general, each of the TEMs has different thermoelectric parameters, namely a Seebeck coefficient α, a thermal conductance K and an electrical resistance R. Following the framework proposed recently, we show the effect of: (1) the configuration; and (2) the arrangements of TE materials on the corresponding equivalent figure of merit Z_eq and, consequently, on the maximum power P_max and efficiency η of the CTEG. Firstly, we consider that the whole system is formed of the same thermoelectric material, (α_1, K_1, R_1) = (α_2, K_2, R_2) = (α_3, K_3, R_3), and, secondly, that the whole system is constituted by only two different thermoelectric materials, (α_i, K_i, R_i) ≠ (α_j, K_j, R_j) ≠ (α_l, K_l, R_l), where i, j, l can be 1, 2 or 3. In this work, we propose arrangements of TEMs which clearly have the advantage of a higher thermoelectric figure of merit value compared to a conventional thermoelectric module. A corollary about the maximum Z_eq for the CTEG is obtained as a result of these considerations. We suggest an optimum configuration. Full article
(This article belongs to the Section Thermodynamics)

Article
The Measurement Problem from the Perspective of an Information-Theoretic Interpretation of Quantum Mechanics
by Jeffrey Bub
Entropy 2015, 17(11), 7374-7386; https://doi.org/10.3390/e17117374 - 28 Oct 2015
Cited by 5 | Viewed by 6025
Abstract
The aim of this paper is to consider the consequences of an information-theoretic interpretation of quantum mechanics for the measurement problem. The motivating idea of the interpretation is that the relation between quantum mechanics and the structure of information is analogous to the relation between special relativity and the structure of space-time. Insofar as quantum mechanics deals with a class of probabilistic correlations that includes correlations structurally different from classical correlations, the theory is about the structure of information: the possibilities for representing, manipulating, and communicating information in a genuinely indeterministic quantum world in which measurement outcomes are intrinsically random are different than we thought. Part of the measurement problem is deflated as a pseudo-problem on this view, and the theory has the resources to deal with the remaining part, given certain idealizations in the treatment of macrosystems. Full article
(This article belongs to the Special Issue Information: Meanings and Interpretations)
Article
Quantum Information as a Non-Kolmogorovian Generalization of Shannon’s Theory
by Federico Holik, Gustavo M. Bosyk and Guido Bellomo
Entropy 2015, 17(11), 7349-7373; https://doi.org/10.3390/e17117349 - 28 Oct 2015
Cited by 19 | Viewed by 5036
Abstract
In this article, we discuss the formal structure of a generalized information theory based on the extension of the probability calculus of Kolmogorov to a (possibly) non-commutative setting. By studying this framework, we argue that quantum information can be considered as a particular case of a huge family of non-commutative extensions of its classical counterpart. In any conceivable information theory, the possibility of dealing with different kinds of information measures plays a key role. Here, we generalize a notion of state spectrum, allowing us to introduce a majorization relation and a new family of generalized entropic measures. Full article
(This article belongs to the Special Issue Information: Meanings and Interpretations)

Article
Comparison Based on Exergetic Analyses of Two Hot Air Engines: A Gamma Type Stirling Engine and an Open Joule Cycle Ericsson Engine
by Houda Hachem, Marie Creyx, Ramla Gheith, Eric Delacourt, Céline Morin, Fethi Aloui and Sassi Ben Nasrallah
Entropy 2015, 17(11), 7331-7348; https://doi.org/10.3390/e17117331 - 28 Oct 2015
Cited by 9 | Viewed by 10287
Abstract
In this paper, a comparison of exergetic models of two hot air engines (a Gamma-type Stirling prototype with a maximum output mechanical power of 500 W and an Ericsson hot air engine with a maximum power of 300 W) is made. Referring to previous energetic analyses, exergetic models are set up in order to quantify the exergy destruction and efficiencies in each type of engine. The repartition of the exergy fluxes in each part of the two engines is determined and represented in Sankey diagrams, using dimensionless exergy fluxes. The results show a similar proportion in both engines of destroyed exergy compared to the exergy flux from the hot source. The compression cylinders generate the highest exergy destruction, whereas the expansion cylinders generate the lowest. The regenerator of the Stirling engine increases the exergy resource at the inlet of the expansion cylinder; a similar effect might also be achieved in the Ericsson engine by using a preheater between the exhaust air and the compressed air transferred to the hot heat exchanger. Full article
(This article belongs to the Special Issue Entropy Generation in Thermal Systems and Processes 2015)
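The bookkeeping behind such Sankey diagrams is compact: heat Q supplied at temperature T carries exergy Q(1 − T₀/T), and each irreversibility destroys T₀·S_gen of it. A toy second-law efficiency estimate with assumed numbers (only the 300 W rating is taken from the abstract):

```python
# Dimensionless exergy bookkeeping; T0, Q and T_hot are assumed values.
T0 = 298.15                       # dead-state temperature, K (assumed)
Q, T_hot = 1500.0, 900.0          # heat input (W), source temperature (K), assumed
ex_in = Q * (1.0 - T0 / T_hot)    # exergy flux carried by the heat input
W = 300.0                         # mechanical output, W (Ericsson rating)
print(f"second-law efficiency ~ {W / ex_in:.2f}")
```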

Article
Measurement, Interpretation and Information
by Olimpia Lombardi, Sebastian Fortin and Cristian López
Entropy 2015, 17(11), 7310-7330; https://doi.org/10.3390/e17117310 - 28 Oct 2015
Cited by 6 | Viewed by 4709
Abstract
During many years since the birth of quantum mechanics, instrumentalist interpretations prevailed: the meaning of the theory was expressed in terms of measurement results. However, in the last decades, several attempts to interpret it from a realist viewpoint have been proposed. Among them, modal interpretations supply a realist non-collapse account, according to which the system always has definite properties and the quantum state represents possibilities, not actualities. But the traditional modal interpretations faced some conceptual problems when addressing imperfect measurements. The modal-Hamiltonian interpretation, on the contrary, proved to be able to supply an adequate account of the measurement problem, both in its ideal and its non-ideal versions. Moreover, in the non-ideal case, it gives a precise criterion to distinguish between reliable and non-reliable measurements. Nevertheless, that criterion depends on the particular state of the measured system, and this might be considered as a shortcoming of the proposal. In fact, one could ask for a criterion of reliability that does not depend on the features of what is measured but only on the properties of the measurement device. The aim of this article is precisely to supply such a criterion: we will adopt an informational perspective for this purpose. Full article
(This article belongs to the Special Issue Information: Meanings and Interpretations)
