Entropy, Volume 21, Issue 1 (January 2019) – 99 articles

Cover Story: A key aspect of the brain activity underlying consciousness and cognition appears to be complex dynamics—meaning that brain activity shows a balance between integration and segregation. There have now been many attempts to build useful measures of complexity, most notably within Integrated Information Theory. In this paper, we describe, using a uniform notation, six different candidate measures of integrated information, then we set out the intuitions behind each, and explore how they behave in a variety of network models. The most striking finding is that the measures all behave very differently: no two measures show consistent agreement across all our analyses. By rigorously comparing these measures, we are able to identify those that better reflect the underlying intuitions of integrated information, which we believe will be of help as these measures continue to be developed and refined.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
10 pages, 281 KiB  
Article
Simple Stopping Criteria for Information Theoretic Feature Selection
by Shujian Yu and José C. Príncipe
Entropy 2019, 21(1), 99; https://doi.org/10.3390/e21010099 - 21 Jan 2019
Cited by 8 | Viewed by 4359
Abstract
Feature selection aims to select the smallest feature subset that yields the minimum generalization error. In the rich feature selection literature, information theory-based approaches seek a subset of features such that the mutual information between the selected features and the class labels is maximized. Despite the simplicity of this objective, several open problems remain in optimization: for example, the automatic determination of the optimal subset size (i.e., the number of features), or equivalently, a stopping criterion when a greedy search strategy is adopted. In this paper, we suggest two stopping criteria that only require monitoring the conditional mutual information (CMI) among groups of variables. Using the recently developed multivariate matrix-based Rényi's α-entropy functional, which can be estimated directly from data samples, we show that the CMI among groups of variables can be computed easily, without any decomposition or approximation, making our criteria easy to implement and to integrate seamlessly into any existing information-theoretic feature selection method with a greedy search strategy.
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)
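As an illustration of the quantity being monitored, the sketch below (an editorial illustration, not the authors' code) estimates the matrix-based Rényi α-entropy from a unit-trace Gram matrix and combines such entropies into a CMI statistic; the Gaussian kernel, its width sigma, and the value of α are assumptions here.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gram(x, sigma=1.0):
    """Unit-trace Gaussian Gram matrix of the samples in x (one sample per row)."""
    k = np.exp(-cdist(x, x, "sqeuclidean") / (2.0 * sigma**2))
    return k / np.trace(k)

def renyi_entropy(a, alpha=1.01):
    """Matrix-based Renyi alpha-entropy from the eigenvalues of a unit-trace matrix."""
    lam = np.linalg.eigvalsh(a)
    lam = lam[lam > 1e-12]
    return np.log2(np.sum(lam**alpha)) / (1.0 - alpha)

def joint(*mats):
    """Joint representation via the Hadamard product, renormalized to unit trace."""
    h = mats[0].copy()
    for m in mats[1:]:
        h = h * m
    return h / np.trace(h)

def cmi(x, y, z, alpha=1.01):
    """I(X; Y | Z) = S(X,Z) + S(Y,Z) - S(X,Y,Z) - S(Z) in the matrix formulation."""
    ax, ay, az = gram(x), gram(y), gram(z)
    return (renyi_entropy(joint(ax, az), alpha)
            + renyi_entropy(joint(ay, az), alpha)
            - renyi_entropy(joint(ax, ay, az), alpha)
            - renyi_entropy(az, alpha))
```

A greedy selector would stop once the CMI between the labels and the remaining candidate features, conditioned on the already-selected subset, falls below a user-chosen threshold.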
16 pages, 2336 KiB  
Article
Cooling Effectiveness of a Data Center Room under Overhead Airflow via Entropy Generation Assessment in Transient Scenarios
by Luis Silva-Llanca, Marcelo del Valle, Alfonso Ortega and Andrés J. Díaz
Entropy 2019, 21(1), 98; https://doi.org/10.3390/e21010098 - 21 Jan 2019
Cited by 18 | Viewed by 4187
Abstract
Forecasting data center cooling demand remains a primary thermal management challenge in an increasingly large, energy-hungry global industry. This paper proposes a dynamic modeling approach to evaluate two different strategies for delivering cold air into a data center room. The common cooling method provides air through perforated floor tiles by means of a centralized distribution system, hindering flow management at the aisle level. We propose an idealized system in which five overhead heat exchangers are located above the aisle and handle the entire server cooling demand. In one case, the overhead heat exchangers force the airflow downwards into the aisle (Overhead Downward Flow, ODF); in the other, the flow is forced to move upwards (Overhead Upward Flow, OUF). A complete fluid dynamic, heat transfer, and thermodynamic analysis is proposed to model the system's thermal performance under both steady-state and transient conditions. Inside the servers and heat exchangers, the flow and heat transfer processes are modeled using a set of differential equations solved in MATLAB™ 2017a; this solution is coupled with ANSYS-Fluent™ 18, which computes the three-dimensional velocity, temperature, and turbulence fields on the air side. The two proposed approaches (ODF and OUF) are evaluated and compared by estimating their cooling effectiveness and the local entropy generation; the latter identifies the zones within the room responsible for increasing the inefficiencies (irreversibilities) of the system. Both approaches demonstrated similar performance, with a small advantage shown by OUF. The results of this investigation demonstrate a promising approach to on-demand data center cooling.
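For reference, the local (volumetric) entropy generation rate that such analyses map over the room is commonly written, for an incompressible Newtonian fluid, as (a standard textbook form; the paper's turbulence-averaged formulation may differ):

\[
\dot{S}'''_{\mathrm{gen}} \;=\; \frac{k}{T^{2}}\,\lvert\nabla T\rvert^{2} \;+\; \frac{\mu}{T}\,\Phi,
\]

where k is the thermal conductivity, μ the dynamic viscosity, and Φ the viscous dissipation function; the first term captures heat-transfer irreversibility and the second captures fluid-friction irreversibility.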
13 pages, 3497 KiB  
Article
Objective 3D Printed Surface Quality Assessment Based on Entropy of Depth Maps
by Jarosław Fastowicz, Marek Grudziński, Mateusz Tecław and Krzysztof Okarma
Entropy 2019, 21(1), 97; https://doi.org/10.3390/e21010097 - 21 Jan 2019
Cited by 27 | Viewed by 4564
Abstract
The rapid development and growing popularity of additive manufacturing technology lead to new challenges: reliably monitoring not only the progress of the 3D printing process but also the quality of the printed objects. The automatic objective assessment of the surface quality of 3D printed objects proposed in this paper, based on the analysis of depth maps, allows the quality of surfaces to be determined during printing for devices equipped with built-in 3D scanners. When low quality is detected, corrections can be made, or the printing process may be aborted to save filament, time, and energy. Applying entropy analysis to the 3D scans allows the surface regularity to be evaluated independently of the filament color, in contrast to many other possible methods based on the analysis of visible-light images. The results obtained using the proposed approach are encouraging, and a further combination of the proposed approach with camera-based methods may be possible as well.
(This article belongs to the Special Issue Entropy in Image Analysis)
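A minimal sketch of the kind of depth-map entropy such an assessment can build on (an editorial illustration; the bin count and per-patch evaluation are assumptions, and the paper's exact entropy variant may differ):

```python
import numpy as np

def depth_map_entropy(depth, bins=256):
    """Shannon entropy of the histogram of depth values over a surface patch."""
    hist, _ = np.histogram(depth, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A flat, regularly printed surface concentrates its depth values in few bins (low entropy), while cracks and ridges spread them out (high entropy), independently of the filament color.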
18 pages, 2918 KiB  
Article
Fault Diagnosis of Rolling Element Bearings with a Two-Step Scheme Based on Permutation Entropy and Random Forests
by Xiaoming Xue, Chaoshun Li, Suqun Cao, Jinchao Sun and Liyan Liu
Entropy 2019, 21(1), 96; https://doi.org/10.3390/e21010096 - 21 Jan 2019
Cited by 27 | Viewed by 3928
Abstract
This study presents a two-step fault diagnosis scheme for rolling element bearings that combines statistical classification and random forests-based classification. To account for the unequal sensitivity of features in different diagnosis steps, the proposed method uses permutation entropy and variational mode decomposition to characterize vibration signals at single and multiple scales. In the first step, permutation entropy features of the original signals are extracted at a single scale, and a statistical classification model based on Chebyshev's inequality is constructed to detect faults with a preliminary assessment of the bearing condition. In the second step, vibration signals with fault conditions are first decomposed into a collection of intrinsic mode functions using variational mode decomposition, and multiscale permutation entropy features derived from each mono-component are then extracted to identify the specific fault types. In order to improve the classification ability of the characteristic data, the out-of-bag estimate of the random forest is first employed to reselect and refine the original multiscale permutation entropy features; the refined features are then used as input data to train the random forests-based classification model. Finally, condition data of bearings with different fault conditions are used to evaluate the performance of the proposed method. The results indicate that the proposed method can effectively identify the working conditions and fault types of rolling element bearings.
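For readers unfamiliar with the first-step feature, a compact sketch of single-scale permutation entropy follows; the embedding dimension m and delay tau are assumptions here, and the multiscale variant applies the same statistic to coarse-grained copies of the signal.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy: distribution of ordinal patterns of length m."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i : i + (m - 1) * tau + 1 : tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(m))
```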
17 pages, 710 KiB  
Article
Co-Association Matrix-Based Multi-Layer Fusion for Community Detection in Attributed Networks
by Sheng Luo, Zhifei Zhang, Yuanjian Zhang and Shuwen Ma
Entropy 2019, 21(1), 95; https://doi.org/10.3390/e21010095 - 20 Jan 2019
Cited by 8 | Viewed by 3679
Abstract
Community detection is a challenging task in attributed networks, due to the data inconsistency between the network topological structure and node attributes. How to effectively and robustly fuse multi-source heterogeneous data plays an important role in community detection algorithms. Although some algorithms taking both topological structure and node attributes into account have been proposed in recent years, the fusion strategy is simple and usually adopts a linear combination; as a consequence, the detected community structure is vulnerable to small variations of the input data. To overcome this challenge, we develop a novel two-layer representation to capture the latent knowledge in both the topological structure and the node attributes of attributed networks. We then propose a weighted co-association matrix-based fusion algorithm (WCMFA) that detects the inherent community structure in attributed networks using multi-layer fusion strategies. It extends community detection from a single-view to a multi-view style, consistent with the way humans integrate information from multiple sources. Experiments show that our method is superior to state-of-the-art community detection algorithms for attributed networks.
(This article belongs to the Section Complexity)
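The central data structure is easy to picture: a co-association matrix records how often two nodes are grouped together across base partitions. A minimal unweighted sketch is given below; WCMFA's layer weights and fusion steps are beyond this illustration.

```python
import numpy as np

def co_association(partitions):
    """Fraction of partitions placing nodes i and j in the same community.
    partitions: list of label arrays, one per layer/view, each of length n."""
    n = len(partitions[0])
    c = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        c += (labels[:, None] == labels[None, :])
    return c / len(partitions)
```

A consensus community structure can then be obtained by clustering this matrix, which is what makes the result less sensitive to small perturbations of any single view.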
13 pages, 991 KiB  
Review
Transients as the Basis for Information Flow in Complex Adaptive Systems
by William Sulis
Entropy 2019, 21(1), 94; https://doi.org/10.3390/e21010094 - 20 Jan 2019
Cited by 5 | Viewed by 3460
Abstract
Information is the fundamental currency of naturally occurring complex adaptive systems, whether they are individual organisms or collective social insect colonies. Information appears to be more important than energy in determining the behavior of these systems. However, it is not the quantity of information but rather its salience or meaning which is significant. Salience is not, in general, associated with instantaneous events but rather with spatio-temporal transients of events. This requires a shift in theoretical focus from instantaneous states towards spatio-temporal transients as the proper object for studying information flow in naturally occurring complex adaptive systems. A primitive form of salience appears in simple complex systems models in the form of transient induced global response synchronization (TIGoRS). Sparse random samplings of spatio-temporal transients may induce stable collective responses from the system, establishing a stimulus–response relationship between the system and its environment, with the system parsing its environment into salient and non-salient stimuli. In the presence of TIGoRS, an embedded complex dynamical system becomes a primitive automaton, modeled as a Sulis machine.
(This article belongs to the Special Issue Information Theory in Complex Systems)
18 pages, 1206 KiB  
Article
Bayesian Analysis of Femtosecond Pump-Probe Photoelectron-Photoion Coincidence Spectra with Fluctuating Laser Intensities
by Pascal Heim, Michael Rumetshofer, Sascha Ranftl, Bernhard Thaler, Wolfgang E. Ernst, Markus Koch and Wolfgang von der Linden
Entropy 2019, 21(1), 93; https://doi.org/10.3390/e21010093 - 19 Jan 2019
Cited by 2 | Viewed by 3754
Abstract
This paper employs Bayesian probability theory to analyze data generated in femtosecond pump-probe photoelectron-photoion coincidence (PEPICO) experiments, which allow the investigation of ultrafast dynamical processes in photoexcited molecules. Bayesian probability theory is applied consistently to the data analysis problems occurring in these types of experiments, such as background subtraction and false coincidences. We previously demonstrated that the Bayesian formalism has many advantages, among which are the compensation of false coincidences, no overestimation of pump-only contributions, a significantly increased signal-to-noise ratio, and applicability to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, our approach allows experiments to be run at higher ionization rates, resulting in an appreciable reduction of data acquisition times. Extending our previous work, we here include fluctuating laser intensities, whose straightforward implementation highlights yet another advantage of the Bayesian formalism. Our method is thoroughly scrutinized with challenging mock data, where we find a minor impact of laser fluctuations on false coincidences, yet a noteworthy influence on background subtraction. We apply our algorithm to experimental data and discuss the impact of laser fluctuations on the analysis.
18 pages, 375 KiB  
Article
Remote Sampling with Applications to General Entanglement Simulation
by Gilles Brassard, Luc Devroye and Claude Gravel
Entropy 2019, 21(1), 92; https://doi.org/10.3390/e21010092 - 19 Jan 2019
Cited by 5 | Viewed by 3707
Abstract
We show how to sample exactly from discrete probability distributions whose defining parameters are distributed among remote parties. For this purpose, von Neumann's rejection algorithm is turned into a distributed sampling communication protocol. We study the expected number of bits communicated among the parties and also exhibit a trade-off between the number of rounds of the rejection algorithm and the number of bits transmitted in the initial phase. Finally, we apply remote sampling to the simulation of quantum entanglement in essentially its most general form: an arbitrary finite number m of parties share systems of arbitrary finite dimensions, on which they apply arbitrary measurements (not restricted to projective measurements, but restricted to finitely many possible outcomes). In the case where the dimension of the systems and the number of possible outcomes per party are bounded by a constant, it suffices to communicate an expected O(m²) bits in order to simulate exactly the outcomes that these measurements would have produced on those systems.
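The centralized primitive behind the protocol is von Neumann's rejection algorithm; a minimal sketch follows (the paper's contribution is distributing the acceptance test among remote parties, which this illustration does not show). The proposal distribution q and a bound M with p(x) ≤ M·q(x) are the usual requirements.

```python
import random

def rejection_sample(p, q_sample, q_prob, m_bound):
    """Draw one exact sample from the target pmf p using proposal q,
    where p(x) <= m_bound * q(x) for all x."""
    while True:
        x = q_sample()                                 # propose from q
        if random.random() < p(x) / (m_bound * q_prob(x)):
            return x                                   # accept; otherwise retry
```

For instance, a biased three-outcome distribution can be sampled through a uniform proposal (q(x) = 1/3) with m_bound = 3 · max(p).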
32 pages, 4657 KiB  
Article
Quantitative Quality Evaluation of Software Products by Considering Summary and Comments Entropy of a Reported Bug
by Madhu Kumari, Ananya Misra, Sanjay Misra, Luis Fernandez Sanz, Robertas Damasevicius and V.B. Singh
Entropy 2019, 21(1), 91; https://doi.org/10.3390/e21010091 - 19 Jan 2019
Cited by 22 | Viewed by 4297
Abstract
A software bug is characterized by its attributes, and various prediction models have been developed from these attributes to enhance the quality of software products. The reporting of bugs produces highly irregular patterns, and repository sizes are increasing at an enormous rate, resulting in uncertainty and irregularities; in the context of big data, this uncertainty and irregularity are termed veracity. In order to quantify these irregular and uncertain patterns, the authors apply entropy-based measures to the terms reported in the bug summaries and in the comments submitted by users, so that both the uncertainty and the irregular patterns are taken into account. In this paper, the authors consider that the bug fixing process depends not only on calendar time, testing effort, and testing coverage, but also on the bug summary description and comments. The paper proposes bug dependency-based mathematical models that describe the summary descriptions of bugs and the comments submitted by users in terms of entropy-based measures. The models were validated on different Eclipse project products. Models proposed in the literature exhibit different types of growth curves, mainly exponential, S-shaped, or mixtures of both; the proposed models were therefore compared with models following exponential, S-shaped, and mixed curves.
12 pages, 2789 KiB  
Article
Using Multiscale Entropy to Assess the Efficacy of Local Cooling on Reactive Hyperemia in People with a Spinal Cord Injury
by Fuyuan Liao, Tim D. Yang, Fu-Lien Wu, Chunmei Cao, Ayman Mohamed and Yih-Kuen Jan
Entropy 2019, 21(1), 90; https://doi.org/10.3390/e21010090 - 18 Jan 2019
Cited by 10 | Viewed by 3939
Abstract
Pressure ulcers are one of the most common complications of a spinal cord injury (SCI). Prolonged unrelieved pressure is thought to be the primary causative factor resulting in tissue ischemia and eventually pressure ulcers. Previous studies suggested that local cooling reduces skin ischemia of compressed soft tissues, based on smaller hyperemic responses. However, the effect of local cooling on the nonlinear properties of skin blood flow (SBF) during hyperemia is unknown. In this study, 10 wheelchair users with SCI and 10 able-bodied (AB) controls underwent three experimental protocols, each of which included a 10-min baseline period, a 20-min intervention period, and a 20-min period for recovering SBF. SBF was measured using laser Doppler flowmetry. During the intervention period, a pressure of 60 mmHg was applied to the sacral skin under three skin temperature settings: no temperature change, a decrease of 10 °C, and an increase of 10 °C. A multiscale entropy (MSE) method was employed to quantify the degree of regularity of the blood flow oscillations (BFO) associated with the SBF control mechanisms during baseline and reactive hyperemia. The results showed that under pressure with cooling, skin BFO in both people with SCI and AB controls were more regular at multiple time scales during hyperemia compared to baseline, whereas under pressure with no temperature change, and particularly under pressure with heating, BFO were more irregular during hyperemia compared to baseline. Moreover, the results of surrogate tests indicated that changes in the degree of regularity of BFO from baseline to hyperemia were only partially attributable to changes in the relative amplitudes of the endothelial, neurogenic, and myogenic components of BFO. These findings support the use of MSE to assess the efficacy of local cooling on reactive hyperemia and to assess the degree of skin ischemia in people with SCI.
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
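For orientation, MSE coarse-grains the signal at each scale and computes the sample entropy of the result; a compact (not performance-tuned) sketch follows, with the usual choices m = 2 and a tolerance r fixed from the original series, both assumptions here.

```python
import numpy as np

def sample_entropy(x, m=2, tol=0.1):
    """Sample entropy: -ln(A/B), with A and B the counts of template matches
    of length m+1 and m under the Chebyshev distance and tolerance tol."""
    def matches(mm):
        t = np.array([x[i : i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2.0   # unordered pairs, no self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.15):
    """Costa-style MSE: average non-overlapping windows of length s, then SampEn."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                              # tolerance fixed at scale 1
    curve = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        curve.append(sample_entropy(coarse, m, tol))
    return curve
```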
15 pages, 325 KiB  
Article
Poincaré and Log–Sobolev Inequalities for Mixtures
by André Schlichting
Entropy 2019, 21(1), 89; https://doi.org/10.3390/e21010089 - 18 Jan 2019
Cited by 5 | Viewed by 3794
Abstract
This work studies mixtures of probability measures on ℝⁿ and gives bounds on the Poincaré and log-Sobolev constants of two-component mixtures, provided that each component satisfies the functional inequality and the two components are close in the χ²-distance. The estimation of these constants for a mixture can be far more subtle than for its parts: even mixing Gaussian measures may produce a measure whose Hamiltonian potential possesses multiple wells, leading to metastability and large constants in Sobolev-type inequalities. In particular, the Poincaré constant stays bounded in the mixture parameter, whereas the log-Sobolev constant may blow up as the mixture ratio tends to 0 or 1. This observation generalizes the one by Chafaï and Malrieu to the multidimensional case. For a class of examples, the behavior is shown not to be a mere artifact of the method.
(This article belongs to the Special Issue Entropy and Information Inequalities)
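For reference, the two functional inequalities whose constants are bounded are (in one common normalization; conventions differ by factors of 2):

\[
\operatorname{Var}_\mu(f) \le C_{\mathrm{P}}(\mu)\int_{\mathbb{R}^n}\lvert\nabla f\rvert^2\,\mathrm{d}\mu,
\qquad
\operatorname{Ent}_\mu(f^2) \le C_{\mathrm{LS}}(\mu)\int_{\mathbb{R}^n}\lvert\nabla f\rvert^2\,\mathrm{d}\mu,
\]

where \(\operatorname{Ent}_\mu(g) = \int g \log\bigl(g / \int g\,\mathrm{d}\mu\bigr)\,\mathrm{d}\mu\); the paper bounds the constants of a two-component mixture in terms of the constants of its components and their χ²-distance.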
17 pages, 2879 KiB  
Article
Symmetries among Multivariate Information Measures Explored Using Möbius Operators
by David J. Galas and Nikita A. Sakhanenko
Entropy 2019, 21(1), 88; https://doi.org/10.3390/e21010088 - 18 Jan 2019
Cited by 6 | Viewed by 3606
Abstract
Relations between common information measures include the duality relations based on Möbius inversion on lattices, which are a direct consequence of the symmetries of the lattices of sets of variables (subsets ordered by inclusion). In this paper we use these lattice and functional symmetries to provide a unifying formalism that reveals some new relations and systematizes the symmetries of the information functions. To our knowledge, this is the first systematic examination of the full range of relationships of this class of functions. We define operators on functions on these lattices, based on the Möbius inversions, that map functions into one another; we call these Möbius operators and show that they form a simple group isomorphic to the symmetric group S₃. Relations among the set of functions on the lattice are transparently expressed in terms of the operator algebra and, when applied to the information measures, can be used to derive a wide range of relationships among diverse information measures. The Möbius operator algebra is then naturally generalized, which yields an even wider range of new relationships.
13 pages, 6061 KiB  
Article
Parallel Lives: A Local-Realistic Interpretation of “Nonlocal” Boxes
by Gilles Brassard and Paul Raymond-Robichaud
Entropy 2019, 21(1), 87; https://doi.org/10.3390/e21010087 - 18 Jan 2019
Cited by 19 | Viewed by 8308
Abstract
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk, in the simplest possible manner, the myth that equates local realism with local hidden variables. Along the way, we reinterpret the celebrated 1935 argument of Einstein, Podolsky and Rosen, and come to the conclusion that they were right to question the completeness of the Copenhagen version of quantum theory, provided one believes in a local-realistic universe. Throughout our journey, we strive to explain our views from first principles, without expecting mathematical sophistication or specialized prior knowledge from the reader.
(This article belongs to the Special Issue Quantum Nonlocality)
11 pages, 584 KiB  
Article
Information Entropy of Tight-Binding Random Networks with Losses and Gain: Scaling and Universality
by C. T. Martínez-Martínez and J. A. Méndez-Bermúdez
Entropy 2019, 21(1), 86; https://doi.org/10.3390/e21010086 - 18 Jan 2019
Cited by 8 | Viewed by 3217
Abstract
We study the localization properties of the eigenvectors, characterized by their information entropy, of tight-binding random networks with balanced losses and gain. The random network model, which is based on Erdős–Rényi (ER) graphs, is defined by three parameters: the network size N, the network connectivity α, and the losses-and-gain strength γ. Here, N and α are the standard parameters of ER graphs, while we introduce losses and gain by including complex self-loops on all vertices with imaginary amplitude iγ and random balanced signs, thus breaking the Hermiticity of the corresponding adjacency matrices and inducing complex spectra. Using extensive numerical simulations, we define a scaling parameter ξ ≡ ξ(N, α, γ) that fixes the localization properties of the eigenvectors of our random network model: when ξ < 0.1 (ξ > 10), the eigenvectors are localized (extended), while the localization-to-delocalization transition occurs for 0.1 < ξ < 10. Moreover, to extend the applicability of our findings, we demonstrate that for fixed ξ, the spectral properties (characterized by the position of the eigenvalues in the complex plane) of our network model are also universal; i.e., they do not depend on the specific values of the network parameters.
(This article belongs to the Special Issue Complex Networks from Information Measures)
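A minimal sketch of the model and the eigenvector information entropy (an editorial illustration with 0/1 couplings; the paper's ensemble details, e.g., random coupling weights, are assumptions left out here):

```python
import numpy as np

def eigenvector_entropies(n=100, alpha=0.5, gamma=0.1, seed=0):
    """ER-based tight-binding matrix with balanced imaginary self-loops;
    returns the Shannon (information) entropy of each right eigenvector."""
    rng = np.random.default_rng(seed)
    a = np.triu(rng.random((n, n)) < alpha, 1).astype(complex)
    a = a + a.T                                      # symmetric ER couplings
    signs = np.array([-1.0] * (n // 2) + [1.0] * (n - n // 2))
    rng.shuffle(signs)                               # balanced losses and gain
    a += 1j * gamma * np.diag(signs)                 # breaks Hermiticity
    _, vecs = np.linalg.eig(a)
    p = np.abs(vecs) ** 2
    p /= p.sum(axis=0)                               # normalize each eigenvector
    return -np.sum(p * np.log(np.where(p > 0, p, 1.0)), axis=0)
```

Localized eigenvectors concentrate weight on few vertices (entropy near 0), while extended ones spread over the whole network (entropy near ln N).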
12 pages, 1601 KiB  
Article
Performance Analysis of a Proton Exchange Membrane Fuel Cell Based Syngas
by Xiuqin Zhang, Qiubao Lin, Huiying Liu, Xiaowei Chen, Sunqing Su and Meng Ni
Entropy 2019, 21(1), 85; https://doi.org/10.3390/e21010085 - 18 Jan 2019
Cited by 4 | Viewed by 3134
Abstract
External chemical reactors for steam reforming and water-gas shift reactions are needed in a proton exchange membrane (PEM) fuel cell system fueled by syngas. To preheat the syngas and maintain a stable steam reforming reaction at 600 °C, residual hydrogen from the fuel cell and a certain amount of additional syngas are burned. The combustion temperature is calculated, and the molar ratio of syngas fed to the burner and to the steam reformer is determined. Based on thermodynamics and electrochemistry, the electric power density and energy conversion efficiency of a syngas-fueled PEM fuel cell are expressed. The effects of the temperature, the hydrogen utilization factor at the anode, and the molar ratio of syngas fed to the burner and the steam reformer on the performance of the PEM fuel cell are discussed, and the key parameters for achieving the maximum power density or efficiency are determined. This manuscript presents the detailed operating process of a PEM fuel cell, the allocation of syngas between combustion and electricity generation, and the feasibility of a PEM fuel cell using syngas.
(This article belongs to the Special Issue Advances in Applied Thermodynamics III)
14 pages, 3363 KiB  
Article
Desalination Processes’ Efficiency and Future Roadmap
by Muhammad Wakil Shahzad, Muhammad Burhan, Doskhan Ybyraiymkul and Kim Choon Ng
Entropy 2019, 21(1), 84; https://doi.org/10.3390/e21010084 - 18 Jan 2019
Cited by 56 | Viewed by 7140
Abstract
For future sustainable seawater desalination, the importance of achieving better energy efficiency in the existing 19,500 commercial-scale desalination plants cannot be overemphasized. A major concern of the desalination industry is the inadequate approach to energy efficiency evaluation of diverse seawater desalination processes, which omits the grade of the energy supplied. These conventional approaches would suffice if the efficacy comparison were conducted only among processes with the same energy input. The misconception of treating all derived energies as equivalent in the desalination industry has severe economic and environmental consequences: energy and desalination system planners unconsciously make serious judgmental errors in the process selection of green installations because the efficacy data are flawed or inaccurate. Implementation of inferior-efficacy technologies has been observed in many water-stressed countries; this can immediately burden a country's economy with a higher unit energy cost and cause more undesirable environmental effects. In this article, a standard primary energy-based thermodynamic framework is presented that addresses energy efficacy fairly and accurately. It shows clearly that a thermally driven process consumes 2.5–3% of standard primary energy (SPE) when combined with power plants. A standard universal performance ratio-based evaluation method is proposed, showing that the performance of all desalination processes falls within 10–14% of the thermodynamic limit. To achieve the 2030 sustainability goals, innovative processes are required that reach 25–30% of the thermodynamic limit.
10 pages, 598 KiB  
Article
Reconstruction of PET Images Using Cross-Entropy and Field of Experts
by Jose Mejia, Alberto Ochoa and Boris Mederos
Entropy 2019, 21(1), 83; https://doi.org/10.3390/e21010083 - 18 Jan 2019
Cited by 4 | Viewed by 3195
Abstract
The reconstruction of positron emission tomography (PET) data is a difficult task, particularly at low count rates, because Poisson noise has a significant influence on the statistical uncertainty of PET measurements. Prior information is frequently used to improve image quality. In this paper, we propose the use of a field of experts to model a priori structure and capture the anatomical spatial dependencies of PET images, addressing the problems of noise and low-count data that make image reconstruction difficult. We reconstruct PET images using a modified MXE algorithm, which minimizes an objective function with the cross-entropy as a fidelity term, while the field-of-experts model is incorporated as a regularizing term. Comparisons with the expectation maximization algorithm and with an iterative method using a prior that penalizes relative differences showed that the proposed method can lead to accurate estimation of the image, especially for acquisitions at low count rates.
(This article belongs to the Special Issue Entropy in Image Analysis)
14 pages, 403 KiB  
Article
Approximating Ground States by Neural Network Quantum States
by Ying Yang, Chengyang Zhang and Huaixin Cao
Entropy 2019, 21(1), 82; https://doi.org/10.3390/e21010082 - 17 Jan 2019
Cited by 5 | Viewed by 3807
Abstract
Motivated by Carleo's work (Science, 2017, 355: 602), we focus on finding neural network quantum state approximations of the unknown ground state of a given Hamiltonian H in terms of the best relative error, and we explore the influence of sums, tensor products, and local unitaries of Hamiltonians on the best relative error. We also illustrate our method with some examples.
(This article belongs to the Collection Quantum Information)
19 pages, 5969 KiB  
Article
Partial Discharge Fault Diagnosis Based on Multi-Scale Dispersion Entropy and a Hypersphere Multiclass Support Vector Machine
by Haikun Shang, Feng Li and Yingjie Wu
Entropy 2019, 21(1), 81; https://doi.org/10.3390/e21010081 - 17 Jan 2019
Cited by 21 | Viewed by 3900
Abstract
Partial discharge (PD) fault analysis is an important tool for insulation condition diagnosis of electrical equipment. In order to overcome the limitations of traditional PD fault diagnosis, a novel feature extraction approach based on variational mode decomposition (VMD) and multi-scale dispersion entropy (MDE) is proposed, and a hypersphere multiclass support vector machine (HMSVM) is used for PD pattern recognition with the extracted PD features. First, the original PD signal is decomposed with VMD to obtain intrinsic mode functions (IMFs). Second, proper IMFs are selected according to the observed central frequencies, and the MDE values of each selected IMF are calculated; principal component analysis (PCA) is then introduced to extract the effective principal components of the MDE. Finally, the extracted principal factors are used as PD features and sent to the HMSVM classifier. Experimental results demonstrate that the VMD-MDE feature extraction method yields characteristic parameters that represent the dominant PD features, and the recognition results verify the effectiveness and superiority of the proposed PD fault diagnosis method.
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
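For readers new to the statistic, a compact sketch of single-scale dispersion entropy follows, using the standard normal-CDF mapping; the number of classes c, embedding dimension m, and delay d are assumptions here, and the multiscale variant applies the same statistic to coarse-grained signals.

```python
import numpy as np
from scipy.stats import norm

def dispersion_entropy(x, c=6, m=3, d=1):
    """Normalized dispersion entropy: map samples to c classes through the
    normal CDF, then count embedded dispersion patterns of length m."""
    x = np.asarray(x, dtype=float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())           # map to (0, 1)
    z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)   # classes 1..c
    n = len(z) - (m - 1) * d
    counts = {}
    for i in range(n):
        pattern = tuple(z[i : i + (m - 1) * d + 1 : d])
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return -np.sum(p * np.log(p)) / np.log(c**m)
```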
15 pages, 626 KiB  
Article
Efficient High-Dimensional Quantum Key Distribution with Hybrid Encoding
by Yonggi Jo, Hee Su Park, Seung-Woo Lee and Wonmin Son
Entropy 2019, 21(1), 80; https://doi.org/10.3390/e21010080 - 17 Jan 2019
Cited by 9 | Viewed by 4639
Abstract
We propose a schematic setup for quantum key distribution (QKD) with an improved secret key rate based on high-dimensional quantum states. Two degrees of freedom of a single photon, orbital angular momentum modes and multi-path modes, are used to encode the secret key information. Its practical implementation consists of optical elements within the reach of current technologies, such as a multiport interferometer. We show that the proposed, feasible protocol improves the secret key rate over the previous two-dimensional protocol known as detector-device-independent QKD.
(This article belongs to the Special Issue Entropic Uncertainty Relations and Their Applications)
11 pages, 918 KiB  
Article
Quaternion Entropy for Analysis of Gait Data
by Agnieszka Szczęsna
Entropy 2019, 21(1), 79; https://doi.org/10.3390/e21010079 - 17 Jan 2019
Cited by 12 | Viewed by 3781
Abstract
Nonlinear dynamical analysis is a powerful approach to understanding biological systems. One of the most used metrics of system complexity is the Kolmogorov entropy, but its calculation requires long, noise-free input signals, which are very hard to obtain in real situations. Statistics such as approximate and sample entropy allow entropy to be estimated directly from time signals; based on these, a new measure for quaternion signals is introduced here. This work presents an example application of nonlinear time series analysis in which the new quaternion approximate entropy is used to analyze human gait kinematic data. The quaternion entropy was applied to quaternion signals representing segment orientations over time during human gait. The research was aimed at assessing the influence of both walking speed and ground slope on gait control during treadmill walking. Gait data were obtained with an optical motion capture system.
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
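As a sketch of how an approximate-entropy-style statistic can be carried over to unit-quaternion orientation data, the following uses the geodesic angle between quaternions as the sample distance; this metric choice and the parameters m and r are assumptions, and the paper's quaternion entropy may be defined differently.

```python
import numpy as np

def quat_angle(q1, q2):
    """Sign-invariant geodesic angle between unit quaternions (length-4 arrays)."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), 0.0, 1.0))

def quaternion_apen(q, m=2, r=0.2):
    """Approximate entropy of a unit-quaternion series q (shape (n, 4)),
    ApEn = Phi_m(r) - Phi_{m+1}(r), with Chebyshev distance over embeddings."""
    n = len(q)
    def phi(mm):
        cnt = np.zeros(n - mm + 1)
        for i in range(n - mm + 1):
            for j in range(n - mm + 1):
                d = max(quat_angle(q[i + k], q[j + k]) for k in range(mm))
                cnt[i] += d <= r
        return np.mean(np.log(cnt / (n - mm + 1)))
    return phi(m) - phi(m + 1)
```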
15 pages, 3969 KiB  
Article
Uncertainty Assessment of Hyperspectral Image Classification: Deep Learning vs. Random Forest
by Majid Shadman Roodposhti, Jagannath Aryal, Arko Lucieer and Brett A. Bryan
Entropy 2019, 21(1), 78; https://doi.org/10.3390/e21010078 - 16 Jan 2019
Cited by 29 | Viewed by 6591
Abstract
Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses of traditional approaches. Traditional approaches to mapping accuracy assessment are based on a confusion matrix, and hence are not only dependent on the availability of test data but also incapable of capturing the spatial variation of classification error. Here, we apply and compare two uncertainty assessment techniques that do not rely on the availability of test data and enable the spatial characterization of classification accuracy before the validation phase, promoting the assessment of error propagation within classified imagery products. We compared the performance of an emerging deep neural network (DNN) with the popular random forest (RF) technique. Uncertainty assessment was implemented by calculating the Shannon entropy of the class probabilities predicted by the DNN and RF for every pixel. The classification uncertainties of the DNN and RF were quantified for two different hyperspectral image datasets, Salinas and Indian Pines. We then compared the uncertainty against the classification accuracy of the techniques, represented by a modified root mean square error (RMSE). The results indicate that, considering the modified RMSE values for various sample sizes of both datasets, the entropy derived from the DNN algorithm is a better estimate of classification accuracy and hence provides a superior uncertainty estimate at the pixel level.
(This article belongs to the Special Issue Entropy in Image Analysis)
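The per-pixel uncertainty map described here is essentially a one-liner once class probabilities are available (a sketch; probs would come from the DNN softmax or the RF's predict_proba output reshaped to the image grid):

```python
import numpy as np

def pixelwise_entropy(probs):
    """Shannon entropy of class probabilities, per pixel.
    probs: array of shape (height, width, n_classes), each pixel summing to 1."""
    p = np.clip(probs, 1e-12, 1.0)   # guard against log(0)
    return -np.sum(p * np.log(p), axis=-1)
```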
13 pages, 259 KiB  
Article
Logical Structures Underlying Quantum Computing
by Federico Holik, Giuseppe Sergioli, Hector Freytes and Angel Plastino
Entropy 2019, 21(1), 77; https://doi.org/10.3390/e21010077 - 16 Jan 2019
Cited by 6 | Viewed by 3562
Abstract
In this work we advance a generalization of quantum computational logics capable of dealing with some important examples of quantum algorithms. We outline an algebraic axiomatization of these structures.
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
9 pages, 892 KiB  
Article
Dynamical and Coupling Structure of Pulse-Coupled Networks in Maximum Entropy Analysis
by Zhi-Qin John Xu, Douglas Zhou and David Cai
Entropy 2019, 21(1), 76; https://doi.org/10.3390/e21010076 - 16 Jan 2019
Cited by 2 | Viewed by 3126
Abstract
Maximum entropy principle (MEP) analysis with few non-zero effective interactions successfully characterizes the distribution of dynamical states of pulse-coupled networks in many fields, e.g., in neuroscience. To better understand the underlying mechanism, we establish a relation between the dynamical structure, i.e., the effective interactions in the MEP analysis, and the anatomical coupling structure of pulse-coupled networks; this relation helps explain how a sparse coupling structure can lead to a sparse coding by effective interactions, and it quantitatively displays how closely the dynamical structure is related to the anatomical coupling structure.
11 pages, 4743 KiB  
Article
Effects of Silicon Content on the Microstructures and Mechanical Properties of (AlCrTiZrV)-Siₓ-N High-Entropy Alloy Films
by Jingrui Niu, Wei Li, Ping Liu, Ke Zhang, Fengcang Ma, Xiaohong Chen, Rui Feng and Peter K. Liaw
Entropy 2019, 21(1), 75; https://doi.org/10.3390/e21010075 - 16 Jan 2019
Cited by 8 | Viewed by 4608
Abstract
A series of (AlCrTiZrV)-Siₓ-N films with different silicon contents were deposited on monocrystalline silicon substrates by direct-current (DC) magnetron sputtering. The films were characterized by X-ray diffractometry (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), and nano-indentation, and the effects of the silicon content on the microstructures and mechanical properties of the films were investigated. The experimental results show that the (AlCrTiZrV)N films grow in columnar grains and present a (200) preferential growth orientation. The addition of silicon leads to the disappearance of the (200) peak and to grain refinement of the (AlCrTiZrV)-Siₓ-N films. Meanwhile, a reticular amorphous phase forms, producing a nanocomposite structure in which the nanocrystallites are encapsulated by the amorphous phase. With increasing silicon content, the mechanical properties first improve and then degrade: the maximal hardness and modulus of the films reach 34.3 GPa and 301.5 GPa, respectively, at a silicon content (x) of 8 vol.%. The strengthening effect of the (AlCrTiZrV)-Siₓ-N film can be mainly attributed to the formation of the nanocomposite structure.
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
10 pages, 2022 KiB  
Article
Thermodynamic Analysis of Entropy Generation Minimization in Thermally Dissipating Flow Over a Thin Needle Moving in a Parallel Free Stream of Two Newtonian Fluids
by Ilyas Khan, Waqar A. Khan, Muhammad Qasim, Idrees Afridi and Sayer O. Alharbi
Entropy 2019, 21(1), 74; https://doi.org/10.3390/e21010074 - 16 Jan 2019
Cited by 21 | Viewed by 4376
Abstract
This article is devoted to studying the sustainability of entropy generation in an incompressible thermal flow of Newtonian fluids over a thin needle that is moving in a parallel stream. Two types of Newtonian fluids, water and air, are considered, and the energy dissipation term is included in the energy equation. It is presumed that u (the free stream velocity) is in the positive axial direction (x-axis) and that the thin needle moves in the same or the opposite direction. The reduced self-similar governing equations are solved numerically with the aid of the shooting technique combined with the fourth-order Runge–Kutta method. Using similarity transformations, expressions for the dimensionless volumetric entropy generation rate and the Bejan number are obtained. The effects of the Prandtl number, the Eckert number, and the dimensionless temperature parameter are discussed graphically in detail for water and air as the Newtonian fluids.
(This article belongs to the Special Issue Entropy Generation Minimization II)
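For reference, the Bejan number referred to here is the standard ratio of heat-transfer irreversibility to total irreversibility:

\[
\mathrm{Be} \;=\; \frac{\dot{S}'''_{\mathrm{gen},\,\mathrm{heat}}}{\dot{S}'''_{\mathrm{gen},\,\mathrm{heat}} + \dot{S}'''_{\mathrm{gen},\,\mathrm{friction}}},
\]

so Be → 1 when thermal irreversibility dominates and Be → 0 when viscous dissipation dominates.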
19 pages, 342 KiB  
Article
Negation of Belief Function Based on the Total Uncertainty Measure
by Kangyang Xie and Fuyuan Xiao
Entropy 2019, 21(1), 73; https://doi.org/10.3390/e21010073 - 15 Jan 2019
Cited by 21 | Viewed by 3467
Abstract
The negation of a probability provides a new way of looking at information representation, but the negation of a basic probability assignment (BPA) remains an open issue. To address it, a novel negation method for basic probability assignments, based on a total uncertainty measure, is proposed in this paper; the uncertainty of non-singleton elements of the power set is taken into account. Compared with the negation of a probability distribution, the proposed negation of a BPA differs in that the BPA of a given element is reassigned to the other elements of the power set, with the reassignment weight proportional to the cardinality of the intersection of that element with each remaining element. Notably, the proposed negation of a BPA reduces to the negation of a probability distribution when the BPA reduces to a classical probability. Furthermore, it is proved mathematically that the proposed negation method is indeed based on the maximum uncertainty.
(This article belongs to the Section Information Theory, Probability and Statistics)
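A literal reading of the reassignment rule stated in the abstract can be sketched as follows (an editorial illustration: the restriction to focal elements, the normalization, and the handling of zero-weight cases are assumptions; consult the paper for the exact definition):

```python
def negate_bpa(bpa):
    """Negation of a basic probability assignment: each element's mass is
    reassigned to the other elements, weighted by the cardinality of its
    intersection with each of them.
    bpa: dict mapping frozensets to masses summing to 1."""
    neg = {e: 0.0 for e in bpa}
    for a, mass in bpa.items():
        others = [b for b in bpa if b != a]
        weights = [len(a & b) for b in others]
        total = sum(weights)
        if total == 0:
            neg[a] += mass        # assumed fallback: keep mass in place
            continue
        for b, w in zip(others, weights):
            neg[b] += mass * w / total
    return neg

# e.g., negate_bpa({frozenset("a"): 0.6, frozenset("ab"): 0.3, frozenset("b"): 0.1})
```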
17 pages, 321 KiB  
Article
A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio
by Monika Pinchas
Entropy 2019, 21(1), 72; https://doi.org/10.3390/e21010072 - 15 Jan 2019
Cited by 7 | Viewed by 2863
Abstract
In the literature, one can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized or deconvolved output) that involve the maximum entropy density approximation technique. The main drawback of these algorithms is the heavy computational burden involved in calculating the expression for the conditional expectation; in addition, none of these techniques are applicable for signal-to-noise ratios below 7 dB. In this paper, I propose a new closed-form approximated expression for the conditional expectation, based on a previously obtained expression in which the equalized output probability density function is calculated via the approximated input probability density function, itself approximated with the maximum entropy density technique. This newly proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation based on the maximum entropy approximation technique. The simulation results indicate that the new algorithm, with the newly proposed Lagrange multipliers, is suitable for signal-to-noise ratios down to 0 dB and improves the equalization performance, from the residual inter-symbol-interference point of view, compared with the previous algorithms based on the conditional expectation obtained via the maximum entropy technique.
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
32 pages, 308 KiB  
Editorial
Acknowledgement to Reviewers of Entropy in 2018
by Entropy Editorial Office
Entropy 2019, 21(1), 71; https://doi.org/10.3390/e21010071 - 15 Jan 2019
Viewed by 3233
Abstract
Rigorous peer-review is the corner-stone of high-quality academic publishing [...]
13 pages, 1460 KiB  
Article
The Effect of Cognitive Resource Competition Due to Dual-Tasking on the Irregularity and Control of Postural Movement Components
by Thomas Haid and Peter Federolf
Entropy 2019, 21(1), 70; https://doi.org/10.3390/e21010070 - 15 Jan 2019
Cited by 13 | Viewed by 4722
Abstract
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability, but the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to studying the control of movement components (PMs), and n-shapes have been found in measures of sway irregularity, we hypothesized (H1) that the irregularity of PMs, their respective control, and the control tightness would display the n-shape. Furthermore, according to the minimal intervention principle, (H2) different PMs should be affected differently. Finally, (H3) we expected stronger dual-tasking effects in the older population, due to limited cognitive resources. We measured the kinematics of forty-one healthy volunteers (23 aged 26 ± 3; 18 aged 59 ± 4) performing 80-s tandem stances under five conditions (single-task and auditory n-back task, n = 1–4), and computed sample entropies on PM time series together with two novel measures of control tightness. In the PM most critical for stability, the control tightness decreased steadily and, in contrast to H3, decreased further in the younger group. Nevertheless, we found n-shapes in most variables, with differing magnitudes, supporting H1 and H2. These results suggest that control tightness may deteriorate steadily with increased cognitive load in critical movements, despite the otherwise evident n-shaped relationship.
(This article belongs to the Section Complexity)