Entropy, Volume 21, Issue 6 (June 2019) – 85 articles

Cover Story (view full-size image): The paper establishes a thermodynamic-like phenomenon related to the (re)programmability of computer programs as discrete dynamical systems subject to the laws of information and algorithmic probability. The lack of symmetry in the operation of rewiring networks seen as computer programs, with regard to algorithmic randomness, indicates a natural direction for digital thermodynamics. After demonstrating that the set of maximum entropy graphs is not the same as the set of maximum algorithmically complex graphs, the paper introduces a principle akin to maximum entropy but based on algorithmic complexity. The principle of maximum algorithmic randomness (MAR) can therefore be viewed as a refinement of Maxent, as it selects a proper subset of the algorithmically random graphs that also possesses the highest entropy.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 4798 KiB  
Article
A New Feature Extraction Method for Ship-Radiated Noise Based on Improved CEEMDAN, Normalized Mutual Information and Multiscale Improved Permutation Entropy
by Zhe Chen, Yaan Li, Renjie Cao, Wasiq Ali, Jing Yu and Hongtao Liang
Entropy 2019, 21(6), 624; https://doi.org/10.3390/e21060624 - 25 Jun 2019
Cited by 22 | Viewed by 3815
Abstract
Extracting useful features from ship-radiated noise can improve the performance of passive sonar. The entropy feature is an important supplement to existing technologies for ship classification. However, the existing entropy feature extraction methods for ship-radiated noise are less reliable under noisy conditions because they lack noise reduction procedures or are single-scale based. In order to solve these problems simultaneously, a new feature extraction method is proposed based on improved complementary ensemble empirical mode decomposition with adaptive noise (ICEEMDAN), normalized mutual information (norMI), and multiscale improved permutation entropy (MIPE). Firstly, the ICEEMDAN is utilized to obtain a group of intrinsic mode functions (IMFs) from ship-radiated noise. The noise reduction process is then conducted by identifying and eliminating the noise IMFs. Next, the norMI and MIPE of the signal-dominant IMFs are calculated, and the norMI is used to weight the corresponding MIPE result. The multi-scale entropy feature is finally defined as the sum of the weighted MIPE results. Experimental results show that the recognition rate of the proposed method reaches 90.67% and 83% under noise-free and 5 dB conditions, respectively, which is much higher than that of existing entropy feature extraction algorithms. Hence, the proposed method is more reliable and suitable for feature extraction of ship-radiated noise in practice. Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics)
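The final weighting step described in the abstract (each signal-dominant IMF's multiscale entropy scaled by its normalized mutual information) can be sketched as follows. This is an illustrative reduction, not the authors' implementation: a single-scale normalized permutation entropy stands in for the full MIPE, and the norMI values are taken as given inputs.

```python
import math

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of order m (simplified single-scale
    stand-in for MIPE); 0 for monotone series, near 1 for random series."""
    counts = {}
    for i in range(len(x) - m + 1):
        # ordinal pattern = argsort of the m consecutive values
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m))  # normalize to [0, 1]

def weighted_entropy_feature(imfs, mi_weights):
    """Feature = sum of norMI-weighted entropies over signal-dominant IMFs."""
    total = sum(mi_weights)
    w = [v / total for v in mi_weights]  # normalize the MI weights to sum to 1
    return sum(wi * permutation_entropy(imf) for wi, imf in zip(w, imfs))
```

With normalized weights, the fused feature stays in [0, 1], so features from different recordings remain comparable.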

20 pages, 1251 KiB  
Article
Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples
by Damián G. Hernández and Inés Samengo
Entropy 2019, 21(6), 623; https://doi.org/10.3390/e21060623 - 25 Jun 2019
Cited by 11 | Viewed by 6835
Abstract
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables—the one with minimal entropy—is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistics determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
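The abstract's starting point is the identity I(X;Y) = H(X) − H(X|Y) = H(X) − (H(X,Y) − H(Y)). A naive plug-in estimator built on it, the kind that exhibits the undersampling bias the paper addresses, can be sketched as below (in nats; the paper's Bayesian estimator is not reproduced here).

```python
import math
from collections import Counter

def plug_in_mi(pairs):
    """Naive plug-in estimate of I(X;Y) = H(X) - H(X|Y) in nats from a list
    of (x, y) samples. Strongly biased in the undersampled regime, which is
    the problem the Bayesian estimators above are designed to mitigate."""
    n = len(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    pxy = Counter(pairs)
    h = lambda counts: -sum(c / n * math.log(c / n) for c in counts.values())
    # H(X|Y) = H(X,Y) - H(Y)
    return h(px) - (h(pxy) - h(py))
```

For independent uniform variables the estimate is zero only with exhaustive sampling; with few samples it is systematically positive, illustrating the bias.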

16 pages, 3133 KiB  
Article
Multi-Scale Feature Fusion for Coal-Rock Recognition Based on Completed Local Binary Pattern and Convolution Neural Network
by Xiaoyang Liu, Wei Jing, Mingxuan Zhou and Yuxing Li
Entropy 2019, 21(6), 622; https://doi.org/10.3390/e21060622 - 25 Jun 2019
Cited by 21 | Viewed by 3740
Abstract
Automatic coal-rock recognition is one of the critical technologies for intelligent coal mining and processing. Most existing coal-rock recognition methods have some defects, such as unsatisfactory performance and low robustness. To solve these problems, and taking the distinctive visual features of coal and rock into consideration, the multi-scale feature fusion coal-rock recognition (MFFCRR) model based on a multi-scale Completed Local Binary Pattern (CLBP) and a Convolution Neural Network (CNN) is proposed in this paper. Firstly, multi-scale CLBP features, which represent the texture information of the coal-rock image, are extracted from coal-rock image samples in the Texture Feature Extraction (TFE) sub-model. Secondly, high-level deep features, which represent the macroscopic information of the coal-rock image, are extracted from coal-rock image samples in the Deep Feature Extraction (DFE) sub-model. The texture information and macroscopic information are acquired based on information theory. Thirdly, the multi-scale feature vector is generated by fusing the multi-scale CLBP feature vector and the deep feature vector. Finally, multi-scale feature vectors are input to a nearest neighbor classifier with the chi-square distance to realize coal-rock recognition. Experimental results show that the coal-rock image recognition accuracy of the proposed MFFCRR model reaches 97.9167%, an increase of 2–3% over state-of-the-art coal-rock recognition methods. Full article
(This article belongs to the Section Signal and Data Analysis)
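The final classification step, a nearest-neighbor rule under the chi-square distance, is standard and can be sketched directly. The feature vectors here are hypothetical placeholders for the fused CLBP + deep features.

```python
def chi_square_distance(u, v, eps=1e-12):
    """Chi-square distance between two nonnegative feature vectors
    (e.g. fused histogram-like CLBP + CNN features)."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(u, v))

def nearest_neighbor(query, train_feats, train_labels):
    """1-NN classification with the chi-square distance."""
    dists = [chi_square_distance(query, f) for f in train_feats]
    return train_labels[dists.index(min(dists))]
```

The chi-square distance down-weights differences in bins where both histograms have large mass, which suits normalized texture histograms better than plain Euclidean distance.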

21 pages, 5215 KiB  
Article
Time-Shift Multi-scale Weighted Permutation Entropy and GWO-SVM Based Fault Diagnosis Approach for Rolling Bearing
by Zhilin Dong, Jinde Zheng, Siqi Huang, Haiyang Pan and Qingyun Liu
Entropy 2019, 21(6), 621; https://doi.org/10.3390/e21060621 - 25 Jun 2019
Cited by 27 | Viewed by 3711
Abstract
Multi-scale permutation entropy (MPE) is an effective nonlinear dynamic approach for measuring the complexity of time series, and it has been widely applied to fault feature representation of rolling bearings. However, the coarse-grained time series in MPE becomes shorter and shorter as the scale factor increases, which causes an imprecise estimation of permutation entropy. In addition, the permutation entropy used in MPE does not consider the different amplitudes of identical patterns. To solve these issues, the time-shift multi-scale weighted permutation entropy (TSMWPE) approach is proposed in this paper. The inadequate coarse-graining process of MPE is improved by using time-shifted time series, and the probability calculation, which otherwise ignores pattern amplitudes, is refined by introducing a weighting operation. The parameter selections of TSMWPE were studied by analyzing two different noise signals. The stability and robustness were also studied by comparing TSMWPE with TSMPE and MPE. Based on the advantages of TSMWPE, an intelligent fault diagnosis method for rolling bearings is proposed by combining it with a gray wolf optimized support vector machine for fault classification. The proposed fault diagnostic method was applied to two cases of experimental rolling bearing data analysis, and the results show that it diagnoses the fault category and severity of rolling bearings accurately, with a recognition rate higher than those of the existing comparison methods. Full article
(This article belongs to the Special Issue Permutation Entropy: Theory and Applications)
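The contrast between the two coarse-graining schemes mentioned above can be made concrete. In classical MPE, scale s averages non-overlapping windows and shrinks the series by a factor of s; the time-shift scheme instead takes the s interleaved subsequences, so no samples are discarded. A minimal sketch:

```python
def timeshift_series(x, scale):
    """Time-shift multiscale step: for scale s, return the s shifted
    subsequences x[k], x[k+s], x[k+2s], ... for k = 0..s-1. Every sample
    of x appears in exactly one subsequence."""
    return [x[k::scale] for k in range(scale)]

def coarse_grain(x, scale):
    """Classical MPE coarse-graining (non-overlapping window means),
    shown for contrast; the result has only len(x)//scale points."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

At large scale factors the time-shift subsequences remain long enough for a stable entropy estimate, which is the advantage the abstract points to.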

11 pages, 1977 KiB  
Article
Ternary Logic of Motion to Resolve Kinematic Frictional Paradoxes
by Michael Nosonovsky and Alexander D. Breki
Entropy 2019, 21(6), 620; https://doi.org/10.3390/e21060620 - 24 Jun 2019
Cited by 7 | Viewed by 3656
Abstract
Paradoxes of dry friction were discovered by Painlevé in 1895 and caused a controversy on whether the Coulomb–Amontons laws of dry friction are compatible with the Newtonian mechanics of the rigid bodies. Various resolutions of the paradoxes have been suggested including the abandonment of the model of rigid bodies and modifications of the law of friction. For compliant (elastic) bodies, the Painlevé paradoxes may correspond to the friction-induced instabilities. Here we investigate another possibility to resolve the paradoxes: the introduction of the three-value logic. We interpret the three states of a frictional system as either rest-motion-paradox or as rest-stable motion-unstable motion depending on whether a rigid or compliant system is investigated. We further relate the ternary logic approach with the entropic stability criteria for a frictional system and with the study of ultraslow sliding friction (intermediate between the rest and motion or between stick and slip). Full article
(This article belongs to the Special Issue Entropic Methods in Surface Science)

17 pages, 4859 KiB  
Article
Experimental Investigation of a 300 kW Organic Rankine Cycle Unit with Radial Turbine for Low-Grade Waste Heat Recovery
by Ruijie Wang, Guohua Kuang, Lei Zhu, Shucheng Wang and Jingquan Zhao
Entropy 2019, 21(6), 619; https://doi.org/10.3390/e21060619 - 23 Jun 2019
Cited by 8 | Viewed by 4595
Abstract
The performance of a 300 kW organic Rankine cycle (ORC) prototype was experimentally investigated for low-grade waste heat recovery in industry. The prototype employed a specially developed single-stage radial turbine that was integrated with a semi-hermetic three-phase asynchronous generator. R245fa was selected as the working fluid and hot water was adopted to imitate the low-grade waste heat source. Under approximately constant cooling source operating conditions, variations of the ORC performance with diverse operating parameters of the heat source (including temperature and volume flow rate) were evaluated. Results revealed that the gross generating efficiency and electric power output could be improved by using a higher heat source temperature and volume flow rate. In the present experimental research, the maximum electric power output of 301 kW was achieved when the heat source temperature was 121 °C. The corresponding turbine isentropic efficiency and gross generating efficiency were up to 88.6% and 9.4%, respectively. Furthermore, the gross generating efficiency accounted for 40% of the ideal Carnot efficiency. The maximum electric power output yielded the optimum gross generating efficiency. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)
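The "40% of the ideal Carnot efficiency" figure can be checked with a line of thermodynamics. The cooling-source temperature is not given above, so a value near 25 °C is assumed here purely for illustration.

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Ideal Carnot efficiency between two reservoirs given in Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15  # convert to Kelvin
    return 1.0 - t_cold / t_hot

# Heat source at 121 C (from the abstract); cooling source assumed ~25 C.
eta_carnot = carnot_efficiency(121.0, 25.0)      # ~0.24
second_law_ratio = 0.094 / eta_carnot            # gross generating / Carnot
```

With these assumptions the ratio comes out near 0.39, consistent with the roughly 40% stated in the abstract.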

21 pages, 3737 KiB  
Article
Signatures of Quantum Mechanics in Chaotic Systems
by Kevin M. Short and Matthew A. Morena
Entropy 2019, 21(6), 618; https://doi.org/10.3390/e21060618 - 22 Jun 2019
Cited by 6 | Viewed by 3759
Abstract
We examine the quantum-classical correspondence from a classical perspective by discussing the potential for chaotic systems to support behaviors normally associated with quantum mechanical systems. Our main analytical tool is a chaotic system’s set of cupolets, which are highly-accurate stabilizations of its unstable periodic orbits. Our discussion is motivated by the bound or entangled states that we have recently detected between interacting chaotic systems, wherein pairs of cupolets are induced into a state of mutually-sustaining stabilization that can be maintained without external controls. This state is known as chaotic entanglement as it has been shown to exhibit several properties consistent with quantum entanglement. For instance, should the interaction be disturbed, the chaotic entanglement would then be broken. In this paper, we further describe chaotic entanglement and go on to address the capacity for chaotic systems to exhibit other characteristics that are conventionally associated with quantum mechanics, namely analogs to wave function collapse, various entropy definitions, the superposition of states, and the measurement problem. In doing so, we argue that these characteristics need not be regarded exclusively as quantum mechanical. We also discuss several characteristics of quantum systems that are not fully compatible with chaotic entanglement and that make quantum entanglement unique. Full article
(This article belongs to the Special Issue Quantum Chaos and Complexity)

15 pages, 897 KiB  
Article
User-Oriented Summaries Using a PSO Based Scoring Optimization Method
by Augusto Villa-Monte, Laura Lanzarini, Aurelio F. Bariviera and José A. Olivas
Entropy 2019, 21(6), 617; https://doi.org/10.3390/e21060617 - 22 Jun 2019
Cited by 7 | Viewed by 2847
Abstract
Automatic text summarization tools have a great impact on many fields, such as medicine, law, and scientific research in general. As information overload increases, automatic summaries allow handling the growing volume of documents, usually by assigning weights to the extracted phrases based on their significance in the expected summary. Obtaining the main contents of any given document in less time than it would take to do that manually is still an issue of interest. In this article, a new method is presented that allows automatically generating extractive summaries from documents by adequately weighting sentence scoring features using Particle Swarm Optimization. The key feature of the proposed method is the identification of those features that are closest to the criterion used by the individual when summarizing. The proposed method combines a binary representation and a continuous one, using an original variation of the technique developed by the authors of this paper. Our paper shows that using user labeled information in the training set helps to find better metrics and weights. The empirical results yield an improved accuracy compared to previous methods used in this field. Full article
(This article belongs to the Special Issue Unconventional Methods for Particle Swarm Optimization)
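For readers unfamiliar with Particle Swarm Optimization, the canonical continuous update rule that underlies such feature-weight searches is sketched below. This is the textbook algorithm only; the paper's combined binary/continuous variation is not reproduced, and the sphere objective in the usage is a stand-in for a real sentence-scoring fitness function.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over the box [0, 1]^dim with canonical PSO."""
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]           # per-particle best positions
    pval = [f(x) for x in xs]
    gval = min(pval)                     # global best value so far
    gbest = pbest[pval.index(gval)][:]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])   # cognitive pull
                            + c2 * r2 * (gbest[d] - x[d]))     # social pull
                x[d] = min(1.0, max(0.0, x[d] + vs[i][d]))     # clamp to box
            val = f(x)
            if val < pval[i]:
                pbest[i], pval[i] = x[:], val
                if val < gval:
                    gbest, gval = x[:], val
    return gbest
```

In a summarizer, each dimension would be one sentence-scoring feature weight, and f would measure disagreement with user-labeled reference summaries.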

7 pages, 1841 KiB  
Article
Endemics and Cosmopolitans: Application of Statistical Mechanics to the Dry Forests of Mexico
by Michael G. Bowler and Colleen K. Kelly
Entropy 2019, 21(6), 616; https://doi.org/10.3390/e21060616 - 22 Jun 2019
Viewed by 2344
Abstract
Data on the seasonally dry tropical forests of Mexico have been examined in the light of statistical mechanics. The results suggest a division into two classes of species. There are drifting populations of a cosmopolitan class capable of existing in most dry forest sites; these have a statistical distribution previously only observed (globally) for populations of alien species. We infer that a high proportion of species found only at a single site are specialists, endemics, and that these prefer sites comparatively low in species richness. Full article
(This article belongs to the Special Issue Thermodynamics and Population Dynamics)

40 pages, 1870 KiB  
Article
Structural Characteristics of Two-Sender Index Coding
by Chandra Thapa, Lawrence Ong, Sarah J. Johnson and Min Li
Entropy 2019, 21(6), 615; https://doi.org/10.3390/e21060615 - 21 Jun 2019
Cited by 5 | Viewed by 3507
Abstract
This paper studies index coding with two senders. In this setup, source messages are distributed among the senders, possibly with common messages. In addition, there are multiple receivers, with each receiver having some messages a priori, known as side-information, and requesting one unique message such that each message is requested by only one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate normalized codelength, which is expressed as the optimal broadcast rate. In this work, firstly, for a given TSUIC problem, we form three independent sub-problems, each consisting of only a subset of the messages, based on whether the messages are available in only one of the senders or in both senders. Then, we express the optimal broadcast rate of the TSUIC problem as a function of the optimal broadcast rates of those independent sub-problems. In this way, we discover the structural characteristics of TSUIC. For the proofs of our results, we utilize confusion graphs and coding techniques used in single-sender index coding. To adapt the confusion graph technique to TSUIC, we introduce a new graph-coloring approach, different from normal graph coloring, which we call two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. We further determine a class of TSUIC instances where a certain type of side-information can be removed without affecting their optimal broadcast rates. Finally, we generalize the results of a class of TSUIC problems to multiple senders. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)

14 pages, 1441 KiB  
Article
Changed Temporal Structure of Neuromuscular Control, Rather Than Changed Intersegment Coordination, Explains Altered Stabilographic Regularity after a Moderate Perturbation of the Postural Control System
by Felix Wachholz, Tove Kockum, Thomas Haid and Peter Federolf
Entropy 2019, 21(6), 614; https://doi.org/10.3390/e21060614 - 21 Jun 2019
Cited by 10 | Viewed by 4275
Abstract
Sample entropy (SaEn) applied on center-of-pressure (COP) data provides a measure for the regularity of human postural control. Two mechanisms could contribute to altered COP regularity: first, an altered temporal structure (temporal regularity) of postural movements (H1); or second, altered coordination between segment movements (coordinative complexity; H2). The current study used rapid, voluntary head-shaking to perturb the postural control system, thus producing changes in COP regularity, to then assess the two hypotheses. Sixteen healthy participants (age 26.5 ± 3.5; seven females), whose postural movements were tracked via 39 reflective markers, performed trials in which they first stood quietly on a force plate for 30 s, then shook their head for 10 s, finally stood quietly for another 90 s. A principal component analysis (PCA) performed on the kinematic data extracted the main postural movement components. Temporal regularity was determined by calculating SaEn on the time series of these movement components. Coordinative complexity was determined by assessing the relative explained variance of the first five components. H1 was supported, but H2 was not. These results suggest that moderate perturbations of the postural control system produce altered temporal structures of the main postural movement components, but do not necessarily change the coordinative structure of intersegment movements. Full article
(This article belongs to the Section Complexity)
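Since sample entropy (SaEn) is the central regularity measure above, its standard definition is sketched here: SaEn = −ln(A/B), where B counts template matches of length m and A matches of length m+1, within tolerance r and excluding self-matches. This is the generic definition, not the authors' processing pipeline.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series; lower values indicate
    more regular (more predictable) dynamics."""
    if r is None:
        mean = sum(x) / len(x)
        sd = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
        r = 0.2 * sd  # a common default: 20% of the series SD

    def match_count(mm):
        # count template pairs within tolerance r (Chebyshev distance)
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float('inf')
```

A periodic signal (like quiet-stance COP drift) yields a low SaEn, while noise-like fluctuation yields a high one, which is what makes SaEn usable as a regularity index for postural control.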

22 pages, 3032 KiB  
Article
Small Order Patterns in Big Time Series: A Practical Guide
by Christoph Bandt
Entropy 2019, 21(6), 613; https://doi.org/10.3390/e21060613 - 21 Jun 2019
Cited by 27 | Viewed by 4032
Abstract
The study of order patterns of three equally-spaced values x_t, x_{t+d}, x_{t+2d} in a time series is a powerful tool. The lag d is changed over a wide range so that the differences of the frequencies of order patterns become autocorrelation functions. Similar to a spectrogram in speech analysis, four ordinal autocorrelation functions are used to visualize big data series, such as heart and brain activity over many hours. The method applies to real data without preprocessing, and outliers and missing data do not matter. On the theoretical side, we study the properties of order correlation functions and show that the four autocorrelation functions are orthogonal in a certain sense. An analysis of variance of a modified permutation entropy can be performed with four variance components associated with the functions. Full article
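The basic ingredient, the relative frequencies of the six possible orderings of a lagged triple (x_t, x_{t+d}, x_{t+2d}), can be computed in a few lines. This sketch covers only the pattern counting, not the four ordinal autocorrelation functions built from it.

```python
from collections import Counter

def order_pattern_frequencies(x, d):
    """Relative frequencies of the 3! = 6 order patterns of the lagged
    triples (x_t, x_{t+d}, x_{t+2d}), as a function of the lag d."""
    counts = Counter()
    for t in range(len(x) - 2 * d):
        triple = (x[t], x[t + d], x[t + 2 * d])
        # rank pattern: indices of the triple listed from smallest to largest
        counts[tuple(sorted(range(3), key=lambda k: triple[k]))] += 1
    n = sum(counts.values())
    return {p: c / n for p, c in counts.items()}
```

Sweeping d and tracking differences of these frequencies yields the lag-dependent ordinal statistics the abstract describes.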

22 pages, 12183 KiB  
Article
Visual Analysis of Research Paper Collections Using Normalized Relative Compression
by Pere-Pau Vázquez
Entropy 2019, 21(6), 612; https://doi.org/10.3390/e21060612 - 21 Jun 2019
Cited by 1 | Viewed by 3512
Abstract
The analysis of research paper collections is an interesting topic that can give insights on whether a research area is stalled on the same problems or produces a great amount of novelty every year. Previous research has addressed similar tasks through the analysis of keywords or reference lists, with different degrees of human intervention. In this paper, we demonstrate how, with the use of Normalized Relative Compression, together with a set of automated data-processing tasks, we can successfully visually compare research articles and document collections. We also achieve very similar results with Normalized Conditional Compression, which can be applied with a regular compressor. With our approach, we can group papers of different disciplines, analyze how a conference evolves throughout its editions, or how the profile of a researcher changes over time. We provide a set of tests that validate our technique and show that it behaves better for these tasks than other previously proposed techniques. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)
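Because the abstract notes that Normalized Conditional Compression works with a regular compressor, the idea can be sketched with zlib: the conditional compressed size C(x|y) is approximated by C(yx) − C(y). This is an illustrative approximation only, not the measure or compressor used in the paper.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes under a general-purpose compressor."""
    return len(zlib.compress(data, 9))

def ncc(x: bytes, y: bytes) -> float:
    """Normalized Conditional Compression sketch: C(x|y) approximated by
    C(y + x) - C(y), normalized by C(x). Values near 0 mean y nearly
    'explains' x; values near 1 mean the documents share little."""
    return max(0, c(y + x) - c(y)) / c(x)
```

Pairwise NCC values over a paper collection give the dissimilarity matrix that the visual comparisons are built on.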

22 pages, 995 KiB  
Article
An Improved Multi-Source Data Fusion Method Based on the Belief Entropy and Divergence Measure
by Zhe Wang and Fuyuan Xiao
Entropy 2019, 21(6), 611; https://doi.org/10.3390/e21060611 - 20 Jun 2019
Cited by 32 | Viewed by 3694
Abstract
Dempster–Shafer (DS) evidence theory is widely applied in multi-source data fusion technology. However, the classical DS combination rule fails when evidence is highly conflicting. To address this problem, a novel multi-source data fusion method is proposed in this paper. The main steps of the proposed method are as follows. Firstly, the credibility weight of each piece of evidence is obtained by transforming the belief Jensen–Shannon divergence into belief similarities. Next, the belief entropy of each piece of evidence is calculated and the information volume weights of evidence are generated. Then, the credibility weights and information volume weights of evidence are unified to generate the final weight of each piece of evidence, before the weighted average evidence is calculated. Finally, the classical DS combination rule is applied multiple times to the modified evidence to generate the fusion results. A numerical example compares the fusion result of the proposed method with that of other existing combination rules. Further, a practical application in fault diagnosis is presented to illustrate the plausibility and efficiency of the proposed method. The experimental results show that the targeted fault type is recognized most accurately by the proposed method compared with other combination rules. Full article
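The classical Dempster combination rule used in the final fusion step is standard and can be sketched over a small frame of discernment; the weighting stages described above are not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster rule for two mass functions keyed by frozenset
    focal elements: m(A) is proportional to the sum of m1(B)*m2(C) over all
    B, C with B & C == A, normalized by 1 - K (K = total conflicting mass)."""
    fused = {}
    conflict = 0.0
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster rule undefined")
    return {a: v / (1.0 - conflict) for a, v in fused.items()}
```

With highly conflicting inputs K approaches 1 and the normalization explodes, which is exactly the failure mode that motivates pre-weighting the evidence before combining.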

16 pages, 1073 KiB  
Article
Kernel Methods for Nonlinear Connectivity Detection
by Lucas Massaroppe and Luiz A. Baccalá
Entropy 2019, 21(6), 610; https://doi.org/10.3390/e21060610 - 20 Jun 2019
Cited by 2 | Viewed by 2830
Abstract
In this paper, we show that the presence of nonlinear coupling between time series may be detected using kernel feature space F representations, while dispensing with the need to solve the pre-image problem to gauge model adequacy. This is done by showing that the kernelized auto/cross sequences in F can be computed from the model rather than from prediction residuals in the original data space X. Furthermore, this allows reducing the connectivity inference problem to that of fitting a consistent linear model in F that works even in the case of nonlinear interactions in the X-space, which ordinary linear models may fail to capture. We further illustrate that the resulting F-space parameter asymptotics provide reliable means of model diagnostics in this space, and provide straightforward Granger connectivity inference tools even for relatively short time series records, as opposed to other kernel-based methods available in the literature. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)

21 pages, 7357 KiB  
Article
Recognition of Emotional States Using Multiscale Information Analysis of High Frequency EEG Oscillations
by Zhilin Gao, Xingran Cui, Wang Wan and Zhongze Gu
Entropy 2019, 21(6), 609; https://doi.org/10.3390/e21060609 - 20 Jun 2019
Cited by 26 | Viewed by 5279
Abstract
Exploring the manifestation of emotion in electroencephalogram (EEG) signals is helpful for improving the accuracy of emotion recognition. This paper introduces novel features based on multiscale information analysis (MIA) of EEG signals for distinguishing emotional states in four dimensions based on Russell's circumplex model. The algorithms were applied to extract features on the DEAP database, which included a multiscale EEG complexity index in the time domain, and ensemble empirical mode decomposition enhanced energy and fuzzy entropy in the frequency domain. A support vector machine and cross-validation were applied to assess classification accuracy. The classification performance of the MIA methods (accuracy = 62.01%, precision = 62.03%, recall/sensitivity = 60.51%, and specificity = 82.80%) was much higher than that of classical methods (accuracy = 43.98%, precision = 43.81%, recall/sensitivity = 41.86%, and specificity = 70.50%), whose extracted features were based on discrete wavelet transform energy, fractal dimension, and sample entropy. In this study, we found that emotion recognition is more associated with high-frequency oscillations (51–100 Hz) of EEG signals than with low-frequency oscillations (0.3–49 Hz), and that the significance of the frontal and temporal regions is higher than that of other regions. Such information has predictive power and may provide more insights into analyzing the multiscale information of high-frequency oscillations in EEG signals. Full article
17 pages, 7692 KiB  
Article
Detection of Salient Crowd Motion Based on Repulsive Force Network and Direction Entropy
by Xuguang Zhang, Dujun Lin, Juan Zheng, Xianghong Tang, Yinfeng Fang and Hui Yu
Entropy 2019, 21(6), 608; https://doi.org/10.3390/e21060608 - 20 Jun 2019
Cited by 13 | Viewed by 3602
Abstract
This paper proposes a method for salient crowd motion detection based on direction entropy and a repulsive force network. This work focuses on how to effectively detect salient regions in crowd movement through calculating the crowd vector field and constructing the weighted network [...] Read more.
This paper proposes a method for salient crowd motion detection based on direction entropy and a repulsive force network. This work focuses on how to effectively detect salient regions in crowd movement through calculating the crowd vector field and constructing the weighted network using the repulsive force. The interaction force between two particles calculated by the repulsive force formula is used to determine the relationship between these two particles. The network node strength is used as a feature parameter to construct a two-dimensional feature matrix. Furthermore, the entropy of the velocity vector direction is calculated to describe the instability of the crowd movement. Finally, the feature matrix of the repulsive force network and direction entropy are integrated together to detect the salient crowd motion. Experimental results and comparison show that the proposed method can efficiently detect the salient crowd motion. Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
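The direction entropy used here is, in essence, the Shannon entropy of the distribution of velocity directions in a region: coherent flow concentrates directions in a few histogram bins (low entropy), while unstable crowd motion spreads them out (high entropy). A minimal sketch of that idea, with the 8-bin histogram an arbitrary choice rather than the paper's setting:

```python
import numpy as np

def direction_entropy(vx, vy, n_bins=8):
    """Shannon entropy (bits) of the velocity-direction histogram in a region.

    High entropy = disordered motion directions; low entropy = coherent flow.
    """
    angles = np.arctan2(vy, vx)  # directions in [-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
# Mostly rightward flow vs. directionless motion
coherent = direction_entropy(np.ones(500), 0.05 * rng.standard_normal(500))
chaotic = direction_entropy(rng.standard_normal(500), rng.standard_normal(500))
```

In the paper this direction term is fused with the repulsive-force network features; the sketch covers only the entropy half.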
18 pages, 908 KiB  
Article
On the Properties of the Reaction Counts Chemical Master Equation
by Vikram Sunkara
Entropy 2019, 21(6), 607; https://doi.org/10.3390/e21060607 - 19 Jun 2019
Cited by 2 | Viewed by 3238
Abstract
The reaction counts chemical master equation (CME) is a high-dimensional variant of the classical population counts CME. In the reaction counts CME setting, we count the reactions which have fired over time rather than monitoring the population state over time. Since a reaction [...] Read more.
The reaction counts chemical master equation (CME) is a high-dimensional variant of the classical population counts CME. In the reaction counts CME setting, we count the reactions which have fired over time rather than monitoring the population state over time. Since a reaction either fires or not, the reaction counts CME transitions are only forward stepping. Typically there are more reactions in a system than species; this results in the reaction counts CME being higher in dimension, but simpler in dynamics. In this work, we revisit the reaction counts CME framework and its key theoretical results. We then extend the theory by exploiting the reaction counts’ forward-stepping feature, decomposing the state space into independent continuous-time Markov chains (CTMC). We extend the reaction counts CME theory to derive analytical forms and estimates for the CTMC decomposition of the CME. This new theory gives new insights into solving hitting-time, rare-event, and a priori domain construction problems. Full article
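The forward-stepping reaction-count state can be illustrated with a standard Gillespie simulation that records how many times each reaction has fired; the population state is then recovered linearly from the counts. The two-reaction system below is a hypothetical example, not one from the paper:

```python
import numpy as np

def ssa_reaction_counts(x0, stoich, rates, t_end, rng):
    """Gillespie SSA tracking the reaction-count state (n1, n2, ...):
    how many times each reaction has fired. Counts only ever increase
    (forward stepping); the population state is recovered as
    x0 + stoich @ counts."""
    x = np.array(x0, dtype=float)
    counts = np.zeros(len(rates), dtype=int)
    t = 0.0
    while True:
        # mass-action propensities for unimolecular reactions (illustrative)
        a = np.array([k * x[i] for (i, k) in rates])
        a_total = a.sum()
        if a_total <= 0:
            break
        t += rng.exponential(1.0 / a_total)
        if t > t_end:
            break
        j = rng.choice(len(a), p=a / a_total)
        counts[j] += 1                # forward step in reaction-count space
        x = x + stoich[:, j]          # implied population update
    return counts, x

# Example: A -> B (rate k1*A), B -> 0 (rate k2*B)
stoich = np.array([[-1, 0],
                   [ 1, -1]])
rates = [(0, 1.0), (1, 0.5)]  # (species index, rate constant)
rng = np.random.default_rng(2)
counts, x = ssa_reaction_counts([50.0, 0.0], stoich, rates, 10.0, rng)
```

Note the invariant the paper exploits: the count vector only grows, while the population state is a linear image of it.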
15 pages, 3192 KiB  
Article
Application of Second Law Analysis in Heat Exchanger Systems
by Seyed Ali Ashrafizadeh
Entropy 2019, 21(6), 606; https://doi.org/10.3390/e21060606 - 19 Jun 2019
Cited by 8 | Viewed by 3775
Abstract
In recent decades, the second law of thermodynamics has been commonly applied in analyzing heat exchangers. Many researchers believe that the minimization of entropy generation or exergy losses can be considered as an objective function in designing heat exchangers. Some other researchers, however, [...] Read more.
In recent decades, the second law of thermodynamics has been commonly applied in analyzing heat exchangers. Many researchers believe that the minimization of entropy generation or exergy losses can be considered as an objective function in designing heat exchangers. Some other researchers, however, not only reject the entropy generation minimization (EGM) philosophy, but also believe that entropy generation maximization is a real objective function in designing heat exchangers. Using driving forces and irreversibility relations, this study sought to bring these two views closer to each other. Exergy loss relations were developed by sink–source modeling along the heat exchangers. In this case, two types of heat exchangers are introduced, known as “process” and “utility” heat exchangers. In order to propose an appropriate procedure, exergy losses were examined based on variables and degrees of freedom, which differed in each category. The results showed that the “EGM” philosophy could be applied only to utility heat exchangers. A mathematical model was also developed to calculate exergy losses and investigate the effects of various parameters. Moreover, the validity of the model was evaluated against experimental data from a double-pipe heat exchanger. Both the process and utility heat exchangers were simulated during the experiments. After verifying the model, some case studies were conducted. The final results indicated that there was no real minimum point for exergy losses (or entropy generation) with respect to the operational variables. However, a logical minimum point could be found for utility heat exchangers with regard to the constraints. Full article
21 pages, 3522 KiB  
Article
Entropy Measures as Descriptors to Identify Apneas in Rheoencephalographic Signals
by Carmen González, Erik Jensen, Pedro Gambús and Montserrat Vallverdú
Entropy 2019, 21(6), 605; https://doi.org/10.3390/e21060605 - 18 Jun 2019
Cited by 8 | Viewed by 3470
Abstract
Rheoencephalography (REG) is a simple and inexpensive technique that intends to monitor cerebral blood flow (CBF), but its ability to reflect CBF changes has not been extensively proved. Based on the hypothesis that alterations in CBF during apnea should be reflected in REG [...] Read more.
Rheoencephalography (REG) is a simple and inexpensive technique that intends to monitor cerebral blood flow (CBF), but its ability to reflect CBF changes has not been extensively proved. Based on the hypothesis that alterations in CBF during apnea should be reflected in REG signals in the form of increased complexity, several entropy metrics were assessed for REG analysis during apnea and resting periods in 16 healthy subjects: approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy (FuzzyEn), corrected conditional entropy (CCE) and Shannon entropy (SE). To compute these entropy metrics, a set of parameters must be defined a priori, such as the embedding dimension m and the tolerance threshold r. A thorough analysis of the effects of parameter selection in the entropy metrics was performed, looking for the values optimizing differences between apnea and baseline signals. All entropy metrics, except SE, provided higher values for apnea periods (p-values < 0.025). FuzzyEn outperformed all other metrics, providing the lowest p-value (p = 0.0001), allowing us to conclude that REG signals during apnea have higher complexity than in resting periods. Those findings suggest that REG signals reflect CBF changes provoked by apneas, even though further studies are needed to confirm this hypothesis. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)
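Fuzzy entropy, the best-performing metric above, replaces sample entropy's hard tolerance threshold with a graded membership function exp(-(d/r)^n). A minimal sketch, with m, r and the fuzzy exponent n set to common default values rather than the values tuned in the study:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy: like sample entropy, but template similarity is graded
    by the fuzzy membership exp(-(d/tol)^n) instead of a hard threshold."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def phi(m):
        # build the m-length templates and remove each template's baseline
        t = np.array([x[i:i + m] for i in range(len(x) - m)])
        t = t - t.mean(axis=1, keepdims=True)
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        sim = np.exp(-(d / tol) ** n)
        np.fill_diagonal(sim, 0.0)      # exclude self-matches
        return sim.sum() / (len(t) * (len(t) - 1))
    return np.log(phi(m) / phi(m + 1))

rng = np.random.default_rng(3)
fe_noise = fuzzy_entropy(rng.standard_normal(800))
```

The baseline removal makes FuzzyEn less sensitive to local drifts than SampEn, one reason it tends to behave well on physiological signals.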
17 pages, 7213 KiB  
Article
Turbine Passage Design Methodology to Minimize Entropy Production—A Two-Step Optimization Strategy
by Paht Juangphanich, Cis De Maesschalck and Guillermo Paniagua
Entropy 2019, 21(6), 604; https://doi.org/10.3390/e21060604 - 18 Jun 2019
Cited by 9 | Viewed by 3912
Abstract
Rapid aerodynamic design and optimization is essential for the development of future turbomachinery. The objective of this work is to demonstrate a methodology from 1D mean-line-design to a full 3D aerodynamic optimization of the turbine stage using a parameterization strategy that requires few [...] Read more.
Rapid aerodynamic design and optimization is essential for the development of future turbomachinery. The objective of this work is to demonstrate a methodology from 1D mean-line-design to a full 3D aerodynamic optimization of the turbine stage using a parameterization strategy that requires few parameters. The methodology is tested by designing a highly loaded and efficient turbine for the Purdue Experimental Turbine Aerothermal Laboratory. This manuscript describes the entire design process including the 2D/3D parameterization strategy in detail. The objective of the design is to maximize the entropy definition of efficiency while simultaneously maximizing the stage loading. Optimal design trends are highlighted for both the stator and rotor for several turbine characteristics in terms of pitch-to-chord ratio as well as the blades’ metal and stagger angles. Additionally, a correction term is proposed for the Horlock efficiency equation to maximize the accuracy based on the measured blade kinetic losses. Finally, the design and performance of optimal profiles along the Pareto front are summarized, featuring the highest aerodynamic performance and stage loading. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics III)
24 pages, 6063 KiB  
Article
Machine Learning Techniques to Identify Antimicrobial Resistance in the Intensive Care Unit
by Sergio Martínez-Agüero, Inmaculada Mora-Jiménez, Jon Lérida-García, Joaquín Álvarez-Rodríguez and Cristina Soguero-Ruiz
Entropy 2019, 21(6), 603; https://doi.org/10.3390/e21060603 - 18 Jun 2019
Cited by 29 | Viewed by 7109
Abstract
The presence of bacteria with resistance to specific antibiotics is one of the greatest threats to the global health system. According to the World Health Organization, antimicrobial resistance has already reached alarming levels in many parts of the world, involving a social and [...] Read more.
The presence of bacteria with resistance to specific antibiotics is one of the greatest threats to the global health system. According to the World Health Organization, antimicrobial resistance has already reached alarming levels in many parts of the world, involving a social and economic burden for the patient, for the system, and for society in general. Because of the critical health status of patients in the intensive care unit (ICU), time is critical to identify bacteria and their resistance to antibiotics. Since common antibiotic resistance tests require between 24 and 48 h after the culture is collected, we propose to apply machine learning (ML) techniques to determine whether a bacterium will be resistant to different families of antimicrobials. For this purpose, clinical and demographic features from the patient, as well as data from cultures and antibiograms, are considered. From a population point of view, we also show graphically the relationship between different bacteria and families of antimicrobials by performing correspondence analysis. Results of the ML techniques evidence non-linear relationships helping to identify antimicrobial resistance at the ICU, with performance dependent on the family of antimicrobials. A change in the trend of antimicrobial resistance is also evidenced. Full article
23 pages, 1570 KiB  
Article
Competitive Particle Swarm Optimization for Multi-Category Text Feature Selection
by Jaesung Lee, Jaegyun Park, Hae-Cheon Kim and Dae-Won Kim
Entropy 2019, 21(6), 602; https://doi.org/10.3390/e21060602 - 18 Jun 2019
Cited by 7 | Viewed by 3773
Abstract
Multi-label feature selection is an important task for text categorization. This is because it enables learning algorithms to focus on essential features that foreshadow relevant categories, thereby improving the accuracy of text categorization. Recent studies have considered the hybridization of evolutionary feature wrappers [...] Read more.
Multi-label feature selection is an important task for text categorization. This is because it enables learning algorithms to focus on essential features that foreshadow relevant categories, thereby improving the accuracy of text categorization. Recent studies have considered the hybridization of evolutionary feature wrappers and filters to enhance the evolutionary search process. However, the relative effectiveness of the feature subset searches of the evolutionary and feature filter operators has not been considered. This can result in degenerate final feature subsets. In this paper, we propose a novel hybridization approach based on competition between the operators. This enables the proposed algorithm to apply each operator selectively and modify the feature subset according to its relative effectiveness, unlike conventional methods. The experimental results on 16 text datasets verify that the proposed method is superior to conventional methods. Full article
(This article belongs to the Special Issue Unconventional Methods for Particle Swarm Optimization)
10 pages, 965 KiB  
Article
On the Information Content of Coarse Data with Respect to the Particle Size Distribution of Complex Granular Media: Rationale Approach and Testing
by Carlos García-Gutiérrez, Miguel Ángel Martín and Yakov Pachepsky
Entropy 2019, 21(6), 601; https://doi.org/10.3390/e21060601 - 17 Jun 2019
Cited by 4 | Viewed by 2564
Abstract
The particle size distribution (PSD) of complex granular media is seen as a mathematical measure supported in the interval of grain sizes. A physical property characterizing granular products used in the Andreasen and Andersen model of 1930 is re-interpreted in Information Entropy terms [...] Read more.
The particle size distribution (PSD) of complex granular media is seen as a mathematical measure supported in the interval of grain sizes. A physical property characterizing granular products used in the Andreasen and Andersen model of 1930 is re-interpreted in information entropy terms, leading to a differential information equation as a conceptual approach for the PSD. Under this approach, measured data which give a coarse description of the distribution may be seen as initial conditions for the proposed equation. A solution of the equation agrees with a self-similar measure directly postulated as a PSD model by Martín and Taguas almost 80 years later, thus both models appear to be linked. A variant of this last model, together with detailed soil PSD data of 70 soils, are used to study the information content of limited experimental data formed by triplets and its ability to reconstruct the PSD. Results indicate that the information contained in certain soil triplets is sufficient to rebuild the whole PSD: for each soil sample tested there is always at least a triplet that contains enough information to simulate the whole distribution. Full article
10 pages, 690 KiB  
Article
Investigating the Randomness of Passengers’ Seating Behavior in Suburban Trains
by Jakob Schöttl, Michael J. Seitz and Gerta Köster
Entropy 2019, 21(6), 600; https://doi.org/10.3390/e21060600 - 17 Jun 2019
Cited by 5 | Viewed by 3150
Abstract
In pedestrian dynamics, individual-based models serve to simulate the behavior of crowds so that evacuation times and crowd densities can be estimated or the efficiency of public transportation optimized. Often, train systems are investigated where seat choice may have a great impact on [...] Read more.
In pedestrian dynamics, individual-based models serve to simulate the behavior of crowds so that evacuation times and crowd densities can be estimated or the efficiency of public transportation optimized. Often, train systems are investigated where seat choice may have a great impact on capacity utilization, especially when passengers get in each other’s way. Therefore, it is useful to reproduce passengers’ behavior inside trains. However, there is surprisingly little research on the subject. Do passengers distribute evenly as it is most often assumed in simulation models and as one would expect from a system that obeys the laws of thermodynamics? Conversely, is there a higher degree of order? To answer these questions, we collect data on seating behavior in Munich’s suburban trains and analyze it. Clear preferences are revealed that contradict the former assumption of a uniform distribution. We subsequently introduce a model that matches the probability distributions we observed. We demonstrate the applicability of our model and present a qualitative validation with a simulation example. The model’s implementation is part of the free and open-source Vadere simulation framework for pedestrian dynamics and thus available for further studies. The model can be used as one component in larger systems for the simulation of public transport. Full article
(This article belongs to the Special Issue Entropy and Information in Networks, from Societies to Cities)
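The core modeling idea — seat choice governed by a non-uniform preference distribution over the remaining free seats — can be sketched as follows. The weights here are invented for illustration and stand in for the empirically fitted distribution; they are not the Munich data:

```python
import numpy as np

def choose_seat(weights, occupied, rng):
    """Pick a seat index with probability proportional to its preference
    weight, restricted to free seats. `weights` are hypothetical preference
    scores (e.g. window seats scored higher than aisle seats)."""
    w = np.array(weights, dtype=float)
    w[list(occupied)] = 0.0           # occupied seats cannot be chosen
    if w.sum() == 0:
        return None                   # no free seat with positive weight
    return int(rng.choice(len(w), p=w / w.sum()))

rng = np.random.default_rng(4)
weights = [4.0, 1.0, 1.0, 4.0]        # e.g. the two window seats preferred
occupied = set()
for _ in range(4):                    # four boarding passengers
    occupied.add(choose_seat(weights, occupied, rng))
```

A uniform `weights` vector recovers the even-distribution assumption the paper argues against; the skewed vector reproduces clustered preferences.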
13 pages, 530 KiB  
Article
On the Logic of a Prior Based Statistical Mechanics of Polydisperse Systems: The Case of Binary Mixtures
by Fabien Paillusson
Entropy 2019, 21(6), 599; https://doi.org/10.3390/e21060599 - 16 Jun 2019
Cited by 1 | Viewed by 3143
Abstract
Most undergraduate students who have followed a thermodynamics course would have been asked to evaluate the volume occupied by one mole of air under standard conditions of pressure and temperature. However, what is this task exactly referring to? If air is to be [...] Read more.
Most undergraduate students who have followed a thermodynamics course would have been asked to evaluate the volume occupied by one mole of air under standard conditions of pressure and temperature. However, what is this task exactly referring to? If air is to be regarded as a mixture, under what circumstances can this mixture be considered as comprising only one component called “air” in classical statistical mechanics? Furthermore, following the paradigmatic Gibbs’ mixing thought experiment, if one mixes air from a container with air from another container, all other things being equal, should there be a change in entropy? The present paper addresses these questions by developing a prior-based statistical mechanics framework to characterise binary mixtures’ composition realisations and their effect on thermodynamic free energies and entropies. It is found that (a) there exist circumstances for which an ideal binary mixture is thermodynamically equivalent to a single component ideal gas and (b) even when mixing two substances identical in their underlying composition, entropy increase does occur for finite size systems. The nature of the contributions to this increase is then discussed. Full article
(This article belongs to the Special Issue Applications of Statistical Thermodynamics)
12 pages, 2002 KiB  
Article
Enhanced Superdense Coding over Correlated Amplitude Damping Channel
by Yan-Ling Li, Dong-Mei Wei, Chuan-Jin Zu and Xing Xiao
Entropy 2019, 21(6), 598; https://doi.org/10.3390/e21060598 - 16 Jun 2019
Cited by 5 | Viewed by 2521
Abstract
Quantum channels with correlated effects are realistic scenarios for the study of noisy quantum communication when the channels are consecutively used. In this paper, superdense coding is reexamined under a correlated amplitude damping (CAD) channel. Two techniques named as weak measurement and environment-assisted [...] Read more.
Quantum channels with correlated effects are realistic scenarios for the study of noisy quantum communication when the channels are consecutively used. In this paper, superdense coding is reexamined under a correlated amplitude damping (CAD) channel. Two techniques, known as weak measurement and environment-assisted measurement, are utilized to enhance the capacity of superdense coding. The results show that both enable us to combat CAD decoherence and improve the capacity with a certain probability. Remarkably, the scheme of environment-assisted measurement always outperforms the scheme of weak measurement in both the capacity improvement and the success probability. These notable superiorities could be attributed to the fact that environment-assisted measurement can extract additional information from the environment and thus performs much better. Full article
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)
21 pages, 1455 KiB  
Article
Analytical Solutions of Fractional-Order Heat and Wave Equations by the Natural Transform Decomposition Method
by Hassan Khan, Rasool Shah, Poom Kumam and Muhammad Arif
Entropy 2019, 21(6), 597; https://doi.org/10.3390/e21060597 - 16 Jun 2019
Cited by 50 | Viewed by 4385
Abstract
In the present article, fractional-order heat and wave equations are solved by using the natural transform decomposition method. The series form solutions are obtained for fractional-order heat and wave equations, using the proposed method. Some numerical examples are presented to understand the procedure [...] Read more.
In the present article, fractional-order heat and wave equations are solved by using the natural transform decomposition method. Series form solutions are obtained for fractional-order heat and wave equations using the proposed method. Some numerical examples are presented to illustrate the procedure of the natural transform decomposition method. The method requires a smaller volume of calculations, has a high rate of convergence, and can be easily applied to other nonlinear problems. Therefore, the natural transform decomposition method is considered to be one of the best analytical techniques for solving fractional-order linear and nonlinear partial differential equations, particularly the fractional-order heat and wave equations. Full article
15 pages, 887 KiB  
Article
A Maximum Entropy Procedure to Solve Likelihood Equations
by Antonio Calcagnì, Livio Finos, Gianmarco Altoé and Massimiliano Pastore
Entropy 2019, 21(6), 596; https://doi.org/10.3390/e21060596 - 15 Jun 2019
Cited by 3 | Viewed by 3655
Abstract
In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require equating the score function of the maximum likelihood problem at zero, we propose an alternative strategy [...] Read more.
In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require setting the score function of the maximum likelihood problem to zero, we propose an alternative strategy where the score is instead used as an external informative constraint on the maximization of the convex Shannon entropy function. The problem involves the reparameterization of the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation where parameters are searched in a smaller (hyper)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggested that the maximum entropy reformulation of the score problem solves the likelihood equation problem. Similarly, when maximum likelihood estimation is difficult, as in the case of logistic regression under separation, the maximum entropy proposal achieved results (numerically) comparable to those obtained by Firth’s bias-corrected approach. Overall, these first findings reveal that a maximum entropy solution can be considered as an alternative technique for solving the likelihood equations. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
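The general recipe — maximize Shannon entropy subject to a moment-style external constraint rather than solving the estimating equation directly — can be illustrated on a toy problem. This sketch uses scipy's SLSQP solver and an arbitrary support z; it is a schematic analogue, not the paper's reparameterization of the score:

```python
import numpy as np
from scipy.optimize import minimize

def maxent_with_mean(z, target_mean):
    """Maximize Shannon entropy of p over support z, subject to sum(p) = 1
    and E_p[z] = target_mean, treating the moment (score-like) condition as
    an external constraint on the entropy maximization."""
    k = len(z)
    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)     # guard log(0)
        return np.sum(p * np.log(p))
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq", "fun": lambda p: p @ z - target_mean}]
    res = minimize(neg_entropy, np.full(k, 1.0 / k), bounds=[(0, 1)] * k,
                   constraints=cons, method="SLSQP")
    return res.x

z = np.linspace(-1.0, 1.0, 5)   # hypothetical discrete support for a parameter
p = maxent_with_mean(z, 0.3)    # estimated probabilities on the simplex
```

As in the abstract, the search happens on a simplex of probabilities; the parameter estimate is then the implied expected value p @ z.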
10 pages, 291 KiB  
Article
Confidential Cooperative Communication with the Trust Degree of Jammer
by Mingxiong Zhao, Di Liu, Hui Gao and Wei Feng
Entropy 2019, 21(6), 595; https://doi.org/10.3390/e21060595 - 15 Jun 2019
Viewed by 2607
Abstract
In this paper, we consider the trust degree of a jammer, defined as the probability that the jammer cooperates to secure the legitimate transmission, and investigate its influence on confidential cooperative communication. According to the trust degree, we derive the closed-form optimal transmit [...] Read more.
In this paper, we consider the trust degree of a jammer, defined as the probability that the jammer cooperates to secure the legitimate transmission, and investigate its influence on confidential cooperative communication. According to the trust degree, we derive the closed-form optimal transmit signal-to-noise ratio (SNR) of the confidential message, ρ_c, to maximize the expected secrecy rate, and further obtain the relationship between ρ_c and the trust degree associated with the transmit SNR at the transmit user and channel gains. Simulation results demonstrate that the trust degree has a great effect on the transmit SNR of the confidential message and helps improve the performance of confidential cooperation in terms of the expected secrecy rate. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)
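The expected secrecy rate under a partially trusted jammer is a probability-weighted mix of the jammed and unjammed secrecy rates. The toy model below illustrates only this weighting; the SNR expressions and channel gains are invented for illustration and are not the paper's closed-form solution:

```python
import numpy as np

def expected_secrecy_rate(rho_c, trust, g_legit, g_eave, g_jam_eave):
    """Toy expected secrecy rate given a jammer with trust degree `trust`
    (probability the jammer cooperates). With cooperation, the eavesdropper's
    SNR is degraded by the jamming power; without it, it is not."""
    def secrecy(snr_e):
        r = np.log2(1 + rho_c * g_legit) - np.log2(1 + snr_e)
        return max(r, 0.0)              # secrecy rate is non-negative
    snr_e_jammed = rho_c * g_eave / (1.0 + g_jam_eave)  # jammer degrades Eve
    snr_e_plain = rho_c * g_eave
    return trust * secrecy(snr_e_jammed) + (1 - trust) * secrecy(snr_e_plain)

# A more trusted jammer should never reduce the expected secrecy rate
rates = [expected_secrecy_rate(10.0, t, 1.0, 0.5, 4.0) for t in (0.0, 0.5, 1.0)]
```

With the jammed rate above the unjammed one, the expectation is monotone in the trust degree, which matches the qualitative conclusion of the abstract.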