Entropy, Volume 24, Issue 8 (August 2022) – 161 articles

Cover Story (view full-size image): In the article by Sabeti et al., a new information theoretic anomaly detector for time series is introduced. The method is based on detecting changes in the compressibility of a test segment of the time series, as measured by the difference between the complexities of a typical encoder and a universal encoder. The typical and universal encoders are respectively implemented with a tree-structured pattern dictionary, trained on an earlier segment of the time series, and a Lempel–Ziv encoder. The anomaly detector is illustrated for a chaotic time series with model shift and for early detection of anomalous heart rates and skin temperatures of patients after exposure to a respiratory virus. View this paper
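A loose illustration of the compressibility-difference idea behind the cover article is sketched below. Here zlib stands in for both the tree-structured pattern dictionary (typical encoder) and the Lempel–Ziv encoder (universal encoder), so the scores only hint at the mechanism and are not the authors' detector.

```python
# Rough sketch of a compression-based anomaly score in the spirit of the cover
# article; zlib is a stand-in compressor, so the numbers are illustrative only.
import zlib
import numpy as np

def to_bytes(x, bins):
    """Map a real-valued series to a byte string so it can be compressed."""
    return bytes(np.digitize(x, bins).astype(np.uint8))

def anomaly_score(train_segment, test_segment, levels=16):
    """Codelength of the test segment given the training segment (proxy for the
    'typical' encoder) minus its stand-alone codelength (proxy for the
    'universal' encoder). Larger scores suggest the test segment no longer
    matches the patterns learned from the training data."""
    bins = np.linspace(min(train_segment), max(train_segment), levels + 1)[1:-1]
    a = to_bytes(np.asarray(train_segment, dtype=float), bins)
    b = to_bytes(np.asarray(test_segment, dtype=float), bins)
    conditional = len(zlib.compress(a + b)) - len(zlib.compress(a))
    universal = len(zlib.compress(b))
    return conditional - universal

rng = np.random.default_rng(0)
normal = np.sin(np.linspace(0, 40, 2000)) + 0.05 * rng.normal(size=2000)
shifted = rng.normal(size=200)            # test segment from a different model
print(anomaly_score(normal[:1800], normal[1800:]))  # low score expected
print(anomaly_score(normal[:1800], shifted))        # higher score expected
```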
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
8 pages, 249 KiB  
Article
Entropy of the Universe and Hierarchical Dark Matter
by Paul H. Frampton
Entropy 2022, 24(8), 1171; https://doi.org/10.3390/e24081171 - 22 Aug 2022
Cited by 2 | Viewed by 1371
Abstract
We discuss the relationship between dark matter and the entropy of the universe, with the premise that dark matter exists in the form of primordial black holes (PBHs) in a hierarchy of mass tiers. The lightest tier includes all PBHs with masses below one hundred solar masses. The second-lightest tier comprises intermediate-mass PIMBHs within galaxies, including the Milky Way. Supermassive black holes at galactic centres are in the third tier. We are led to speculate that there exists a fourth tier of extremely massive PBHs, more massive than entire galaxies. We discuss future observations by the Rubin Observatory and the James Webb Space Telescope. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
16 pages, 349 KiB  
Article
CRC-Aided Adaptive BP Decoding of PAC Codes
by Xianwen Zhang, Ming Jiang, Mingyang Zhu, Kailin Liu and Chunming Zhao
Entropy 2022, 24(8), 1170; https://doi.org/10.3390/e24081170 - 22 Aug 2022
Cited by 2 | Viewed by 2139
Abstract
Although long polar codes with successive cancellation decoding can asymptotically achieve channel capacity, the performance of short blocklength polar codes is far from optimal. Recently, Arıkan proposed employing a convolutional pre-transformation before the polarization network, called polarization-adjusted convolutional (PAC) codes. In this paper, we focus on improving the performance of short PAC codes concatenated with a cyclic redundancy check (CRC) outer code, CRC-PAC codes, since error detection capability is essential in practical applications, such as the polar coding scheme for the control channel. We propose an enhanced adaptive belief propagation (ABP) decoding algorithm with the assistance of CRC bits for PAC codes. We also derive joint parity-check matrices of CRC-PAC codes suitable for iterative BP decoding. The proposed CRC-aided ABP (CA-ABP) decoding can effectively improve error performance when partial CRC bits are used in the decoding. Meanwhile, the error detection ability can still be guaranteed by the remaining CRC bits and adaptive decoding parameters. Moreover, compared with the conventional CRC-aided list (CA-List) decoding, our proposed scheme can significantly reduce computational complexity, to achieve a better trade-off between the performance and complexity for short PAC codes. Full article
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications)
19 pages, 1971 KiB  
Article
Optimal Maneuvering for Autonomous Vehicle Self-Localization
by John L. McGuire, Yee Wei Law, Kutluyıl Doğançay, Sook-Ying Ho and Javaan Chahl
Entropy 2022, 24(8), 1169; https://doi.org/10.3390/e24081169 - 22 Aug 2022
Cited by 3 | Viewed by 1614
Abstract
We consider the problem of optimal maneuvering, where an autonomous vehicle, an unmanned aerial vehicle (UAV) for example, must maneuver to maximize or minimize an objective function. We consider a vehicle navigating in a Global Navigation Satellite System (GNSS)-denied environment that self-localizes in two dimensions using angle-of-arrival (AOA) measurements from stationary beacons at known locations. The objective of the vehicle is to travel along the path that minimizes its position and heading estimation error. This article presents an informative path planning (IPP) algorithm that (i) uses the determinant of the self-localization estimation error covariance matrix of an unscented Kalman filter as the objective function and (ii) applies an l-step look-ahead (LSLA) algorithm to determine the optimal heading for a constant-speed vehicle. The novel algorithm takes into account the kinematic constraints of the vehicle and the AOA measurement model. We evaluate the performance of the algorithm in five scenarios involving stationary and mobile beacons, and we find that the estimation error approaches the lower bound for the estimator. The simulations show the vehicle maneuvers to locations that allow for minimum estimation uncertainty, even when beacon placement is not conducive to accurate estimation. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Their Applications)
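A minimal sketch of a greedy (one-step) version of the look-ahead idea described in this abstract follows; `predict_covariance` is a hypothetical placeholder for the UKF prediction machinery, and the beacon geometry is made up.

```python
# One-step look-ahead heading selection, assuming a user-supplied
# predict_covariance(state, heading) returning the error covariance the vehicle
# would have after moving one step with that heading (placeholder for the UKF).
import numpy as np

def choose_heading(state, predict_covariance, n_candidates=36):
    """Return the heading minimizing det(P), the D-optimality criterion used as
    the objective function in the article."""
    headings = np.linspace(0.0, 2.0 * np.pi, n_candidates, endpoint=False)
    dets = [np.linalg.det(predict_covariance(state, h)) for h in headings]
    return headings[int(np.argmin(dets))]

# Toy placeholder: uncertainty shrinks the closer the vehicle gets to a beacon.
beacon = np.array([100.0, 0.0])
def toy_predict_covariance(state, heading, speed=5.0):
    nxt = state + speed * np.array([np.cos(heading), np.sin(heading)])
    dist = np.linalg.norm(nxt - beacon)
    return np.diag([dist, dist, 0.1])   # position (x, y) and heading variance

print(choose_heading(np.zeros(2), toy_predict_covariance))  # ~0 rad, toward the beacon
```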
24 pages, 4234 KiB  
Article
Detection of Static and Mobile Targets by an Autonomous Agent with Deep Q-Learning Abilities
by Barouch Matzliach, Irad Ben-Gal and Evgeny Kagan
Entropy 2022, 24(8), 1168; https://doi.org/10.3390/e24081168 - 22 Aug 2022
Cited by 4 | Viewed by 2091
Abstract
This paper addresses the problem of detecting multiple static and mobile targets by an autonomous mobile agent acting under uncertainty. It is assumed that the agent is able to detect targets at different distances and that the detection includes errors of the first and second types. The goal of the agent is to plan and follow a trajectory that results in the detection of the targets in a minimal time. The suggested solution implements the approach of deep Q-learning applied to maximize the cumulative information gain regarding the targets’ locations and minimize the trajectory length on the map with a predefined detection probability. The Q-learning process is based on a neural network that receives the agent location and current probability map and results in the preferred move of the agent. The presented procedure is compared with the previously developed techniques of sequential decision making, and it is demonstrated that the suggested novel algorithm strongly outperforms the existing methods. Full article
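The following is a simplified, hypothetical sketch of a Q-network of the kind described above (inputs: agent location and current probability map; outputs: Q-values of candidate moves). Grid size, layer widths, and the epsilon-greedy policy are assumptions, not the authors' settings.

```python
# Simplified sketch of a Q-network for target detection: it scores the four
# candidate moves from the agent's location plus the target-probability map.
import torch
import torch.nn as nn

GRID = 10  # assumed map resolution

class DetectionQNet(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(GRID * GRID + 2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, prob_map, agent_xy):
        x = torch.cat([prob_map.flatten(1), agent_xy], dim=1)
        return self.net(x)

def epsilon_greedy(qnet, prob_map, agent_xy, eps=0.1):
    if torch.rand(1).item() < eps:
        return torch.randint(0, 4, (1,)).item()
    with torch.no_grad():
        return qnet(prob_map, agent_xy).argmax(dim=1).item()

qnet = DetectionQNet()
prob_map = torch.full((1, GRID, GRID), 1.0 / (GRID * GRID))  # uniform prior
agent_xy = torch.tensor([[0.0, 0.0]])
print(epsilon_greedy(qnet, prob_map, agent_xy))  # index of the preferred move
```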
12 pages, 387 KiB  
Article
Non-Reconciled Physical-Layer Keys-Assisted Secure Communication Scheme Based on Channel Correlation
by Meng Wang, Kaizhi Huang, Zheng Wan, Xiaoli Sun, Liang Jin and Kai Zhao
Entropy 2022, 24(8), 1167; https://doi.org/10.3390/e24081167 - 22 Aug 2022
Cited by 1 | Viewed by 1108
Abstract
Physical-layer key generation technology requires information reconciliation to correct channel estimation errors between two legitimate users. However, sending the reconciliation signals over the public channel increases the communication overhead and the risk of information leakage. Aiming at the problem, integrated secure communication schemes using non-reconciled keys have attracted extensive attention. These schemes exploit channel coding to correct both inconsistent keys and transmission error bits. Meanwhile, more redundant code bits must be added to correct errors, which results in a lower secure transmission rate. To address the problem, we analyze the merit of channel correlation between non-reconciled key generation and secure transmission. Inspired by this, we propose a non-reconciled physical-layer keys-assisted secure communication scheme based on channel correlation. First of all, the signal frame is designed to make use of channel correlation between non-reconciled key generation and secure transmission. Based on the channel correlation, non-reconciled keys are then generated from the wireless channel to encrypt transmitted data. Moreover, an adaptive coding algorithm based on the equivalent channel is presented to encode the data bits before encryption, to guarantee reliable transmission. Finally, theoretical analysis and simulations demonstrate the significant performance of the proposed scheme in terms of low bit error ratio and high secure transmission rate. Full article
12 pages, 1613 KiB  
Article
Nonlinear Statistical Analysis of Normal and Pathological Infant Cry Signals in Cepstrum Domain by Multifractal Wavelet Leaders
by Salim Lahmiri, Chakib Tadj and Christian Gargour
Entropy 2022, 24(8), 1166; https://doi.org/10.3390/e24081166 - 22 Aug 2022
Cited by 8 | Viewed by 1232
Abstract
Multifractal behavior in the cepstrum representation of healthy and unhealthy infant cry signals is examined by means of wavelet leaders and compared using the Student t-test. The empirical results show that both expiration and inspiration signals exhibit clear evidence of multifractal properties under healthy and unhealthy conditions. In addition, expiration and inspiration signals exhibit more complexity under healthy conditions than under unhealthy conditions. Furthermore, distributions of multifractal characteristics are different across healthy and unhealthy conditions. Hence, this study improves the understanding of infant crying by providing a complete description of its intrinsic dynamics to better evaluate its health status. Full article
(This article belongs to the Section Entropy and Biology)
24 pages, 4891 KiB  
Article
Frequency Stability Prediction of Power Systems Using Vision Transformer and Copula Entropy
by Peili Liu, Song Han, Na Rong and Junqiu Fan
Entropy 2022, 24(8), 1165; https://doi.org/10.3390/e24081165 - 21 Aug 2022
Cited by 3 | Viewed by 2022
Abstract
This paper addresses the problem of frequency stability prediction (FSP) following active power disturbances in power systems by proposing a vision transformer (ViT) method that predicts frequency stability in real time. The core idea of the FSP approach employing the ViT is to use the time-series data of power system operations as ViT inputs to perform FSP accurately and quickly so that operators can decide frequency control actions, minimizing the losses caused by incidents. Additionally, due to the high-dimensional and redundant input data of the power system and the O(N²) computational complexity of the transformer, feature selection based on copula entropy (CE) is used to construct image-like data with fixed dimensions from power system operation data and remove redundant information. Moreover, no previous FSP study has taken safety margins into consideration, which may threaten the secure operation of power systems. Therefore, a frequency security index (FSI) is used to form the sample labels, which are categorized as “insecurity”, “relative security”, and “absolute security”. Finally, various case studies are carried out on a modified New England 39-bus system and a modified ACTIVSg500 system for projected 0% to 40% nonsynchronous system penetration levels. The simulation results demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on normal, noisy, and incomplete datasets in comparison with eight machine-learning methods. Full article
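A small sketch of copula-entropy-based feature screening of the kind mentioned in the abstract is given below; the rank transform yields the empirical copula, and scikit-learn's kNN mutual information estimator is used here as a convenient stand-in for the estimator of the paper.

```python
# Copula entropy of (feature, label) equals the negative mutual information of
# their rank-transformed (empirical copula) versions, so features can be ranked
# by the estimated MI on ranks.
import numpy as np
from scipy.stats import rankdata
from sklearn.feature_selection import mutual_info_regression

def copula_entropy_scores(X, y):
    """Return -CE(feature_i; y) for each feature; larger means more relevant."""
    n = len(y)
    U = np.column_stack([rankdata(X[:, j]) / n for j in range(X.shape[1])])
    v = rankdata(y) / n
    return mutual_info_regression(U, v)    # MI on the empirical copula

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] ** 2 + 0.1 * rng.normal(size=500)
scores = copula_entropy_scores(X, y)
print(np.argsort(scores)[::-1])  # informative features 0 and 3 should rank first
```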
25 pages, 1145 KiB  
Article
Multi-Source Information Fusion Based on Negation of Reconstructed Basic Probability Assignment with Padded Gaussian Distribution and Belief Entropy
by Yujie Chen, Zexi Hua, Yongchuan Tang and Baoxin Li
Entropy 2022, 24(8), 1164; https://doi.org/10.3390/e24081164 - 21 Aug 2022
Viewed by 1623
Abstract
Multi-source information fusion is widely used because of its similarity to practical engineering situations. With the development of science and technology, the sources of information collected under engineering projects and scientific research are more diverse. To extract helpful information from multi-source information, in this paper, we propose a multi-source information fusion method based on the Dempster-Shafer (DS) evidence theory with the negation of reconstructed basic probability assignments (nrBPA). To determine the initial basic probability assignment (BPA), Gaussian distribution BPA functions with padding terms are used. After that, nrBPAs are determined by two processes: reassigning the BPAs with a high blur degree and transforming them into the form of negation. In addition, evidence for preliminary fusion is obtained using the entropy weight method based on the improved belief entropy of nrBPAs. The final fusion results are calculated from the preliminarily fused evidence through Dempster’s combination rule. In the experimental section, the UCI iris data set and the wine data set are used for validating the arithmetic processes of the proposed method. In the comparative analysis, the effectiveness of the BPA determination using a padded Gaussian function is verified by discussing the classification task with the iris data set. Subsequently, the comparison with other methods using cross-validation proves that the proposed method is robust. Notably, the classification accuracy on the iris data set using the proposed method can reach 97.04%, which is higher than that of many other methods. Full article
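For reference, a minimal implementation of Dempster's combination rule, the final fusion step named in the abstract, is sketched below; the example mass functions are invented.

```python
# Dempster's combination rule. Mass functions are dicts mapping frozensets of
# hypotheses to basic probability assignments; the BPAs below are made up.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B, C = "setosa", "versicolor", "virginica"
m1 = {frozenset({A}): 0.6, frozenset({A, B}): 0.3, frozenset({A, B, C}): 0.1}
m2 = {frozenset({A}): 0.5, frozenset({B}): 0.3, frozenset({A, B, C}): 0.2}
print(dempster_combine(m1, m2))
```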
12 pages, 626 KiB  
Article
H-Theorem in an Isolated Quantum Harmonic Oscillator
by Che-Hsiu Hsueh, Chi-Ho Cheng, Tzyy-Leng Horng and Wen-Chin Wu
Entropy 2022, 24(8), 1163; https://doi.org/10.3390/e24081163 - 20 Aug 2022
Viewed by 1794
Abstract
We consider the H-theorem in an isolated quantum harmonic oscillator through the time-dependent Schrödinger equation. The effect of the potential in producing entropy is investigated in detail, and we find that including a barrier potential in a harmonic trap leads to thermalization of the system, while a harmonic trap alone does not thermalize the system. During thermalization, the Shannon entropy increases, which shows that a microscopic quantum system still obeys the macroscopic laws of thermodynamics. Meanwhile, the initial coherent mechanical energy transforms into incoherent thermal energy during thermalization, exhibiting the decoherence of an oscillating wave packet, marked by a large decrease in the autocorrelation length. When reaching thermal equilibrium, the wave packet comes to a halt, with the density distributions in both position and momentum spaces well fitted by a microcanonical ensemble of statistical mechanics. Full article
(This article belongs to the Special Issue Ultracold Gases and Thermodynamics)
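A tiny sketch of the entropy diagnostic used in this article follows: the Shannon entropy of the position-space density of a discretized wave packet. The Gaussian packet and grid below are illustrative only.

```python
# Shannon (differential) entropy of |psi|^2 on a uniform grid; broader packets
# give larger entropy, mirroring the entropy growth during thermalization.
import numpy as np

def shannon_entropy(psi, dx):
    """Differential Shannon entropy -∫ |ψ|² ln|ψ|² dx on a uniform grid."""
    p = np.abs(psi) ** 2
    p = p / (p.sum() * dx)                   # enforce normalization
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * dx

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
for sigma in (0.5, 1.0, 2.0):                # broader packet -> larger entropy
    psi = np.exp(-x**2 / (4 * sigma**2)) / (2 * np.pi * sigma**2) ** 0.25
    print(sigma, shannon_entropy(psi, dx))
```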
10 pages, 528 KiB  
Article
Environment-Assisted Modulation of Heat Flux in a Bio-Inspired System Based on Collision Model
by Ali Pedram, Barış Çakmak and Özgür E. Müstecaplıoğlu
Entropy 2022, 24(8), 1162; https://doi.org/10.3390/e24081162 - 20 Aug 2022
Cited by 2 | Viewed by 1709
Abstract
The high energy transfer efficiency of photosynthetic complexes has been a topic of research across many disciplines. Several attempts have been made in order to explain this energy transfer enhancement in terms of quantum mechanical resources such as energetic and vibration coherence and constructive effects of environmental noise. The developments in this line of research have inspired various biomimetic works aiming to use the underlying mechanisms in biological light harvesting complexes for the improvement of synthetic systems. In this article, we explore the effect of an auxiliary hierarchically structured environment interacting with a system on the steady-state heat transport across the system. The cold and hot baths are modeled by a series of identically prepared qubits in their respective thermal states, and we use a collision model to simulate the open quantum dynamics of the system. We investigate the effects of system-environment, inter-environment couplings and coherence of the structured environment on the steady state heat flux and find that such a coupling enhances the energy transfer. Our calculations reveal that there exists a non-monotonic and non-trivial relationship between the steady-state heat flux and the mentioned parameters. Full article
(This article belongs to the Special Issue Quantum Collision Models)
20 pages, 894 KiB  
Article
Quantum Statistical Complexity Measure as a Signaling of Correlation Transitions
by André T. Cesário, Diego L. B. Ferreira, Tiago Debarba, Fernando Iemini, Thiago O. Maciel and Reinaldo O. Vianna
Entropy 2022, 24(8), 1161; https://doi.org/10.3390/e24081161 - 19 Aug 2022
Viewed by 1437
Abstract
We introduce a quantum version of the statistical complexity measure, in the context of quantum information theory, and use it as a signaling function of quantum order–disorder transitions. We discuss the possibility for such transitions to characterize interesting physical phenomena, such as quantum phase transitions, or abrupt variations in correlation distributions. We apply our measure to two exactly solvable Hamiltonian models: the 1D quantum Ising model (in the single-particle reduced state) and the Heisenberg XXZ spin-1/2 chain (in the two-particle reduced state). We analyze its behavior across quantum phase transitions for finite system sizes, as well as in the thermodynamic limit by using the Bethe Ansatz technique. Full article
(This article belongs to the Special Issue Quantum Information Entropy in Physics)
16 pages, 2179 KiB  
Article
Introducing Robust Statistics in the Uncertainty Quantification of Nuclear Safeguards Measurements
by Andrea Cerasa
Entropy 2022, 24(8), 1160; https://doi.org/10.3390/e24081160 - 19 Aug 2022
Cited by 2 | Viewed by 1256
Abstract
The monitoring of nuclear safeguards measurements consists of verifying the coherence between the operator declarations and the corresponding inspector measurements on the same nuclear items. Significant deviations may be present in the data as a consequence of problems with the operator and/or inspector measurement systems. However, they could also be the result of data falsification. In both cases, quantitative analysis and statistical outcomes may be negatively affected by their presence unless robust statistical methods are used. This article aims to investigate the benefits deriving from the introduction of robust procedures in the nuclear safeguards context. In particular, we introduce a robust estimator for the estimation of the uncertainty components of the measurement error model. The analysis demonstrates the capacity of robust procedures to limit bias in both simulated and empirical contexts and to provide sounder statistical outcomes. For these reasons, the introduction of robust procedures may represent a step forward in the still ongoing development of reliable uncertainty quantification methods for error variance estimation. Full article
(This article belongs to the Special Issue Robust Methods in Complex Scenarios and Data Visualization)
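As a hedged illustration of why robustness matters here, the sketch below compares the sample standard deviation with the normalized MAD, a standard robust scale estimator (not the specific estimator proposed in the paper), on contaminated operator–inspector differences.

```python
# Toy illustration: a few gross outliers (instrument problems or falsification)
# inflate the sample standard deviation, while a robust scale estimate does not.
import numpy as np

def mad_scale(x):
    """Median absolute deviation scaled to be consistent for Gaussian data."""
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

rng = np.random.default_rng(2)
diffs = rng.normal(0.0, 1.0, size=200)      # operator-inspector differences
diffs[:5] += 15.0                           # a few gross outliers
print("sample std:", diffs.std(ddof=1))     # inflated by the outliers
print("MAD scale :", mad_scale(diffs))      # stays close to the true value 1
```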
21 pages, 8919 KiB  
Article
Linear Full Decoupling, Velocity Correction Method for Unsteady Thermally Coupled Incompressible Magneto-Hydrodynamic Equations
by Zhe Zhang, Haiyan Su and Xinlong Feng
Entropy 2022, 24(8), 1159; https://doi.org/10.3390/e24081159 - 19 Aug 2022
Cited by 2 | Viewed by 1207
Abstract
We propose and analyze an effective decoupling algorithm for the unsteady thermally coupled magneto-hydrodynamic equations in this paper. The proposed method is a first-order velocity correction projection algorithm in time marching, with both standard and rotational velocity correction variants, which can completely decouple all variables in the model. The schemes are linear and only require the solution of a series of linear partial differential equations with constant coefficients at each time step; moreover, the standard velocity correction algorithm produces a Neumann boundary condition for the pressure field, whereas the rotational velocity correction algorithm produces a consistent boundary condition that improves the accuracy of the pressure field, thus improving computational efficiency. We then establish the energy stability of the algorithms and give detailed proofs. The key idea in establishing the stability result for the rotational velocity correction algorithm is to transform the rotation term into a telescopic symmetric form by means of the Gauge–Uzawa formula. Finally, numerical experiments show that the rotational velocity correction projection algorithm is efficient for solving the thermally coupled magneto-hydrodynamic equations. Full article
19 pages, 5671 KiB  
Article
Smartphone Camera Identification from Low-Mid Frequency DCT Coefficients of Dark Images
by Adriana Berdich and Bogdan Groza
Entropy 2022, 24(8), 1158; https://doi.org/10.3390/e24081158 - 19 Aug 2022
Cited by 2 | Viewed by 1914
Abstract
Camera sensor identification can have numerous forensics and authentication applications. In this work, we follow an identification methodology for smartphone camera sensors using properties of the Dark Signal Nonuniformity (DSNU) in the collected images. This requires taking dark pictures, which the users can easily do by keeping the phone against their palm, and has already been proposed by various works. From such pictures, we extract low and mid frequency AC coefficients from the DCT (Discrete Cosine Transform) and classify the data with the help of machine learning techniques. Traditional algorithms such as KNN (K-Nearest Neighbor) give reasonable results in the classification, but we obtain the best results with a wide neural network, which, despite its simplicity, surpassed even a more complex network architecture that we tried. Our analysis showed that the blue channel provided the best separation, which is in contrast to previous works that have recommended the green channel for its higher encoding power. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
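A rough sketch of the described feature pipeline (2D DCT of the blue channel, low/mid-frequency AC coefficients, KNN classifier) follows; the image size, the kept coefficient block, k, and the synthetic dark-frame data are assumptions for illustration.

```python
# Blue-channel DCT features of dark frames classified with KNN. Each "phone" is
# simulated with a fixed dark-signal nonuniformity (DSNU) pattern plus noise.
import numpy as np
from scipy.fft import dctn
from sklearn.neighbors import KNeighborsClassifier

def dct_features(dark_rgb, size=16):
    """Low/mid-frequency DCT coefficients of the blue channel (DC removed)."""
    blue = dark_rgb[..., 2].astype(float)
    coeffs = dctn(blue, norm="ortho")[:size, :size].copy()
    coeffs[0, 0] = 0.0                       # drop the DC term, keep AC terms
    return coeffs.ravel()

rng = np.random.default_rng(3)
dsnu = {0: rng.normal(0, 2.0, size=(64, 64)), 1: rng.normal(0, 2.0, size=(64, 64))}

def dark_frame(phone):
    blue = 8.0 + dsnu[phone] + rng.normal(0, 1.0, size=(64, 64))
    return np.repeat(blue[..., None], 3, axis=-1)

images = [dark_frame(p) for p in (0, 1) for _ in range(20)]
labels = np.array([p for p in (0, 1) for _ in range(20)])
X = np.array([dct_features(img) for img in images])
clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], labels[::2])
print(clf.score(X[1::2], labels[1::2]))      # held-out accuracy on the toy data
```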
17 pages, 731 KiB  
Article
Financial Fraud Detection and Prediction in Listed Companies Using SMOTE and Machine Learning Algorithms
by Zhihong Zhao and Tongyuan Bai
Entropy 2022, 24(8), 1157; https://doi.org/10.3390/e24081157 - 19 Aug 2022
Cited by 4 | Viewed by 3341
Abstract
This paper proposes a new method that can identify and predict financial fraud among listed companies based on machine learning. We collected 18,060 transactions and 363 financial indicators, including 362 financial variables and a class variable. Then, we eliminated 9 indicators that were not related to financial fraud and processed the missing values. After that, we extracted 13 indicators from the remaining 353 indicators that have a large impact on financial fraud, based on multiple feature selection models and the frequency of occurrence of features across all algorithms. Then, we established five single classification models and three ensemble models for the prediction of financial fraud records of listed companies, namely LR, RF, XGBoost, SVM, and DT, together with ensemble models built with a voting classifier. Finally, we chose the optimal single model from the five machine learning algorithms and the best ensemble model among all hybrid models. Optimal model parameters were selected using the grid search method and by comparing several evaluation metrics. The results show the accuracy of the optimal single model to be in the range of 97% to 99%, and that of the ensemble models to be higher than 99%. This shows that the optimal ensemble model performs well and can efficiently predict and detect fraudulent activity of companies. Thus, a hybrid model that combines a logistic regression model with an XGBoost model is the best among all models. In the future, this approach will not only be able to predict fraudulent behavior in company management but also reduce the burden of doing so. Full article
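A compact sketch of the modeling recipe summarized above, SMOTE oversampling followed by a soft-voting ensemble of logistic regression and XGBoost, is given below; the synthetic data, split sizes, and hyperparameters are placeholders, not the paper's values.

```python
# Oversample the minority (fraud) class with SMOTE, then combine LR and XGBoost
# in a soft-voting ensemble; synthetic data stands in for the 13 indicators.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=13, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("xgb", XGBClassifier(n_estimators=200, random_state=0))],
    voting="soft")
ensemble.fit(X_bal, y_bal)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```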
23 pages, 470 KiB  
Article
Entropy-Based Discovery of Summary Causal Graphs in Time Series
by Charles K. Assaad, Emilie Devijver and Eric Gaussier
Entropy 2022, 24(8), 1156; https://doi.org/10.3390/e24081156 - 19 Aug 2022
Cited by 4 | Viewed by 2324
Abstract
This study addresses the problem of learning a summary causal graph on time series with potentially different sampling rates. To do so, we first propose a new causal temporal mutual information measure for time series. We then show how this measure relates to an entropy reduction principle that can be seen as a special case of the probability raising principle. We finally combine these two ingredients in PC-like and FCI-like algorithms to construct the summary causal graph. These algorithms are evaluated on several datasets, which shows both their efficacy and efficiency. Full article
(This article belongs to the Special Issue Applications of Entropy in Causality Analysis)
17 pages, 376 KiB  
Article
Lower Bounds on Multivariate Higher Order Derivatives of Differential Entropy
by Laigang Guo, Chun-Ming Yuan and Xiao-Shan Gao
Entropy 2022, 24(8), 1155; https://doi.org/10.3390/e24081155 - 19 Aug 2022
Cited by 1 | Viewed by 1308
Abstract
This paper studies the properties of the derivatives of the differential entropy $H(X_t)$ in Costa’s entropy power inequality. For real-valued random variables, Cheng and Geng conjectured that, for $m \ge 1$, $(-1)^{m+1}(d^m/dt^m)H(X_t) \ge 0$, while McKean conjectured a stronger statement, whereby $(-1)^{m+1}(d^m/dt^m)H(X_t) \ge (-1)^{m+1}(d^m/dt^m)H(X_{Gt})$. Here, we study the higher-dimensional analogues of these conjectures. In particular, we study the veracity of the following two statements: $C_1(m,n): (-1)^{m+1}(d^m/dt^m)H(X_t) \ge 0$, where $n$ denotes that $X_t$ is a random vector taking values in $\mathbb{R}^n$, and similarly, $C_2(m,n): (-1)^{m+1}(d^m/dt^m)H(X_t) \ge (-1)^{m+1}(d^m/dt^m)H(X_{Gt}) \ge 0$. In this paper, we prove some new multivariate cases: $C_1(3,i)$, $i=2,3,4$. Motivated by our results, we further propose a weaker version of McKean’s conjecture, $C_3(m,n): (-1)^{m+1}(d^m/dt^m)H(X_t) \ge (-1)^{m+1}\frac{1}{n}(d^m/dt^m)H(X_{Gt})$, which is implied by $C_2(m,n)$ and implies $C_1(m,n)$. We prove some multivariate cases of this conjecture under the log-concave condition: $C_3(3,i)$, $i=2,3,4$, and $C_3(4,2)$. A systematic procedure to prove $C_l(m,n)$ is proposed based on symbolic computation and semidefinite programming, and all the new results mentioned above are explicitly and strictly proved using this procedure. Full article
23 pages, 1090 KiB  
Article
Relative Entropy of Distance Distribution Based Similarity Measure of Nodes in Weighted Graph Data
by Shihu Liu, Yingjie Liu, Chunsheng Yang and Li Deng
Entropy 2022, 24(8), 1154; https://doi.org/10.3390/e24081154 - 19 Aug 2022
Cited by 4 | Viewed by 1754
Abstract
Many similarity measure algorithms for nodes in weighted graph data have been proposed in recent years by employing the degree of nodes. Despite these algorithms obtaining great results, there may still be some limitations. For instance, the strength of nodes is ignored. Aiming at this issue, a similarity measure of nodes based on the relative entropy of distance distributions is proposed in this paper. At first, the structural weights of nodes are given by integrating their degree and strength. Next, the distance between any two nodes is calculated with the help of their structural weights and the Euclidean distance formula to further obtain the distance distribution of each node. After that, the probability distribution of each node is constructed by normalizing its distance distribution. Thus, the relative entropy can be applied to measure the difference between the probability distributions of the top d important nodes and all nodes in the graph data. Finally, the similarity of two nodes can be measured in terms of this difference calculated by relative entropy. Experimental results demonstrate that the proposed algorithm, which considers the strength of nodes in the relative entropy, has great advantages in most-similar-node mining and link prediction. Full article
(This article belongs to the Collection Graphs and Networks from an Algorithmic Information Perspective)
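The sketch below shows a simplified variant of a distance-distribution/relative-entropy node similarity; the paper's structural weights combining degree and strength are replaced by plain edge weights, so this is only an approximation of the described method.

```python
# Each node is represented by its (weighted) shortest-path distance
# distribution; two nodes are compared with a symmetrized relative entropy.
import networkx as nx
import numpy as np
from scipy.stats import entropy

def distance_distribution(G, node, nodes):
    lengths = nx.single_source_dijkstra_path_length(G, node, weight="weight")
    d = np.array([lengths.get(v, 0.0) for v in nodes], dtype=float)
    d += 1e-12                                  # avoid zero-only distributions
    return d / d.sum()

def node_similarity(G, u, v):
    nodes = sorted(G.nodes())
    p, q = distance_distribution(G, u, nodes), distance_distribution(G, v, nodes)
    kl = 0.5 * (entropy(p, q) + entropy(q, p))  # symmetrized relative entropy
    return 1.0 / (1.0 + kl)                     # map to a (0, 1] similarity

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.5), (2, 3, 1.0)])
print(node_similarity(G, 0, 1), node_similarity(G, 0, 3))
```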
18 pages, 914 KiB  
Article
Modeling Dual-Drive Gantry Stages with Heavy-Load and Optimal Synchronous Controls with Force-Feed-Forward Decoupling
by Hanjun Xie and Qinruo Wang
Entropy 2022, 24(8), 1153; https://doi.org/10.3390/e24081153 - 19 Aug 2022
Cited by 1 | Viewed by 1730
Abstract
The application of precision dual-drive gantry stages in intelligent manufacturing is increasing. However, the loads of the dual drive motors can be severely inconsistent due to the movement of heavy loads on the horizontal crossbeam, resulting in synchronization errors when the dual-drive motors move in the same direction. This phenomenon affects the machining accuracy of the gantry stage and is a critical problem that should be solved immediately. A novel optimal synchronization control algorithm based on model decoupling is proposed to solve the problem. First, an accurate physical model is established to obtain the essential characteristics of the heavy-load dual-drive gantry stage, in which the rigid-flexible coupling dynamics are considered, including the crossbeam’s linear motion and its rotational motion with a non-constant moment of inertia. The established model is verified by using the actual system. By defining the virtual centroid of the crossbeam, the cross-coupling force between the dual-drive motors is quantified. Then, the virtual-centroid-based Gantry Synchronization Linear Quadratic Regulator (GSLQR) optimal control and force-Feed-Forward (FF) decoupling control algorithm is proposed. The results of the comparative experiments show the effectiveness and superiority of the proposed algorithm. Full article
(This article belongs to the Section Multidisciplinary Applications)
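As background, a minimal LQR gain computation of the kind underlying a GSLQR-style controller is sketched below; the double-integrator plant is a toy stand-in for the gantry's rigid-flexible coupled dynamics, not the paper's model.

```python
# Solve the continuous-time algebraic Riccati equation and form K = R^{-1} B^T P.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # position/velocity toy plant
B = np.array([[0.0],
              [1.0]])
Q = np.diag([100.0, 1.0])         # penalize synchronization (position) error
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # state-feedback gain, u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # stable poles
```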
13 pages, 479 KiB  
Article
Stochastic Model of Block Segmentation Based on Improper Quadtree and Optimal Code under the Bayes Criterion
by Yuta Nakahara and Toshiyasu Matsushima
Entropy 2022, 24(8), 1152; https://doi.org/10.3390/e24081152 - 19 Aug 2022
Viewed by 1200
Abstract
Most previous studies on lossless image compression have focused on improving preprocessing functions to reduce the redundancy of pixel values in real images. In contrast, we assume stochastic generative models directly on the pixel values and focus on achieving the theoretical limit of the assumed models. In this study, we propose a stochastic model based on improper quadtrees. We theoretically derive the optimal code for the proposed model under the Bayes criterion. In general, Bayes-optimal codes require an exponential order of calculation with respect to the data length. However, we propose an algorithm that takes a polynomial order of calculation without losing optimality by assuming a novel prior distribution. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
17 pages, 2500 KiB  
Article
Design of DNA Storage Coding with Enhanced Constraints
by Xiangjun Li, Shihua Zhou and Lewang Zou
Entropy 2022, 24(8), 1151; https://doi.org/10.3390/e24081151 - 19 Aug 2022
Cited by 4 | Viewed by 2072
Abstract
Traditional storage media have gradually become unable to meet the needs of data storage around the world, and one solution to this problem is DNA storage. However, errors easily arise in the subsequent sequencing and reading process of DNA storage coding. To reduce error rates, a method to enhance the robustness of the DNA storage coding set is proposed. Firstly, to reduce the likelihood of secondary structure in DNA coding sets, a repeat tandem sequence constraint is proposed, and an improved DTW distance constraint is proposed to address the issue that the traditional distance constraint cannot accurately evaluate non-specific hybridization between DNA sequences. Secondly, an algorithm called ROEAO, which combines random opposition-based learning and an eddy jump strategy with the Aquila Optimizer (AO), is proposed. Finally, the ROEAO algorithm is used to construct coding sets with traditional constraints and enhanced constraints, respectively. The quality of the two coding sets is evaluated by testing the number of hairpin structures and the melting temperature stability; the data show that the coding set constructed with ROEAO under enhanced constraints can obtain a larger lower bound while improving the coding quality. Full article
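A small, generic constraint checker in the spirit of DNA storage code design follows; the GC-content, homopolymer, and Hamming-distance thresholds are common illustrative choices rather than the paper's enhanced constraints (which also limit repeat tandem sequences and use a DTW-style distance).

```python
# Check GC content, maximum homopolymer run, and pairwise Hamming distance for
# a small candidate DNA coding set.
from itertools import combinations, groupby

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq):
    return max(len(list(run)) for _, run in groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def valid_code(seqs, gc=(0.4, 0.6), max_run=3, min_dist=4):
    if any(not gc[0] <= gc_content(s) <= gc[1] for s in seqs):
        return False
    if any(max_homopolymer(s) > max_run for s in seqs):
        return False
    return all(hamming(a, b) >= min_dist for a, b in combinations(seqs, 2))

code = ["ACGTACGTAC", "GTCAGTCAGT", "CATGCATGCA"]
print(valid_code(code))   # True for this toy set
```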
14 pages, 2952 KiB  
Article
Heat Transfer Analysis between R744 and HFOs inside Plate Heat Exchangers
by Anas F. A. Elbarghthi, Mohammad Yousef Hdaib and Václav Dvořák
Entropy 2022, 24(8), 1150; https://doi.org/10.3390/e24081150 - 18 Aug 2022
Cited by 2 | Viewed by 1619
Abstract
Plate heat exchangers (PHE) are used for a wide range of applications, so utilizing new and unique heat sources is of crucial importance. R744 has a low critical temperature, which makes the variation of its thermophysical properties smoother than that of other supercritical fluids. As a result, it can be used as a reliable hot stream for PHE, particularly at high temperatures. The local design approach was constructed via MATLAB integrated with the NIST database for real gases. Recently produced HFOs (R1234yf, R1234ze(E), R1234ze(Z), and R1233zd(E)) were utilized as cold fluids flowing through three phases: liquid phase, two-phase, and gas phase. A two-step study was performed to examine the following parameters: heat transfer coefficients, pressure drop, and effectiveness. In the first step, these parameters were analyzed with a variable number of plates to determine a suitable number for the next step. Then, the effects of hot stream pressure and cold stream superheating difference were investigated with variable cold channel mass fluxes. For the first step, the results showed insignificant differences in the investigated parameters for numbers of plates higher than 40. Meanwhile, the second step showed that increasing the hot stream pressure from 10 to 12 MPa enhanced the two-phase convection coefficients by 17%, 23%, 75%, and 50% for R1234yf, R1234ze(E), R1234ze(Z), and R1233zd(E), respectively. In contrast, increasing the cold stream superheating temperature difference from 5 K to 20 K reduced the two-phase convection coefficients by 14%, 16%, 53%, and 26% for R1234yf, R1234ze(E), R1234ze(Z), and R1233zd(E), respectively. Therefore, R744 is suitable for PHE as a driving heat source, particularly at higher R744 inlet pressure and a low cold stream superheating difference. Full article
(This article belongs to the Special Issue Entropy and Exergy Analysis in Ejector-Based Systems)
22 pages, 3794 KiB  
Article
Index Coding with Multiple Interpretations
by Valéria G. Pedrosa and Max H. M. Costa
Entropy 2022, 24(8), 1149; https://doi.org/10.3390/e24081149 - 18 Aug 2022
Viewed by 1313
Abstract
The index coding problem consists of a system with a server and multiple receivers with different side information and demand sets, connected by a noiseless broadcast channel. The server knows the side information available to the receivers. The objective is to design an encoding scheme that enables all receivers to decode their demanded messages with a minimum number of transmissions, referred to as an index code length. The problem of finding the minimum length index code that enables all receivers to correct a specific number of errors has also been studied. This work establishes a connection between index coding and error-correcting codes with multiple interpretations from the tree construction of nested cyclic codes. The notion of multiple interpretations using nested codes is as follows: different data packets are independently encoded, and then combined by addition and transmitted as a single codeword, minimizing the number of channel uses and offering error protection. The resulting packet can be decoded and interpreted in different ways, increasing the error correction capability, depending on the amount of side information available at each receiver. Motivating applications are network downlink transmissions, information retrieval from datacenters, cache management, and sensor networks. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
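A toy two-receiver example of the index coding setting described above is given below; a single XOR broadcast serves both receivers thanks to their side information. The packets and receivers are made up for illustration.

```python
# Two data packets, two receivers, each missing the packet the other holds:
# one XOR transmission replaces two separate transmissions.
x1, x2 = 0b10110101, 0b01101100      # two data packets (one byte each)

broadcast = x1 ^ x2                  # one channel use instead of two

# Receiver 1 has x2 as side information and wants x1.
assert broadcast ^ x2 == x1
# Receiver 2 has x1 as side information and wants x2.
assert broadcast ^ x1 == x2
print("both receivers decoded their demands from a single transmission")
```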
22 pages, 5950 KiB  
Article
On the Spatial Distribution of Temporal Complexity in Resting State and Task Functional MRI
by Amir Omidvarnia, Raphaël Liégeois, Enrico Amico, Maria Giulia Preti, Andrew Zalesky and Dimitri Van De Ville
Entropy 2022, 24(8), 1148; https://doi.org/10.3390/e24081148 - 18 Aug 2022
Cited by 2 | Viewed by 2303
Abstract
Measuring the temporal complexity of functional MRI (fMRI) time series is one approach to assess how brain activity changes over time. In fact, hemodynamic response of the brain is known to exhibit critical behaviour at the edge between order and disorder. In this study, we aimed to revisit the spatial distribution of temporal complexity in resting state and task fMRI of 100 unrelated subjects from the Human Connectome Project (HCP). First, we compared two common choices of complexity measures, i.e., Hurst exponent and multiscale entropy, and observed a high spatial similarity between them. Second, we considered four tasks in the HCP dataset (Language, Motor, Social, and Working Memory) and found high task-specific complexity, even when the task design was regressed out. For the significance thresholding of brain complexity maps, we used a statistical framework based on graph signal processing that incorporates the structural connectome to develop the null distributions of fMRI complexity. The results suggest that the frontoparietal, dorsal attention, visual, and default mode networks represent stronger complex behaviour than the rest of the brain, irrespective of the task engagement. In sum, the findings support the hypothesis of fMRI temporal complexity as a marker of cognition. Full article
(This article belongs to the Special Issue Spatiotemporal Complexity Analysis of Brain Function)
15 pages, 14251 KiB  
Article
An Internet-Oriented Multilayer Network Model Characterization and Robustness Analysis Method
by Yongheng Zhang, Yuliang Lu, Guozheng Yang, Dongdong Hou and Zhihao Luo
Entropy 2022, 24(8), 1147; https://doi.org/10.3390/e24081147 - 18 Aug 2022
Cited by 3 | Viewed by 1345
Abstract
The Internet creates multidimensional and complex relationships in terms of the composition, application and mapping of social users. Most of the previous related research has focused on the single-layer topology of physical device networks but ignored the study of service access relationships and the social structure of users on the Internet. Here, we propose a composite framework to understand how the interaction between the physical devices network, business application network, and user role network affects the robustness of the entire Internet. In this paper, a multilayer network consisting of a physical device layer, business application layer and user role layer is constructed by collecting experimental network data. We characterize the disturbance process of the entire multilayer network when a physical entity device fails by designing nodal disturbance to investigate the interactions that exist between the different network layers. Meanwhile, we analyze the characteristics of the Internet-oriented multilayer network structure and propose a heuristic multilayer network topology generation algorithm based on the initial routing topology and networking pattern, which simulates the evolution process of multilayer network topology. To further analyze the robustness of this multilayer network model, we combined a total of six target node ranking indicators including random strategy, degree centrality, betweenness centrality, closeness centrality, clustering coefficient and network constraint coefficient, performed node deletion simulations in the experimental network, and analyzed the impact of component types and interactions on the robustness of the overall multilayer network based on the maximum component change in the network. These results provide new insights into the operational processes of the Internet from a multi-domain data fusion perspective, reflecting that the coupling relationships that exist between the different interaction layers are closely linked to the robustness of multilayer networks. Full article
(This article belongs to the Special Issue Dynamics of Complex Networks)
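The sketch below reproduces a single-layer version of the targeted node-removal experiment described above: delete nodes in decreasing order of a chosen centrality indicator and track the relative size of the largest connected component. A random Barabási–Albert graph stands in for the measured multilayer Internet topology.

```python
# Targeted attack simulation: remove top-ranked nodes and track the giant
# component's relative size.
import networkx as nx

def robustness_curve(G, ranking):
    G = G.copy()
    n0, sizes = G.number_of_nodes(), []
    for node in ranking:
        G.remove_node(node)
        if G.number_of_nodes() == 0:
            sizes.append(0.0)
            continue
        giant = max(nx.connected_components(G), key=len)
        sizes.append(len(giant) / n0)
    return sizes

G = nx.barabasi_albert_graph(200, 2, seed=0)
dc = nx.degree_centrality(G)
bc = nx.betweenness_centrality(G)
by_degree = sorted(G, key=dc.get, reverse=True)
by_betweenness = sorted(G, key=bc.get, reverse=True)
print("degree attack     :", robustness_curve(G, by_degree[:20])[-1])
print("betweenness attack:", robustness_curve(G, by_betweenness[:20])[-1])
```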
12 pages, 322 KiB  
Article
Quantumness and Dequantumness Power of Quantum Channels
by Hongting Song and Nan Li
Entropy 2022, 24(8), 1146; https://doi.org/10.3390/e24081146 - 18 Aug 2022
Cited by 1 | Viewed by 1207
Abstract
Focusing on the dynamics of quantumness in ensembles, we propose a framework to qualitatively and quantitatively characterize quantum channels from the perspective of the amount of quantumness in ensembles that a quantum channel can induce or reduce. Along this line, the quantumness power and dequantumness power are introduced. In particular, once a quantum dynamics described by time-varying quantum channels reduces the quantumness for any input ensembles all the time, we call it a completely dequantumness channel, whose relationship with Markovianity is analyzed through several examples. Full article
(This article belongs to the Special Issue Quantum Information and Computation)
17 pages, 679 KiB  
Article
Efficient Privacy-Preserving K-Means Clustering from Secret-Sharing-Based Secure Three-Party Computation
by Weiming Wei, Chunming Tang and Yucheng Chen
Entropy 2022, 24(8), 1145; https://doi.org/10.3390/e24081145 - 18 Aug 2022
Cited by 5 | Viewed by 1984
Abstract
Privacy-preserving machine learning has become an important area of study due to privacy policies. However, the efficiency gap between the plain-text algorithm and its privacy-preserving version still exists. In this paper, we focus on designing a novel secret-sharing-based K-means clustering algorithm. In particular, we present an efficient privacy-preserving K-means clustering algorithm based on replicated secret sharing with an honest majority in the semi-honest model. More concretely, the clustering task is outsourced to three semi-honest computing servers. Theoretically, the proposed privacy-preserving scheme can be proven to provide full data privacy. Furthermore, the experimental results demonstrate that our proposed privacy-preserving version reaches the same accuracy as the plain-text one. Compared to the existing privacy-preserving scheme, our proposed protocol can achieve about 16.5×–25.2× faster computation and 63.8×–68.0× lower communication. Consequently, the proposed privacy-preserving scheme is suitable for secret-sharing-based secure outsourced computation. Full article
(This article belongs to the Topic Machine and Deep Learning)
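A minimal sketch of 2-out-of-3 replicated secret sharing, the data representation underlying the scheme above, follows; the modulus and values are illustrative, and no secure computation on the shares is shown.

```python
# A value is split into three additive shares modulo a prime; each of the three
# servers stores two of them, so any two servers can reconstruct while a single
# server learns nothing.
import secrets

P = 2**61 - 1                       # a Mersenne prime as the share modulus

def share(x):
    x1, x2 = secrets.randbelow(P), secrets.randbelow(P)
    x3 = (x - x1 - x2) % P
    # server i holds the pair (x_i, x_{i+1})
    return [(x1, x2), (x2, x3), (x3, x1)]

def reconstruct(pair_1, pair_2):
    """Reconstruct from two consecutive servers, which jointly hold x1, x2, x3."""
    x1, x2 = pair_1
    _, x3 = pair_2
    return (x1 + x2 + x3) % P

secret = 123456789
servers = share(secret)
print(reconstruct(servers[0], servers[1]) == secret)
```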
22 pages, 357 KiB  
Article
Effective Gibbs State for Averaged Observables
by Alexander Evgen’evich Teretenkov
Entropy 2022, 24(8), 1144; https://doi.org/10.3390/e24081144 - 18 Aug 2022
Cited by 6 | Viewed by 1154
Abstract
We introduce the effective Gibbs state for observables averaged with respect to fast free dynamics. We prove that the information loss due to the restriction of our measurement capabilities to such averaged observables is non-negative, and we discuss its thermodynamic role. We show that there are many similarities between this effective Hamiltonian and the mean force Hamiltonian, which suggests a generalization of quantum thermodynamics that includes both cases. We also perturbatively calculate the effective Hamiltonian and the corresponding corrections to the thermodynamic quantities and illustrate them with several examples. Full article
27 pages, 9519 KiB  
Article
Classical, Quantum and Event-by-Event Simulation of a Stern–Gerlach Experiment with Neutrons
by Hans De Raedt, Fengping Jin and Kristel Michielsen
Entropy 2022, 24(8), 1143; https://doi.org/10.3390/e24081143 - 17 Aug 2022
Cited by 2 | Viewed by 1727
Abstract
We present a comprehensive simulation study of the Newtonian and quantum model of a Stern–Gerlach experiment with cold neutrons. By solving Newton’s equation of motion and the time-dependent Pauli equation for a wide range of uniform magnetic field strengths, we scrutinize the role of the latter for drawing the conclusion that the magnetic moment of the neutron is quantized. We then demonstrate that a marginal modification of the Newtonian model suffices to construct, without invoking any concept of quantum theory, an event-based subquantum model that eliminates the shortcomings of the classical model and yields results that are in qualitative agreement with experiment and quantum theory. In this event-by-event model, the intrinsic angular momentum can take any value on the sphere, yet, for a sufficiently strong uniform magnetic field, the particle beam splits in two, exactly as in experiment and in concert with quantum theory. Full article
(This article belongs to the Special Issue Completeness of Quantum Theory: Still an Open Question)
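The sketch below illustrates only the Newtonian half of the comparison described above: classical dipoles with isotropic orientations produce a continuous deflection spread rather than two spots. All numerical values are arbitrary illustrative choices in SI units.

```python
# Classical magnetic dipoles flying through a field gradient: the deflection is
# proportional to the z-component of the magnetic moment.
import numpy as np

rng = np.random.default_rng(4)
m_n = 1.675e-27            # neutron mass [kg]
mu = 9.66e-27              # neutron magnetic moment magnitude [J/T]
dBdz = 300.0               # field gradient [T/m]
v, L = 200.0, 0.5          # beam speed [m/s] and magnet length [m]

cos_theta = rng.uniform(-1.0, 1.0, size=10000)   # isotropic orientations
mu_z = mu * cos_theta
t = L / v
deflection = 0.5 * (mu_z * dBdz / m_n) * t**2    # z displacement [m]
print("classical model: continuous spread, std =", deflection.std(), "m")
print("quantum/experiment: two spots near ±", 0.5 * (mu * dBdz / m_n) * t**2, "m")
```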
20 pages, 362 KiB  
Article
Spectra of Self-Similar Measures
by Yong-Shen Cao, Qi-Rong Deng and Ming-Tian Li
Entropy 2022, 24(8), 1142; https://doi.org/10.3390/e24081142 - 17 Aug 2022
Viewed by 1329
Abstract
This paper is devoted to the characterization of spectrum candidates with a new tree structure to be the spectra of a spectral self-similar measure $\mu_{N,D}$ generated by the finite integer digit set $D$ and the compression ratio $N^{-1}$. The tree structure is introduced with the language of symbolic space and widens the field of spectrum candidates. The spectrum candidate considered by Łaba and Wang is a set with a special tree structure. After showing a new criterion for the spectrum candidate with a tree structure to be a spectrum of $\mu_{N,D}$, three sufficient and necessary conditions for the spectrum candidate with a tree structure to be a spectrum of $\mu_{N,D}$ were obtained. This result extends the conclusion of Łaba and Wang. As an application, an example of a spectrum candidate $\Lambda(N,B)$ with the tree structure associated with a self-similar measure is given. By our results, we obtain that $\Lambda(N,B)$ is a spectrum of the self-similar measure. However, neither the method of Łaba and Wang nor that of Strichartz is applicable to the set $\Lambda(N,B)$. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications III)