 
 
Entropy, Volume 25, Issue 3 (March 2023) – 156 articles

Cover Story: Unraveling the mysteries of neural networks, both natural and artificial, is essential for understanding human cognition and improving AI systems. Current information-theoretic methods provide insights into these networks but fail to conclusively identify functional modules. Here, we introduce a novel information-theoretic measure, relay information (IR), capable of pinpointing functional modules in artificial neural networks. Aided by a greedy search algorithm, IR significantly reduces the number of tests needed to identify the most informative neuron sets. Extensive examples showcase IR's ability to recover relevant functional nodes and demonstrate how perturbations affect predicted functionality, offering a breakthrough in understanding the inner workings of neural networks.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Articles are published in both HTML and PDF forms; PDF is the official format. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
23 pages, 7475 KiB  
Article
An Improved Deep Reinforcement Learning Method for Dispatch Optimization Strategy of Modern Power Systems
by Suwei Zhai, Wenyun Li, Zhenyu Qiu, Xinyi Zhang and Shixi Hou
Entropy 2023, 25(3), 546; https://doi.org/10.3390/e25030546 - 22 Mar 2023
Cited by 1 | Viewed by 1590
Abstract
As a promising branch of machine learning, reinforcement learning has gained much attention. This paper studies a wind-storage cooperative decision-making strategy based on dueling double deep Q-network (D3QN). Firstly, a new wind-storage cooperative model is proposed. Besides wind farms, energy storage systems, and external power grids, demand response loads are also considered, including residential price response loads and thermostatically controlled loads (TCLs). Then, a novel wind-storage cooperative decision-making mechanism is proposed, which combines the direct control of TCLs with the indirect control of residential price response loads. In addition, a kind of deep reinforcement learning algorithm called D3QN is utilized to solve the wind-storage cooperative decision-making problem. Finally, the numerical results verify the effectiveness of D3QN for optimizing the decision-making strategy of a wind-storage cooperation system. Full article
(This article belongs to the Topic Artificial Intelligence and Sustainable Energy Systems)
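For readers unfamiliar with D3QN, its two ingredients — the dueling architecture and the double-Q target — can be sketched in a few lines of NumPy. The arrays below stand in for network outputs; all names and numbers are illustrative, not the authors' code.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return value + advantages - advantages.mean()

def d3qn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the online net selects the action,
    the target net evaluates it (reduces overestimation bias)."""
    if done:
        return reward
    best_action = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[best_action]

# Toy numbers standing in for the two networks' outputs at the next state.
q_online_next = dueling_q(1.0, np.array([0.2, 0.5, -0.1]))
q_target_next = dueling_q(0.9, np.array([0.1, 0.4, 0.0]))
y = d3qn_target(reward=2.0, gamma=0.99, q_online_next=q_online_next,
                q_target_next=q_target_next, done=False)
```

The dueling head separates state value from action advantages, while the double-Q target decouples action selection from evaluation — the combination that gives D3QN its name.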

21 pages, 7374 KiB  
Article
Analysis of the Energy Loss and Performance Characteristics in a Centrifugal Pump Based on Sinusoidal Tubercle Volute Tongue
by Peifeng Lin, Chunhe Wang, Pengfei Song and Xiaojun Li
Entropy 2023, 25(3), 545; https://doi.org/10.3390/e25030545 - 22 Mar 2023
Cited by 2 | Viewed by 1160
Abstract
The energy loss inside a centrifugal pump has a significant effect on its performance characteristics. Based on the structural characteristics of the humpback pectoral fin, a new tongue was designed to improve the performance of the centrifugal pump. The influence of three sinusoidal tubercle volute tongues (STVT) and one original volute tongue (OVT) on energy dissipation using the enstrophy analysis method was investigated. To accomplish this, the pressure fluctuations and performances of four centrifugal pumps were analyzed. The results indicate that enstrophy is primarily distributed at the impeller outlet and near the tongue. The total enstrophy of the profiles of STVT was smaller than that of the profiles of OVT. This difference was more obvious near the tongue. The reductions in the total enstrophy of the pumps were 8% (STVT−1), 8.2% (STVT−2), and 9% (STVT−3). The pressure fluctuations of the STVT profiles also decreased to different degrees. The average pressure fluctuations at the monitoring points decreased by 20.6% (STVT−1), 21.7% (STVT−2), and 23.3% (STVT−3). The performances of the bionic retrofit pumps increased by 1.5% (STVT−1), 2% (STVT−2), and 2.45% (STVT−3) under the design flow rate. This study guides the structural optimization of pumps. Full article

20 pages, 4334 KiB  
Article
Energy Dispatch for CCHP System in Summer Based on Deep Reinforcement Learning
by Wenzhong Gao and Yifan Lin
Entropy 2023, 25(3), 544; https://doi.org/10.3390/e25030544 - 21 Mar 2023
Cited by 2 | Viewed by 1243
Abstract
The combined cooling, heating, and power (CCHP) system is an effective solution to energy and environmental problems. However, due to demand-side load uncertainty, load-prediction error, environmental change, and demand charges, the energy dispatch optimization of the CCHP system is a tough challenge. In view of this, this paper proposes a dispatch method based on the deep reinforcement learning (DRL) algorithm DoubleDQN to generate an optimal dispatch strategy for the CCHP system in the summer. By integrating DRL, this method does not require any prediction information and can adapt to load uncertainty. The simulation results show that, compared with strategies based on benchmark policies and DQN, the proposed dispatch strategy not only preserves thermal comfort well, but also reduces the total intra-month cost by 0.13–31.32%, of which the demand charge is reduced by 2.19–46.57%. In addition, this method is shown to have the potential to be applied in the real world by testing under extended scenarios. Full article
(This article belongs to the Topic Machine and Deep Learning)

17 pages, 11124 KiB  
Article
Blind Deconvolution Based on Correlation Spectral Negentropy for Bearing Fault
by Tian Tian, Gui-Ji Tang, Yin-Chu Tian and Xiao-Long Wang
Entropy 2023, 25(3), 543; https://doi.org/10.3390/e25030543 - 21 Mar 2023
Cited by 3 | Viewed by 1041
Abstract
Blind deconvolution is a method that can effectively improve the fault characteristics of rolling bearings. However, the existing blind deconvolution methods have shortcomings in practical applications. The minimum entropy deconvolution (MED) and the optimal minimum entropy deconvolution adjusted (OMEDA) are susceptible to extreme values. Furthermore, maximum correlated kurtosis deconvolution (MCKD) and multipoint optimal minimum entropy deconvolution adjusted (MOMEDA) require prior knowledge of the faults. On the basis of the periodicity and impact of bearing fault signals, a new deconvolution algorithm based on maximum correlation spectral negentropy (CSNE), which adopts the particle swarm optimization (PSO) algorithm to solve the filter coefficients, is proposed in this paper. Verified by the simulated vibration model signal and the experimental simulation signal, the PSO–CSNE algorithm proposed in this paper overcomes the influence of harmonic signals and random pulse signals more effectively than other blind deconvolution algorithms when prior knowledge of the fault is unavailable. Full article
(This article belongs to the Section Signal and Data Analysis)
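The core idea — searching for FIR filter coefficients that maximize an impulsiveness criterion with PSO — can be sketched as below. For brevity, plain kurtosis is used as the fitness instead of the paper's correlation spectral negentropy, and the synthetic signal and all swarm parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kurtosis(x):
    """Normalized fourth moment; large for impulsive (fault-like) signals."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

def fitness(coeffs, signal):
    """Impulsiveness of the signal after filtering with the candidate FIR filter."""
    return kurtosis(np.convolve(signal, coeffs, mode="valid"))

# Synthetic bearing-like signal: periodic impacts buried in noise.
n = 2048
signal = 0.3 * rng.standard_normal(n)
signal[::128] += 4.0  # repetitive fault impulses

# Minimal PSO over 16 filter coefficients.
L, swarm = 16, 20
pos = rng.standard_normal((swarm, L))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p, signal) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()
for _ in range(30):
    r1, r2 = rng.random((2, swarm, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([fitness(p, signal) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

raw = kurtosis(signal)
deconvolved = kurtosis(np.convolve(signal, gbest, mode="valid"))
```

Replacing the kurtosis fitness with a negentropy-style criterion is what, per the abstract, makes the method robust to harmonics and random pulses.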

39 pages, 8736 KiB  
Article
NaRnEA: An Information Theoretic Framework for Gene Set Analysis
by Aaron T. Griffin, Lukas J. Vlahos, Codruta Chiuzan and Andrea Califano
Entropy 2023, 25(3), 542; https://doi.org/10.3390/e25030542 - 21 Mar 2023
Cited by 1 | Viewed by 3015
Abstract
Gene sets are being increasingly leveraged to make high-level biological inferences from transcriptomic data; however, existing gene set analysis methods rely on overly conservative, heuristic approaches for quantifying the statistical significance of gene set enrichment. We created Nonparametric analytical-Rank-based Enrichment Analysis (NaRnEA) to facilitate accurate and robust gene set analysis with an optimal null model derived using the information theoretic Principle of Maximum Entropy. By measuring the differential activity of ~2500 transcriptional regulatory proteins based on the differential expression of each protein’s transcriptional targets between primary tumors and normal tissue samples in three cohorts from The Cancer Genome Atlas (TCGA), we demonstrate that NaRnEA critically improves on two widely used gene set analysis methods: Gene Set Enrichment Analysis (GSEA) and analytical-Rank-based Enrichment Analysis (aREA). We show that the NaRnEA-inferred differential protein activity is significantly correlated with differential protein abundance inferred from independent, phenotype-matched mass spectrometry data in the Clinical Proteomic Tumor Analysis Consortium (CPTAC), confirming the statistical and biological accuracy of our approach. Additionally, our analysis crucially demonstrates that the sample-shuffling empirical null models leveraged by GSEA and aREA for gene set analysis are overly conservative, a shortcoming that is avoided by the newly developed Maximum Entropy analytical null model employed by NaRnEA. Full article
(This article belongs to the Special Issue Information Theory in Computational Biology)

19 pages, 678 KiB  
Article
Dynamic Asset Allocation with Expected Shortfall via Quantum Annealing
by Hanjing Xu, Samudra Dasgupta, Alex Pothen and Arnab Banerjee
Entropy 2023, 25(3), 541; https://doi.org/10.3390/e25030541 - 21 Mar 2023
Cited by 1 | Viewed by 1716
Abstract
Recent advances in quantum hardware offer new approaches to solve various optimization problems that can be computationally expensive when classical algorithms are employed. We propose a hybrid quantum-classical algorithm to solve a dynamic asset allocation problem where a target return and a target risk metric (expected shortfall) are specified. We propose an iterative algorithm that treats the target return as a constraint in a Markowitz portfolio optimization model, and dynamically adjusts the target return to satisfy the targeted expected shortfall. The Markowitz optimization is formulated as a Quadratic Unconstrained Binary Optimization (QUBO) problem. The use of the expected shortfall risk metric enables the modeling of extreme market events. We compare the results from D-Wave’s 2000Q and Advantage quantum annealers using real-world financial data. Both quantum annealers are able to generate portfolios with more than 80% of the return of the classical optimal solutions, while satisfying the expected shortfall. We observe that experiments on assets with higher correlations tend to perform better, which may help to design practical quantum applications in the near term. Full article
(This article belongs to the Special Issue Advances in Quantum Computing)
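The Markowitz-to-QUBO encoding described above can be illustrated on a toy binary asset-selection problem; brute-force enumeration stands in for the annealer, and the returns, covariance, and penalty weight are made-up numbers.

```python
import itertools
import numpy as np

# Toy inputs (hypothetical numbers): expected returns and covariance of 4 assets.
mu = np.array([0.08, 0.12, 0.10, 0.07])
sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                  [0.02, 0.12, 0.03, 0.01],
                  [0.01, 0.03, 0.09, 0.02],
                  [0.00, 0.01, 0.02, 0.08]])
target_return = 0.18
penalty = 50.0  # weight of the soft return constraint

# QUBO objective: x^T Sigma x + penalty * (mu.x - target)^2 over binary x.
# Expanding the square gives a quadratic part and a linear part; the
# constant penalty * target^2 does not affect the argmin and is dropped.
Q = sigma + penalty * np.outer(mu, mu)
linear = -2.0 * penalty * target_return * mu

def energy(x):
    x = np.asarray(x)
    return float(x @ Q @ x + linear @ x)

# Brute force stands in for the annealer on this 4-variable toy problem.
best = min(itertools.product([0, 1], repeat=4), key=energy)
```

The return target enters as a quadratic penalty, so the whole objective stays a quadratic form in binary variables — exactly the shape a quantum annealer accepts; the paper's iterative outer loop then adjusts the target until the expected-shortfall constraint is met.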

22 pages, 488 KiB  
Article
Quantum Computing Approaches for Vector Quantization—Current Perspectives and Developments
by Alexander Engelsberger and Thomas Villmann
Entropy 2023, 25(3), 540; https://doi.org/10.3390/e25030540 - 21 Mar 2023
Cited by 1 | Viewed by 1605
Abstract
In the field of machine learning, vector quantization is a category of low-complexity approaches that are nonetheless powerful for data representation and clustering or classification tasks. Vector quantization is based on the idea of representing a data or a class distribution using a small set of prototypes, and hence, it belongs to interpretable models in machine learning. Further, the low complexity of vector quantizers makes them interesting for the application of quantum concepts for their implementation. This is especially true for current and upcoming generations of quantum devices, which only allow the execution of simple and restricted algorithms. Motivated by different adaptation and optimization paradigms for vector quantizers, we provide an overview of respective existing quantum algorithms and routines to realize vector quantization concepts, maybe only partially, on quantum devices. Thus, the reader can infer the current state-of-the-art when considering quantum computing approaches for vector quantization. Full article
(This article belongs to the Special Issue Quantum Machine Learning 2022)
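As a point of reference for the quantum routines surveyed, the classical prototype-based vector quantizer they aim to realize can be written compactly — a plain LBG/k-means sketch on synthetic data (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_codebook(data, k, iters=25):
    """Plain LBG/k-means vector quantizer: alternate nearest-prototype
    assignment and centroid (prototype) update."""
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        # Squared distances of every sample to every prototype: shape (n, k).
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = data[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, labels

# Two well-separated 2-D clusters (synthetic).
a = rng.normal([0, 0], 0.1, size=(100, 2))
b = rng.normal([5, 5], 0.1, size=(100, 2))
data = np.vstack([a, b])
codebook, labels = train_codebook(data, k=2)
```

The small codebook is what makes vector quantizers both interpretable and, as the abstract notes, candidates for the restricted algorithms current quantum devices can execute.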

14 pages, 1668 KiB  
Article
More Stages Decrease Dissipation in Irreversible Step Processes
by Peter Salamon, Bjarne Andresen, James Nulton, Ty N. F. Roach and Forest Rohwer
Entropy 2023, 25(3), 539; https://doi.org/10.3390/e25030539 - 21 Mar 2023
Cited by 1 | Viewed by 898
Abstract
The dissipation in an irreversible step process is reduced when the number of steps is increased in any refinement of the steps in the process. This is a consequence of the ladder theorem, which states that, for any irreversible process proceeding by a sequence of relaxations, dividing any relaxation step into two will result in a new sequence that is more efficient than the original one. This results in a more-steps-the-better rule, even when the new sequence of steps is not reoptimized. This superiority of many steps is well established empirically in, e.g., insulation and separation applications. In particular, the fact that the division of any step into two steps improves the overall efficiency has interesting implications for biological evolution and emphasizes thermodynamic length as a central measure for dissipation. Full article
(This article belongs to the Section Thermodynamics)
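The more-steps-the-better rule is easy to verify numerically in a textbook special case: equilibrating a body of constant heat capacity with a sequence of n equally spaced heat baths. Each doubling of n refines the previous step sequence, and the total entropy production falls (this illustrates, but does not prove, the general ladder theorem):

```python
import math

def stepwise_dissipation(t0, tf, n, c=1.0):
    """Total entropy production when a body of heat capacity c is taken
    from temperature t0 to tf by equilibrating with n successive baths
    at equally spaced temperatures."""
    temps = [t0 + (tf - t0) * i / n for i in range(n + 1)]
    s = 0.0
    for ta, tb in zip(temps, temps[1:]):
        s += c * math.log(tb / ta)   # entropy gained by the body
        s -= c * (tb - ta) / tb      # entropy given up by the bath at tb
    return s

# Dissipation shrinks monotonically under each refinement of the steps.
diss = [stepwise_dissipation(300.0, 600.0, n) for n in (1, 2, 4, 8, 16)]
```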

13 pages, 1670 KiB  
Article
Quantization of Integrable and Chaotic Three-Particle Fermi–Pasta–Ulam–Tsingou Models
by Alio Issoufou Arzika, Andrea Solfanelli, Harald Schmid and Stefano Ruffo
Entropy 2023, 25(3), 538; https://doi.org/10.3390/e25030538 - 21 Mar 2023
Viewed by 1227
Abstract
We study the transition from integrability to chaos for the three-particle Fermi–Pasta–Ulam–Tsingou (FPUT) model. We can show that both the quartic β-FPUT model (α=0) and the cubic one (β=0) are integrable by introducing an appropriate Fourier representation to express the nonlinear terms of the Hamiltonian. For generic values of α and β, the model is non-integrable and displays a mixed phase space with both chaotic and regular trajectories. In the classical case, chaos is diagnosed by the investigation of Poincaré sections. In the quantum case, the level spacing statistics in the energy basis belongs to the Gaussian orthogonal ensemble in the chaotic regime, and crosses over to Poissonian behavior in the quasi-integrable low-energy limit. In the chaotic part of the spectrum, two generic observables obey the eigenstate thermalization hypothesis. Full article
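The chaos diagnostic used here — level-spacing statistics crossing from Poissonian to Gaussian orthogonal ensemble (GOE) behavior — is often summarized by the mean ratio of consecutive spacings, which is ≈ 0.386 for Poisson and ≈ 0.531 for GOE. A quick numerical sketch with a random-matrix surrogate (not the FPUT Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_spacing_ratio(levels):
    """Mean of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over spacings s_n."""
    s = np.diff(np.sort(levels))
    return float(np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])))

# GOE surrogate: symmetrized real Gaussian matrix; keep the central half
# of the spectrum, where the level density varies slowly.
n = 1000
m = rng.standard_normal((n, n))
levels_goe = np.linalg.eigvalsh((m + m.T) / 2)[n // 4 : 3 * n // 4]

# Integrable (Poissonian) reference: independent uniform levels.
levels_poisson = rng.uniform(0.0, 1.0, n)

r_goe = mean_spacing_ratio(levels_goe)          # close to the GOE value ~0.531
r_poisson = mean_spacing_ratio(levels_poisson)  # close to the Poisson value ~0.386
```

The ratio statistic needs no spectral unfolding, which is why it is a convenient first check for the GOE-to-Poisson crossover the abstract describes.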

16 pages, 389 KiB  
Article
Quantum Coding via Quasi-Cyclic Block Matrix
by Yuan Li and Jin-Yang Li
Entropy 2023, 25(3), 537; https://doi.org/10.3390/e25030537 - 21 Mar 2023
Viewed by 1107
Abstract
An effective construction method for long-length quantum codes has important applications in fields based on large-scale data. With the rapid development of quantum computing, how to construct this class of quantum codes has become one of the key research fields in quantum information theory. Motivated by the block jacket matrix and its circulant permutation, we propose a construction method for quantum quasi-cyclic (QC) codes with two classical codes. This simplifies the coding process for long-length quantum error-correction codes (QECC) using number decomposition. The obtained code length N can achieve O(n²) if an appropriate prime number n is taken. Furthermore, with a suitable parameter in the construction method, the obtained codes have four cycles in their generator matrices and show good performance for low-density codes. Full article
(This article belongs to the Special Issue Quantum Communication and Quantum Key Distribution)
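The circulant permutation matrices underlying quasi-cyclic constructions compose by adding their shifts modulo n, which is why QC block matrices can be described by their shift exponents alone. A minimal sketch:

```python
import numpy as np

def circulant_permutation(n, shift):
    """n x n circulant permutation matrix: the identity with its columns
    cyclically shifted by `shift` positions."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

# Shifts add modulo n under matrix multiplication: P^a @ P^b == P^(a+b mod n).
n = 7
p2 = circulant_permutation(n, 2)
p3 = circulant_permutation(n, 3)
prod = p2 @ p3
```

Because each block is determined by a single integer, a large quasi-cyclic parity-check or generator matrix can be stored and manipulated as a small array of exponents.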

30 pages, 8168 KiB  
Article
Homogeneity Test of the First-Order Agreement Coefficient in a Stratified Design
by Mingrui Xu, Zhiming Li, Keyi Mou and Kalakani Mohammad Shuaib
Entropy 2023, 25(3), 536; https://doi.org/10.3390/e25030536 - 20 Mar 2023
Viewed by 1017
Abstract
Gwet’s first-order agreement coefficient (AC1) is widely used to assess the agreement between raters. This paper proposes several asymptotic statistics for a homogeneity test of stratified AC1 in large sample sizes. These statistics may have unsatisfactory performance, especially for small samples and a high value of AC1. Furthermore, we propose three exact methods for small samples. A likelihood ratio statistic is recommended in large sample sizes based on the numerical results. The exact E approaches under likelihood ratio and score statistics are more robust in small sample scenarios. Moreover, the exact E method is effective for high values of AC1. We apply two real examples to illustrate the proposed methods. Full article
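For reference, the point estimate of Gwet's AC1 for two raters and two categories can be computed directly from a contingency table (the counts below are hypothetical):

```python
def gwet_ac1(table):
    """Gwet's first-order agreement coefficient for two raters and two
    categories, from a 2x2 table [[a, b], [c, d]] of joint ratings."""
    (a, b), (c, d) = table
    n = a + b + c + d
    pa = (a + d) / n                     # observed agreement
    pi1 = ((a + b) + (a + c)) / (2 * n)  # mean marginal prob. of category 1
    pe = 2 * pi1 * (1 - pi1)             # chance agreement under AC1
    return (pa - pe) / (1 - pe)

ac1 = gwet_ac1([[80, 5], [10, 5]])
```

Unlike Cohen's kappa, the AC1 chance-agreement term stays moderate even for highly skewed marginals, which is the property the paper's high-AC1 scenarios probe.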

23 pages, 1967 KiB  
Article
An Order Reduction Design Framework for Higher-Order Binary Markov Random Fields
by Zhuo Chen, Hongyu Yang and Yanli Liu
Entropy 2023, 25(3), 535; https://doi.org/10.3390/e25030535 - 20 Mar 2023
Viewed by 823
Abstract
The order reduction method is an important approach to optimize higher-order binary Markov random fields (HoMRFs), which are widely used in information theory, machine learning and image analysis. It transforms an HoMRF into an equivalent and easier reduced first-order binary Markov random field (RMRF) by elaborately setting the coefficients and auxiliary variables of RMRF. However, designing order reduction methods is difficult, and no previous study has investigated this design issue. In this paper, we propose an order reduction design framework to study this problem for the first time. Through study, we find that the design difficulty mainly lies in that the coefficients and variables of RMRF must be set simultaneously. Therefore, the proposed framework decomposes the design difficulty into two processes, and each process mainly considers the coefficients or auxiliary variables of RMRF. Some valuable properties are also proven. Based on our framework, a new family of 14 order reduction methods is provided. Experiments, such as synthetic data and image denoising, demonstrate the superiority of our method. Full article
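The flavor of such order reductions can be seen in the classic rule for a higher-order term with a negative coefficient: the cubic monomial is replaced by a first-order expression with one auxiliary binary variable, and minimizing over that variable recovers the original term exactly. A brute-force check of this standard reduction (not one of the paper's 14 new methods):

```python
import itertools

def cubic(a, x1, x2, x3):
    """The original higher-order term a * x1 * x2 * x3."""
    return a * x1 * x2 * x3

def reduced(a, x1, x2, x3):
    """First-order substitute with one auxiliary binary variable w:
    for a < 0,  a*x1*x2*x3 = min_w  a * w * (x1 + x2 + x3 - 2)."""
    return min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))

a = -3.0  # this reduction is valid for negative coefficients
ok = all(cubic(a, *x) == reduced(a, *x)
         for x in itertools.product((0, 1), repeat=3))
```

Positive-coefficient terms need a different substitution, which is exactly why setting the RMRF's coefficients and auxiliary variables simultaneously — the difficulty the paper decomposes — is hard in general.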

38 pages, 363 KiB  
Article
Explicit Expressions for Most Common Entropies
by Saralees Nadarajah and Malick Kebe
Entropy 2023, 25(3), 534; https://doi.org/10.3390/e25030534 - 20 Mar 2023
Viewed by 783
Abstract
Entropies are useful measures of variation. However, explicit expressions for entropies available in the literature are limited. In this paper, we provide a comprehensive collection of explicit expressions for four of the most common entropies for over sixty continuous univariate distributions. Most of the derived expressions are new. The explicit expressions involve known special functions. Full article
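The kind of closed-form expression collected in such tables can be sanity-checked numerically. For instance, the Shannon (differential) entropy of the exponential distribution with rate λ is h = 1 − ln λ:

```python
import math

def exp_entropy_closed_form(lam):
    """Differential entropy of Exp(lam): h = 1 - ln(lam)."""
    return 1.0 - math.log(lam)

def exp_entropy_numeric(lam, steps=100000, upper=60.0):
    """Midpoint-rule evaluation of -∫ f(x) ln f(x) dx
    for the density f(x) = lam * exp(-lam * x)."""
    dx = upper / steps
    h = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        f = lam * math.exp(-lam * x)
        h -= f * math.log(f) * dx
    return h
```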
15 pages, 11032 KiB  
Article
FLoCIC: A Few Lines of Code for Raster Image Compression
by Borut Žalik, Damjan Strnad, Štefan Kohek, Ivana Kolingerová, Andrej Nerat, Niko Lukač, Bogdan Lipuš, Mitja Žalik and David Podgorelec
Entropy 2023, 25(3), 533; https://doi.org/10.3390/e25030533 - 20 Mar 2023
Cited by 1 | Viewed by 1255
Abstract
A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then, interpolative coding, which has not been applied frequently for image compression, is explained briefly. Its simplification is introduced in regard to the original approach. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multi-functional approach. Furthermore, the interpolative coding was moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in the lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple and can be performed in less than 60 lines of programming code for the coder and 60 lines for the decoder, which is demonstrated in the given pseudocodes. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
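The JPEG LS predictor mentioned above is the median edge detector (MED), small enough to quote in full; a, b, and c are the left, upper, and upper-left neighbours of the pixel being predicted:

```python
def med_predict(a, b, c):
    """JPEG LS median edge detector (MED) predictor.

    a: left neighbour, b: upper neighbour, c: upper-left neighbour.
    """
    if c >= max(a, b):
        return min(a, b)   # likely edge: predict the smaller neighbour
    if c <= min(a, b):
        return max(a, b)   # likely edge: predict the larger neighbour
    return a + b - c       # smooth region: planar (gradient) prediction
```

The residuals left after such prediction are what the interpolative coder then compresses, which is why the choice of predictor directly moves the information entropy the abstract compares.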

25 pages, 6256 KiB  
Review
Entropy and Cities: A Bibliographic Analysis towards More Circular and Sustainable Urban Environments
by Daniel R. Rondinel-Oviedo and Naomi Keena
Entropy 2023, 25(3), 532; https://doi.org/10.3390/e25030532 - 19 Mar 2023
Cited by 2 | Viewed by 2466
Abstract
Cities are critical to a sustainable future for our planet; still, the construction and operation of cities rely on intensive resource and energy use and transformation, leading to the generation of waste, effluents, and pollution, representing negative externalities outside and inside the city. Within every process, transformation implies the use of energy and the increase of entropy. In an urban system, the transformation of energy and materials will trigger the creation of entropic landscapes, mainly in the informal city and in unguarded natural landscapes, even hundreds of kilometers away, which generates substantial economic, social, and environmental impacts. In this sense, cities are significant contributors to the environmental crisis. Upstream, degradation of landscapes and ecosystems is frequent. Cities’ externalities and exogenous consumptions are directly linked with entropy and entropic landscapes, which are recognized as pollution (in the air, water, and land) or waste and in the degradation of natural ecosystems and communities. Through a systematic review of existing literature, this paper first outlines briefly how entropy has been applied in different disciplines and then focuses on presenting recent developments of how entropy has been defined, used, and characterized in urban studies concerning sustainability in cities and architecture, and presents a definition of the concept in relation to urban systems and key aspects to consider. Full article
(This article belongs to the Section Entropy Reviews)

23 pages, 693 KiB  
Article
A Hybrid Particle Swarm Optimization Algorithm with Dynamic Adjustment of Inertia Weight Based on a New Feature Selection Method to Optimize SVM Parameters
by Jing Wang, Xingyi Wang, Xiongfei Li and Jiacong Yi
Entropy 2023, 25(3), 531; https://doi.org/10.3390/e25030531 - 19 Mar 2023
Cited by 14 | Viewed by 1870
Abstract
Support vector machine (SVM) is a widely used and effective classifier. Its efficiency and accuracy mainly depend on the exceptional feature subset and optimal parameters. In this paper, a new feature selection method and an improved particle swarm optimization algorithm are proposed to improve the efficiency and the classification accuracy of the SVM. The new feature selection method, named Feature Selection-score (FS-score), performs well on data sets: if a feature makes samples of different classes sparse (well separated) and samples within a class compact, its FS-score value will be larger and its probability of being selected will be greater. An improved particle swarm optimization model with dynamic adjustment of inertia weight (DWPSO-SVM) is also proposed to optimize the parameters of the SVM. By improving the calculation method of the inertia weight of particle swarm optimization (PSO), the inertia weight can decrease nonlinearly as the number of iterations increases. In particular, the introduction of a random function brings diversity to the inertia weight in the later stage of the algorithm and strengthens its global search ability, helping it avoid falling into local extrema. The experiment is performed on the standard UCI data sets whose features are selected by the FS-score method. Experiments demonstrate that our algorithm achieves better classification performance compared with other state-of-the-art algorithms. Full article
(This article belongs to the Special Issue Information Theory and Swarm Optimization in Decision and Control)
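An inertia-weight schedule of the kind described — nonlinear decay plus a random component in the later stage — can be sketched as follows. The paper's exact formula is not reproduced here; the quadratic decay and the late-stage jitter threshold below are illustrative assumptions only:

```python
import random

random.seed(7)

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4, jitter=0.1):
    """Illustrative DWPSO-style inertia weight: nonlinear (quadratic) decay
    from w_max to w_min, plus a small random perturbation in the late
    stage to preserve swarm diversity and help escape local extrema."""
    frac = t / t_max
    w = w_min + (w_max - w_min) * (1.0 - frac) ** 2
    if frac > 0.7:  # late stage: add randomness for diversity
        w += jitter * random.random()
    return w

weights = [inertia_weight(t, 100) for t in range(101)]
```

A large early weight favors global exploration; the decaying weight shifts the swarm toward local refinement, with the random term keeping some exploration alive near convergence.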

9 pages, 544 KiB  
Opinion
What Is Heat? Can Heat Capacities Be Negative?
by Emil Roduner
Entropy 2023, 25(3), 530; https://doi.org/10.3390/e25030530 - 19 Mar 2023
Cited by 1 | Viewed by 1237
Abstract
In the absence of work, the exchange of heat of a sample of matter corresponds to the change of its internal energy, given by the kinetic energy of random translational motion of all its constituent atoms or molecules relative to the center of mass of the sample, plus the excitation of quantum states, such as vibration and rotation, and the energy of electrons in excess of their ground state. If the sample of matter is equilibrated, it is described by Boltzmann’s statistical thermodynamics and characterized by a temperature T. Monotonic motion such as that of the stars of an expanding universe is work against gravity and represents the exchange of kinetic and potential energy, as described by the virial theorem, but not an exchange of heat. Heat and work are two distinct properties of thermodynamic systems. Temperature is defined for the radiative cosmic background and for individual stars, but for the ensemble of moving stars neither temperature, nor pressure, nor heat capacities are properly defined, and the application of thermodynamics is, therefore, not advised. For equilibrated atomic nanoclusters, in contrast, one may talk about negative heat capacities when kinetic energy is transformed into potential energy of expanding bonds. Full article
(This article belongs to the Special Issue Thermodynamics of Matter in Wide Range of Entropies)

20 pages, 7593 KiB  
Article
An Innovative Possibilistic Fingerprint Quality Assessment (PFQA) Filter to Improve the Recognition Rate of a Level-2 AFIS
by Houda Khmila, Imene Khanfir Kallel, Eloi Bossé and Basel Solaiman
Entropy 2023, 25(3), 529; https://doi.org/10.3390/e25030529 - 19 Mar 2023
Cited by 1 | Viewed by 1451
Abstract
In this paper, we propose an innovative approach to improve the performance of an Automatic Fingerprint Identification System (AFIS). The method is based on the design of a Possibilistic Fingerprint Quality Assessment (PFQA) filter where ground truths of fingerprint images of effective and ineffective quality are built by learning. The first approach, QS_I, is based on the AFIS decision for the image without considering its paired image to decide its effectiveness or ineffectiveness. The second approach, QS_PI, is based on the AFIS decision when considering the pair (effective image, ineffective image). The two ground truths (effective/ineffective) are used to design the PFQA filter. PFQA discards the images for which the AFIS does not generate a correct decision. The proposed intervention does not affect how the AFIS works but ensures a selection of the input images, recognizing the most suitable ones to reach the AFIS’s highest recognition rate (RR). The performance of PFQA is evaluated on two experimental databases using two conventional AFIS, and a comparison is made with four current fingerprint image quality assessment (IQA) methods. The results show that an AFIS using PFQA can improve its RR by roughly 10% over an AFIS not using an IQA method. However, compared to other fingerprint IQA methods using the same AFIS, the RR improvement is more modest, in a 5–6% range. Full article
(This article belongs to the Special Issue Selected Featured Papers from Entropy Editorial Board Members)
13 pages, 459 KiB  
Article
Entropic Dynamics in a Theoretical Framework for Biosystems
by Richard L. Summers
Entropy 2023, 25(3), 528; https://doi.org/10.3390/e25030528 - 18 Mar 2023
Cited by 1 | Viewed by 1142
Abstract
Central to an understanding of the physical nature of biosystems is an apprehension of their ability to control entropy dynamics in their environment. To achieve ongoing stability and survival, living systems must adaptively respond to incoming information signals concerning matter and energy perturbations in their biological continuum (biocontinuum). Entropy dynamics for the living system are then determined by the natural drive for reconciliation of these information divergences in the context of the constraints formed by the geometry of the biocontinuum information space. The configuration of this information geometry is determined by the inherent biological structure, processes and adaptive controls that are necessary for the stable functioning of the organism. The trajectory of this adaptive reconciliation process can be described by an information-theoretic formulation of the living system’s procedure for actionable knowledge acquisition that incorporates the axiomatic inference of the Kullback principle of minimum information discrimination (a derivative of Jaynes’ principle of maximal entropy). Utilizing relative information for entropic inference provides for the incorporation of a background of the adaptive constraints in biosystems within the operations of Fisher biologic replicator dynamics. This mathematical expression for entropic dynamics within the biocontinuum may then serve as a theoretical framework for the general analysis of biological phenomena. Full article
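The Kullback principle of minimum information discrimination invoked above minimizes the relative entropy D(p‖q) between a candidate distribution and a prior. A minimal numerical sketch (the distributions are arbitrary illustrative examples, not data from the paper):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p||q) in nats; the quantity minimized
    under the principle of minimum information discrimination."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0  # terms with p_i = 0 contribute nothing
    return float((p[nz] * np.log(p[nz] / q[nz])).sum())

prior = np.array([0.25, 0.25, 0.25, 0.25])       # uniform background
posterior = np.array([0.4, 0.3, 0.2, 0.1])       # state after an information signal
print(kl_divergence(posterior, prior))            # ≈ 0.106 nats
```

The divergence vanishes exactly when the updated state equals the prior, which is the sense in which entropic inference drives reconciliation of information divergences.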
21 pages, 384 KiB  
Article
Application of the Esscher Transform to Pricing Forward Contracts on Energy Markets in a Fuzzy Environment
by Piotr Nowak and Michał Pawłowski
Entropy 2023, 25(3), 527; https://doi.org/10.3390/e25030527 - 18 Mar 2023
Cited by 2 | Viewed by 1025
Abstract
The paper is dedicated to modeling electricity spot prices and pricing forward contracts on energy markets. The underlying dynamics of electricity spot prices is governed by a stochastic mean reverting diffusion with jumps having mixed-exponential distribution. Application of financial mathematics and stochastic methods enabled the derivation of the analytical formula for the forward contract’s price in a crisp case. Since the model parameters’ incertitude is considered, their fuzzy counterparts are introduced. Utilization of fuzzy arithmetic enabled deriving an analytical expression for the futures price and proposing a modified method for decision-making under uncertainty. Finally, numerical examples are analyzed to illustrate our pricing approach and the proposed financial decision-making method. Full article
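As a rough illustration of the spot-price dynamics described above, here is an Euler-scheme sketch of a mean-reverting diffusion with compound-Poisson jumps. For simplicity the jump sizes are plain exponential (a one-component special case of the mixed-exponential law), and all parameter values are invented for illustration:

```python
import numpy as np

def simulate_spot(x0=30.0, mu=30.0, kappa=2.0, sigma=4.0,
                  jump_rate=6.0, jump_scale=5.0, T=1.0, n=1000, seed=0):
    """Euler discretization of
        dX_t = kappa*(mu - X_t) dt + sigma dW_t + dJ_t,
    where J is a compound Poisson process with exponential jump sizes."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        # At most one jump per step; occurs with probability jump_rate*dt.
        jump = rng.exponential(jump_scale) if rng.random() < jump_rate * dt else 0.0
        x[i + 1] = x[i] + kappa * (mu - x[i]) * dt \
                   + sigma * np.sqrt(dt) * rng.normal() + jump
    return x

path = simulate_spot()
print(len(path), round(path.mean(), 1))
```

The upward jumps mimic price spikes; mean reversion pulls the process back toward mu, which is the qualitative behaviour the forward-pricing formula is built on.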
23 pages, 7452 KiB  
Article
An Explicit-Correction-Force Scheme of IB-LBM Based on Interpolated Particle Distribution Function
by Bowen Liu and Weiping Shi
Entropy 2023, 25(3), 526; https://doi.org/10.3390/e25030526 - 17 Mar 2023
Cited by 1 | Viewed by 1186
Abstract
In order to obtain a better numerical simulation method for fluid–structure interaction (FSI), the IB-LBM, which combines the lattice Boltzmann method (LBM) and the immersed boundary method (IBM), has been studied for more than a decade. For this purpose, an explicit correction force scheme of IB-LBM was proposed in this paper. Unlike current IB-LBMs, this paper introduced the particle distribution function into the interpolation process from the fluid grid to the immersed boundary at the mesoscopic level and directly applied the LBM force models to obtain the interface force in a simple form through an explicit process. Then, in order to ensure mass conservation in the local area of the interface, this paper corrected the obtained interface force with a correction matrix, forming the total explicit-correction-force (ECF) scheme of IB-LBM. The results of four numerical tests were used to verify the order of accuracy and effectiveness of the present method. Streamline penetration is limited, and numerical simulations of application-relevant cases with complex boundary conditions, such as movable rigid bodies (free oscillation of a flapping foil) and flexible deformable bodies (free deformation of cylinders), are successful. In summary, we obtained a simple alternative simulation method that achieves good results for engineering reference models with complex boundary problems. Full article
13 pages, 313 KiB  
Article
Skew Constacyclic Codes over a Non-Chain Ring
by Mehmet Emin Köroğlu and Mustafa Sarı
Entropy 2023, 25(3), 525; https://doi.org/10.3390/e25030525 - 17 Mar 2023
Viewed by 1089
Abstract
In this paper, we investigate the algebraic structure of the non-local ring R_q = F_q[v]/⟨v^2+1⟩ and identify the automorphisms of this ring to study the algebraic structure of skew constacyclic codes and their duals over this ring. Furthermore, we give a necessary and sufficient condition for skew constacyclic codes over R_q to be linear complementary dual (LCD). We present some examples of Euclidean LCD codes over R_q and tabulate the parameters of Euclidean LCD codes over finite fields as the Φ-images of these codes over R_q, which are almost maximum distance separable (MDS) and near MDS. Eventually, by making use of Hermitian linear complementary duals of skew constacyclic codes over R_q and the map Φ, we give a class of entanglement-assisted quantum error correcting codes (EAQECCs) with maximal entanglement and tabulate parameters of some EAQECCs with maximal entanglement over finite fields. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
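To make the ring concrete: elements of F_q[v]/⟨v^2+1⟩ can be written a + bv with v^2 = −1, and when q ≡ 1 (mod 4) the polynomial v^2+1 factors over F_q, so the ring has zero divisors and is not local. A small arithmetic sketch (q = 5 is chosen only for illustration; the paper's setting is more general):

```python
def rq_add(x, y, q):
    """Add x=(a,b) and y=(c,d), each representing a+bv in F_q[v]/<v^2+1>."""
    return ((x[0] + y[0]) % q, (x[1] + y[1]) % q)

def rq_mul(x, y, q):
    """Multiply a+bv and c+dv using v^2 = -1:
    (a+bv)(c+dv) = (ac - bd) + (ad + bc)v."""
    a, b = x
    c, d = y
    return ((a * c - b * d) % q, (a * d + b * c) % q)

q = 5  # q ≡ 1 (mod 4), so v^2+1 = (v-2)(v+2) over F_5
print(rq_mul((0, 1), (0, 1), q))  # v*v = -1 ≡ (4, 0) mod 5
zero = rq_mul((3, 1), (2, 1), q)  # (3+v)(2+v) = 6+5v+v^2 = 5+5v ≡ 0 (mod 5)
print(zero)                       # → (0, 0): a zero divisor, so R_5 is not a field
```

The existence of such zero divisors is exactly what distinguishes this non-chain, non-local ring from the field obtained when v^2+1 is irreducible.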
32 pages, 3832 KiB  
Article
Quantile-Adaptive Sufficient Variable Screening by Controlling False Discovery
by Zihao Yuan, Jiaqing Chen, Han Qiu and Yangxin Huang
Entropy 2023, 25(3), 524; https://doi.org/10.3390/e25030524 - 17 Mar 2023
Viewed by 1021
Abstract
Sufficient variable screening rapidly reduces dimensionality with high probability in ultra-high dimensional modeling. To rapidly screen out the null predictors, a quantile-adaptive sufficient variable screening framework is developed by controlling the false discovery. Without any specification of an actual model, we first introduce a compound testing procedure based on the conditionally imputing marginal rank correlation at different quantile levels of response to select active predictors in high dimensionality. The testing statistic can capture sufficient dependence through two paths: one is to control false discovery adaptively and the other is to control the false discovery rate by giving a prespecified threshold. It is computationally efficient and easy to implement. We establish the theoretical properties under mild conditions. Numerical studies including simulation studies and real data analysis contain supporting evidence that the proposal performs reasonably well in practical settings. Full article
(This article belongs to the Special Issue Entropy in Soft Computing and Machine Learning Algorithms II)
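A toy sketch of the marginal screening idea: rank each predictor by its rank correlation with a quantile indicator of the response. This substitutes a plain Spearman-type correlation for the paper's conditionally imputed marginal rank correlation, and the data-generating model and top_k are illustrative:

```python
import numpy as np

def quantile_screen(X, y, tau=0.5, top_k=2):
    """Score each predictor by the absolute correlation between its
    normalized ranks and the indicator I(y <= quantile_tau(y));
    return the indices of the top_k highest-scoring predictors."""
    qv = np.quantile(y, tau)
    ind = (y <= qv).astype(float)
    n, p = X.shape
    scores = np.empty(p)
    for j in range(p):
        r = X[:, j].argsort().argsort() / (n - 1)  # ranks scaled to [0, 1]
        scores[j] = abs(np.corrcoef(r, ind)[0, 1])
    return np.argsort(scores)[::-1][:top_k]

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 10))
y = 2.0 * X[:, 3] + 0.1 * rng.normal(size=n)  # only predictor 3 is active
print(quantile_screen(X, y))                   # predictor 3 ranks first
```

The active predictor dominates the scores, which is the behaviour a sufficient screening procedure then turns into a formal test with false-discovery control.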
26 pages, 50883 KiB  
Article
Remote Sensing Image of The Landsat 8–9 Compressive Sensing via Non-Local Low-Rank Regularization with the Laplace Function
by Guibing Li, Weidong Jin, Jiaqing Miao, Ying Tan, Yingling Li, Weixuan Zhang and Liang Li
Entropy 2023, 25(3), 523; https://doi.org/10.3390/e25030523 - 17 Mar 2023
Viewed by 880
Abstract
Utilizing low-rank prior data in compressed sensing (CS) schemes for Landsat 8–9 remote sensing images (RSIs) has recently received widespread attention. Nevertheless, most CS algorithms focus on the sparsity of an RSI and ignore its low-rank (LR) nature. Therefore, this paper proposes a new CS reconstruction algorithm for Landsat 8–9 remote sensing images based on a non-local optimization framework (NLOF) combined with a non-convex Laplace function (NCLF) used for the low-rank approximation (LRA). Since the developed algorithm is based on an approximate low-rank model of the Laplace function, it can adaptively assign different weights to different singular values. Moreover, exploiting the structural sparsity (SS) and low-rank (LR) nature of the image patches enables the restored image to achieve better CS reconstruction results for Landsat 8–9 RSIs than existing models. In the proposed scheme, first, a CS reconstruction model is formulated using non-local low-rank regularization (NLLRR) and a variational framework. Then, image patch grouping and the Laplace function are used as regularization/penalty terms to constrain the CS reconstruction model. Finally, to effectively solve the rank minimization problem, the alternating direction method of multipliers (ADMM) is used to solve the model. Extensive numerical experiments demonstrate that the non-local variational framework (NLVF) combined with the low-rank approximate regularization (LRAR) method of the non-convex Laplace function (NCLF) obtains better reconstruction results than more advanced image CS reconstruction algorithms. At the same time, the model preserves the details of Landsat 8–9 RSIs and the boundaries of transition areas. Full article
(This article belongs to the Special Issue Information Theory and Nonlinear Signal Processing)
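The adaptive-weighting idea, where a Laplace-type surrogate shrinks small singular values aggressively while leaving large ones almost untouched, can be sketched as a one-step weighted singular-value thresholding. This is a generic illustration of the mechanism, not the paper's ADMM solver, and the weight form and parameters are assumptions:

```python
import numpy as np

def laplace_weighted_svt(M, lam=1.0, eps=0.5):
    """Weighted singular-value thresholding with Laplace-style weights
    w_i = exp(-sigma_i / eps): small singular values get weight near 1
    (strong shrinkage), large ones get weight near 0 (little shrinkage)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = np.exp(-s / eps)
    s_shrunk = np.maximum(s - lam * w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Rank-1 structure plus a small full-rank perturbation (stand-in for noise).
M = np.outer(np.arange(1, 5, dtype=float), np.ones(4)) + 0.01 * np.eye(4)
low_rank = laplace_weighted_svt(M)
print(np.linalg.matrix_rank(low_rank, tol=1e-6))  # → 1: noise directions are zeroed
```

The dominant singular value (≈11) survives essentially intact while the four perturbation-level singular values (≤0.01) are thresholded away, recovering the rank-1 structure.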
13 pages, 9555 KiB  
Article
Prediction of the Number of Cumulative Pulses Based on the Photon Statistical Entropy Evaluation in Photon-Counting LiDAR
by Mingwei Huang, Zijing Zhang, Longzhu Cen, Jiahuan Li, Jiaheng Xie and Yuan Zhao
Entropy 2023, 25(3), 522; https://doi.org/10.3390/e25030522 - 17 Mar 2023
Cited by 1 | Viewed by 1044
Abstract
Photon-counting LiDAR encounters interference from background noise in remote target detection, and statistical detection over the accumulation of multiple pulses is necessary to eliminate the uncertainty of responses from the Geiger-mode avalanche photodiode (Gm-APD). The cumulative number of statistical detections is difficult to select due to the lack of an effective evaluation of the influence of the background noise. In this work, a statistical detection signal evaluation method based on photon statistical entropy (PSE) is proposed by developing the detection process of the Gm-APD as an information transmission model. A prediction model for estimating the number of cumulative pulses required for high-accuracy ranging under background noise is then established. The simulation analysis shows that the proposed PSE is more sensitive to the noise than the signal-to-noise ratio evaluation, and a minimum PSE exists that ensures all range detections with background noise are close to the true range with a low and stable range error. The experiments demonstrate that the prediction model provides a reliable estimate of the number of required cumulative pulses in various noise conditions. With the estimated number of cumulative pulses, when the signal photons are less than 0.1 per pulse, range accuracies of 4.1 cm and 5.3 cm are obtained under background noise of 7.6 MHz and 5.1 MHz, respectively. Full article
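For intuition on why pulse accumulation is needed: under a simple Poisson arrival model (an assumption for illustration, not the paper's full PSE model), a Gm-APD gate fires with probability 1 − exp(−(n_s + n_b)), so a weak signal of about 0.1 photons per pulse triggers a response in fewer than 10% of pulses:

```python
import math

def detection_prob(n_signal, n_noise):
    """Probability that a Gm-APD fires at least once in a gate, assuming
    Poisson photon arrivals with mean n_signal + n_noise primary photons.
    (Illustrative model only; dead time and jitter are ignored.)"""
    return 1.0 - math.exp(-(n_signal + n_noise))

p_sig = detection_prob(0.1, 0.0)
print(round(p_sig, 4))  # → 0.0952: a single pulse rarely fires on signal alone
```

Accumulating many pulses turns these rare firings into a statistically separable histogram peak, which is what the PSE-based prediction model sizes.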
21 pages, 337 KiB  
Article
Analyzing the Effect of Imputation on Classification Performance under MCAR and MAR Missing Mechanisms
by Philip Buczak, Jian-Jia Chen and Markus Pauly
Entropy 2023, 25(3), 521; https://doi.org/10.3390/e25030521 - 17 Mar 2023
Cited by 1 | Viewed by 1163
Abstract
Many datasets in statistical analyses contain missing values. As omitting observations containing missing entries may lead to information loss or greatly reduce the sample size, imputation is usually preferable. However, imputation can also introduce bias and impact the quality and validity of subsequent analysis. Focusing on binary classification problems, we analyzed how missing value imputation under MCAR as well as MAR missingness with different missing patterns affects the predictive performance of subsequent classification. To this end, we compared imputation methods such as several MICE variants, missForest, Hot Deck as well as mean imputation with regard to the classification performance achieved with commonly used classifiers such as Random Forest, Extreme Gradient Boosting, Support Vector Machine and regularized logistic regression. Our simulation results showed that Random Forest based imputation (i.e., MICE Random Forest and missForest) performed particularly well in most scenarios studied. In addition to these two methods, simple mean imputation also proved to be useful, especially when many features (covariates) contained missing values. Full article
(This article belongs to the Special Issue Advances in Information Sciences and Applications)
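A minimal numpy sketch of the trade-off the abstract describes, between listwise deletion (which shrinks the sample) and mean imputation (which keeps it intact) under MCAR missingness. The toy data and missingness rate are invented; the MICE variants, missForest, and Hot Deck methods compared in the paper require dedicated libraries:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 200 samples, 3 features; feature 0 gets ~30% MCAR missingness.
X = rng.normal(loc=[1.0, -2.0, 0.5], scale=1.0, size=(200, 3))
mask = rng.random(200) < 0.3          # MCAR: missingness independent of the values
X_miss = X.copy()
X_miss[mask, 0] = np.nan

# Listwise deletion: drop every row with a missing entry.
X_deleted = X_miss[~np.isnan(X_miss[:, 0])]

# Mean imputation: fill missing entries with the observed column mean.
col_mean = np.nanmean(X_miss[:, 0])
X_imputed = X_miss.copy()
X_imputed[np.isnan(X_imputed[:, 0]), 0] = col_mean

print(X_deleted.shape[0], X_imputed.shape[0])  # fewer rows vs. all 200 rows
```

Mean imputation preserves the sample size and the column mean but flattens the column's variance, which is one of the biases the simulation study quantifies for downstream classification.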
26 pages, 2581 KiB  
Article
RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
by Xiaoli Du, Hongwei Zeng, Shengbo Chen and Zhou Lei
Entropy 2023, 25(3), 520; https://doi.org/10.3390/e25030520 - 17 Mar 2023
Viewed by 1337
Abstract
Recurrent Neural Networks (RNNs) are applied in safety-critical fields such as autonomous driving, aircraft collision detection, and smart credit. They are highly susceptible to input perturbations, yet little research on RNN-oriented testing techniques has been conducted, leaving a threat to a large number of sequential application domains. To address these gaps, improve the test adequacy of RNNs, find more defects, and improve the performance of RNN models and their robustness to input perturbations, we propose a test coverage metric for the underlying structure of RNNs, which is used to guide the generation of test inputs. Although coverage metrics have been proposed for RNNs, such as the hidden state coverage in RNN-Test, they ignore the fact that the underlying structure of an RNN is still a fully connected neural network, but with an additional "delayer" that records the network state at the time of data input. We use contributions, i.e., the combination of the outputs of neurons and the weights they emit, as the minimum computational unit of RNNs to explore the finer-grained logical structure inside the recurrent cells. Compared to existing coverage metrics, our approach covers the decision mechanism of RNNs in more detail and is more likely to generate adversarial samples and discover flaws in the model. In this paper, we redefine the contribution coverage metric applicable to Stacked LSTMs and Stacked GRUs by considering the joint effect of neurons and weights in the underlying structure of the network. We propose a new coverage metric, RNNCon, which can be used to guide the generation of adversarial test inputs, and we design and implement a test framework prototype, RNNCon-Test. Two datasets, four LSTM models, and four GRU models are used to verify the effectiveness of RNNCon-Test. Compared to the current state-of-the-art study RNN-Test, RNNCon can cover deeper decision logic of RNNs. RNNCon-Test is not only effective in identifying defects in Deep Learning (DL) systems but also in improving the performance of the model when the adversarial inputs it generates are filtered and added to the training set for retraining. Even when the accuracy of the model is already high, RNNCon-Test is still able to improve it by up to 0.45%. Full article
(This article belongs to the Special Issue Information Security and Privacy: From IoT to IoV)
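The "contribution" unit, a neuron's output multiplied by the weight it emits, can be sketched for a single layer as follows. This is a simplified stand-in for RNNCon's exact definition (the threshold-based coverage criterion here is an assumption for illustration):

```python
import numpy as np

def contributions(activations, weights):
    """Per-connection contributions: the product of each neuron's output
    with the weight it emits. activations has shape (n_in,), weights has
    shape (n_in, n_out); entry (i, j) is neuron i's contribution to neuron j."""
    return activations[:, None] * weights

def contribution_coverage(contribs, threshold=0.0):
    """Fraction of connections whose contribution exceeds a threshold;
    a toy coverage criterion over the contribution matrix."""
    return float((contribs > threshold).mean())

acts = np.array([0.5, -1.0, 0.0])
w = np.array([[1.0, -2.0],
              [0.5,  0.5],
              [3.0,  1.0]])
c = contributions(acts, w)
print(c)
print(contribution_coverage(c))  # 1/6 of connections exceed the threshold
```

Test generation then tries to drive previously uncovered contributions above the threshold, exercising finer-grained decision logic than hidden-state coverage alone.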
19 pages, 1288 KiB  
Article
On the Secure Performance of Intelligent Reflecting Surface-Assisted HARQ Systems
by Yue Wu, Kuanlin Mu, Kaiyu Duan, Shishu Yin and Hongwen Yang
Entropy 2023, 25(3), 519; https://doi.org/10.3390/e25030519 - 17 Mar 2023
Cited by 3 | Viewed by 1065
Abstract
This paper analyzes the physical layer security (PLS) performance of hybrid automatic repeat request (HARQ) systems with the assistance of an intelligent reflecting surface (IRS) and aims to reveal the primary factors that enhance PLS. First, closed-form expressions for the connection outage probability (COP) and secrecy outage probability (SOP) in HARQ with chase combining (HARQ-CC) are acquired using the generalized-K (KG) distribution. Then, these two critical metrics are derived for HARQ with incremental redundancy (HARQ-IR), resorting to the mixture gamma (MG) distribution and the Mellin transform. Diversity and coding gains are also addressed through an asymptotic analysis of the COP and SOP. Finally, an evaluation of the numerical results demonstrates that a greater gain in the main channel and the wiretap channel can be produced by increasing the number of meta-surfaces rather than increasing the maximum transmission number, except in the higher signal-to-noise ratio (SNR) region of HARQ-IR, where the latter is preferred. This finding provides significant guidance for the joint configuration of IRS and HARQ to achieve secure communication. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
13 pages, 2617 KiB  
Article
Enhanced Efficiency at Maximum Power in a Fock–Darwin Model Quantum Dot Engine
by Francisco J. Peña, Nathan M. Myers, Daniel Órdenes, Francisco Albarrán-Arriagada and Patricio Vargas
Entropy 2023, 25(3), 518; https://doi.org/10.3390/e25030518 - 17 Mar 2023
Cited by 4 | Viewed by 1423
Abstract
We study the performance of an endoreversible magnetic Otto cycle with a working substance composed of a single quantum dot described using the well-known Fock–Darwin model. We find that tuning the intensity of the parabolic trap (geometrical confinement) impacts the proposed cycle’s performance, quantified by the power, work, efficiency, and parameter region where the cycle operates as an engine. We demonstrate that a parameter region exists where the efficiency at maximum output power exceeds the Curzon–Ahlborn efficiency, the efficiency at maximum power achieved by a classical working substance. Full article
(This article belongs to the Special Issue Quantum Control and Quantum Computing)
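The Curzon–Ahlborn benchmark referenced above has the well-known closed form η_CA = 1 − √(T_c/T_h), sitting below the Carnot bound η_C = 1 − T_c/T_h. A quick numerical comparison (temperatures chosen arbitrarily):

```python
from math import sqrt

def eta_carnot(t_cold, t_hot):
    """Carnot efficiency: the reversible upper bound, 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def eta_curzon_ahlborn(t_cold, t_hot):
    """Curzon-Ahlborn efficiency at maximum power for a classical
    endoreversible engine: 1 - sqrt(Tc/Th)."""
    return 1.0 - sqrt(t_cold / t_hot)

tc, th = 300.0, 1200.0
print(eta_carnot(tc, th))           # → 0.75
print(eta_curzon_ahlborn(tc, th))   # → 0.5
```

The paper's result is that the Fock–Darwin quantum dot working substance can exceed this 1 − √(T_c/T_h) value at maximum power in part of its parameter region.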
14 pages, 5814 KiB  
Article
Necessary Condition of Self-Organisation in Nonextensive Open Systems
by Ozgur Afsar and Ugur Tirnakli
Entropy 2023, 25(3), 517; https://doi.org/10.3390/e25030517 - 17 Mar 2023
Viewed by 1063
Abstract
In this paper, we focus on evolution from an equilibrium state in a power law form by means of q-exponentials to an arbitrary one. Introducing new q-Gibbsian equalities as the necessary condition of self-organization in nonextensive open systems, we theoretically show how to derive the connections between q-renormalized entropies (ΔS̃_q) and q-relative entropies (KL_q) in both Bregman and Csiszár forms, after we clearly explain the connection between the renormalized entropy of Klimontovich and the relative entropy of Kullback-Leibler without using any predefined effective Hamiltonian. This function, in our treatment, spontaneously comes directly from the calculations. We also explain the difference between using ordinary and normalized q-expectations in mean energy calculations of the states. To verify the results numerically, we use a toy model of complexity, namely the logistic map defined as X_{t+1} = 1 − aX_t^2, where a ∈ [0, 2] is the map parameter. We measure the level of self-organization using two distinct forms of the q-renormalized entropy through period doublings and chaotic band mergings of the map as the number of periods/chaotic bands increases/decreases. We associate the behaviour of the q-renormalized entropies with the emergence/disappearance of complex structures in the phase space as the control parameter of the map changes. Similar to Shiner-Davison-Landsberg (SDL) complexity, we categorize the tendencies of the q-renormalized entropies for the evaluation of the map over the whole control parameter space. Moreover, we show that any evolution between two states possesses a unique q = q* value (not a range of q values) for which the q-Gibbsian equalities hold, and the values are the same for the Bregman and Csiszár forms. Interestingly, if the evolution is from a = 0 to a = a_c ≈ 1.4011, this unique q* value is found to be q* ≈ 0.2445, which is the same as the value of q_sensitivity given in the literature. Full article
(This article belongs to the Special Issue Non-additive Entropy Formulas: Motivation and Derivations)
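The map and the entropy bookkeeping can be sketched directly. Here the Shannon entropy of a binned orbit stands in for the q-renormalized entropies (a deliberate simplification; bin count, transient length, and sample size are arbitrary choices): it is low in the period-2 regime and high in the chaotic regime:

```python
import numpy as np

def logistic_orbit(a, x0=0.1, n_transient=1000, n_keep=256):
    """Iterate X_{t+1} = 1 - a*X_t^2, discard transients, return the orbit."""
    x = x0
    for _ in range(n_transient):
        x = 1.0 - a * x * x
    orbit = np.empty(n_keep)
    for i in range(n_keep):
        x = 1.0 - a * x * x
        orbit[i] = x
    return orbit

def orbit_entropy(orbit, bins=64):
    """Shannon entropy (nats) of the orbit's histogram over [-1, 1]."""
    counts, _ = np.histogram(orbit, bins=bins, range=(-1, 1))
    p = counts[counts > 0] / len(orbit)
    return float(-(p * np.log(p)).sum())

print(orbit_entropy(logistic_orbit(1.2)))  # period-2 regime: ≈ ln 2 ≈ 0.693
print(orbit_entropy(logistic_orbit(2.0)))  # fully chaotic regime: much larger
```

Tracking such an entropy as a sweeps [0, 2] reproduces the qualitative rises and drops at period doublings and band mergings that the q-renormalized entropies quantify more precisely.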