Entropy, Volume 25, Issue 12 (December 2023) – 106 articles

Cover Story (view full-size image): This month's cover focuses on Alsaghir and Bahk, who propose using all-solid-state thermoelectric (TE) generators to convert medium-grade waste heat from the exhaust duct of a gas turbine power plant into electricity, thus improving the overall system efficiency. Detailed modeling and design optimization strategies are provided for such a TE system under realistic operating conditions, and an exergy analysis of the TE system is conducted. The proposed TE system can achieve a power output of ~10 kW with an efficiency of ~5% from a gas turbine power plant's exhaust duct, 2 m long with a 0.5 × 0.5 m² cross-section.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
47 pages, 3301 KiB  
Article
Scaling Exponents of Time Series Data: A Machine Learning Approach
by Sebastian Raubitzek, Luiza Corpaci, Rebecca Hofer and Kevin Mallinger
Entropy 2023, 25(12), 1671; https://doi.org/10.3390/e25121671 - 18 Dec 2023
Viewed by 1035
Abstract
In this study, we present a novel approach to estimating the Hurst exponent of time series data using a variety of machine learning algorithms. The Hurst exponent is a crucial parameter in characterizing long-range dependence in time series, and traditional methods such as Rescaled Range (R/S) analysis and Detrended Fluctuation Analysis (DFA) have been widely used for its estimation. However, these methods have certain limitations, which we sought to address by modifying the R/S approach to distinguish between fractional Lévy and fractional Brownian motion, and by demonstrating the inadequacy of DFA and similar methods for data that resembles fractional Lévy motion. This inspired us to utilize machine learning techniques to improve the estimation process. In an unprecedented step, we train various machine learning models, including LightGBM, MLP, and AdaBoost, on synthetic data generated from random walks, namely fractional Brownian motion and fractional Lévy motion, where the ground truth Hurst exponent is known. This means that we can initialize and create these stochastic processes with a scaling Hurst/scaling exponent, which is then used as the ground truth for training. Furthermore, we perform the continuous estimation of the scaling exponent directly from the time series, without resorting to the calculation of the power spectrum or other sophisticated preprocessing steps, as done in past approaches. Our experiments reveal that the machine learning-based estimators outperform traditional R/S analysis and DFA methods in estimating the Hurst exponent, particularly for data akin to fractional Lévy motion. Validating our approach on real-world financial data, we observe a divergence between the estimated Hurst/scaling exponents and results reported in the literature. Nevertheless, the confirmation provided by known ground truths reinforces the superiority of our approach in terms of accuracy. 
This work highlights the potential of machine learning algorithms for accurately estimating the Hurst exponent, paving new paths for time series analysis. By marrying traditional finance methods with the capabilities of machine learning, our study provides a novel contribution towards the future of time series data analysis. Full article
(This article belongs to the Section Signal and Data Analysis)
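The traditional rescaled-range (R/S) analysis that the machine learning estimators are benchmarked against can be sketched in pure Python. This is a minimal, generic version; the window scheme, the absence of small-sample bias corrections, and the fitting range are simplifications, not the authors' exact procedure:

```python
import math, random

def hurst_rs(ts, min_window=8):
    """Estimate the Hurst exponent via classical rescaled-range (R/S) analysis."""
    n = len(ts)
    xs, ys = [], []
    w = min_window
    while w <= n // 2:
        rs_vals = []
        for start in range(0, n - w + 1, w):          # non-overlapping windows
            chunk = ts[start:start + w]
            mean = sum(chunk) / w
            dev = [v - mean for v in chunk]
            z, acc = [], 0.0
            for d in dev:                              # cumulative deviate series
                acc += d
                z.append(acc)
            r = max(z) - min(z)                        # range of cumulative deviations
            s = math.sqrt(sum(d * d for d in dev) / w) # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            xs.append(math.log(w))
            ys.append(math.log(sum(rs_vals) / len(rs_vals)))
        w *= 2
    # Hurst exponent = slope of log(R/S) against log(window size)
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(1)
series = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_rs(series)  # uncorrelated noise: the slope should come out near H ≈ 0.5
```

For uncorrelated noise the fitted slope should land near 0.5; persistent series push it toward 1 and anti-persistent series toward 0, which is the behavior the learned estimators are trained to reproduce more accurately.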
9 pages, 285 KiB  
Article
How Quantum Mechanics Requires Non-Additive Measures
by Gabriele Carcassi and Christine A. Aidala
Entropy 2023, 25(12), 1670; https://doi.org/10.3390/e25121670 - 18 Dec 2023
Viewed by 1125
Abstract
Measure theory is used in physics, not just to capture classical probability, but also to quantify the number of states. In previous works, we found that state quantification plays a foundational role in classical mechanics, and, therefore, we set ourselves to construct the quantum equivalent of the Liouville measure. Unlike the classical counterpart, this quantized measure is non-additive and has a unitary lower bound (i.e., no set of states can have less than one state). Conversely, requiring that state quantification is finite for finite continuous regions and that each state counts as one already implies non-additivity, which in turn implies the failure of classical theory. In this article we show these preliminary results and outline a new line of inquiry that may provide a different insight into the foundations of quantum theory. Additionally, this new approach may prove to be useful to those interested in a quantized theory of space-time, as we believe this requires a quantized measure for the quantification of the independent degrees of freedom. Full article
12 pages, 287 KiB  
Article
Minimal Linear Codes Constructed from Sunflowers
by Xia Wu and Wei Lu
Entropy 2023, 25(12), 1669; https://doi.org/10.3390/e25121669 - 18 Dec 2023
Viewed by 957
Abstract
Sunflowers in coding theory are a class of important subspace codes and can be used to construct linear codes. In this paper, we study the minimality of linear codes over Fq constructed from sunflowers of size s in all cases. For any sunflower, the corresponding linear code is minimal if s ≥ q + 1, and not minimal if 2 ≤ s ≤ 3 ≤ q. In the case where 3 < s ≤ q, for some sunflowers the corresponding linear codes are minimal, whereas for some other sunflowers they are not. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
24 pages, 555 KiB  
Article
Capacity, Collision Avoidance and Shopping Rate under a Social Distancing Regime
by Haitian Zhong and David Sankoff
Entropy 2023, 25(12), 1668; https://doi.org/10.3390/e25121668 - 17 Dec 2023
Viewed by 886
Abstract
Capacity restrictions in stores, maintained by mechanisms like spacing customer intake, became familiar features of retailing in the time of the pandemic. Shopping rates in a crowded store under a social distancing regime are prone to considerable slowdown. Inspired by the random particle collision concepts of statistical mechanics, we introduce a dynamical model of the evolution of the shopping rate as a function of a given customer intake rate. The slowdown of each individual customer is incorporated as an additive term to the baseline value of the shopping time, proportionally to the number of other customers in the store. We determine analytically and via simulation the trajectory of the model as it approaches a Little’s law equilibrium and identify the point beyond which equilibrium cannot be achieved. By relating the customer shopping rate to the slowdown compared with the baseline, we can calculate the optimal intake rate leading to maximum equilibrium spending. This turns out to be the maximum rate compatible with equilibrium. The slowdown due to the largest possible number of shoppers is more than compensated for by the increased volume of shopping. This macroscopic model is validated by simulation experiments in which avoidance interactions between pairs of shoppers are responsible for shopping delays. Full article
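The Little's-law equilibrium described above admits a closed-form toy version: if each shopper's dwell time is t0 plus c per other shopper present, then W = t0 + c(L − 1) together with Little's law L = λW gives L = λ(t0 − c)/(1 − λc). This sketch (variable names hypothetical, not the authors' notation) also exposes the critical intake rate λ = 1/c beyond which no equilibrium exists:

```python
def equilibrium_occupancy(lam, t0, c):
    """Little's-law fixed point for a store where each shopper's dwell time
    is t0 plus c per other shopper present.
    W = t0 + c*(L - 1) and L = lam * W  =>  L = lam*(t0 - c) / (1 - lam*c)."""
    if lam * c >= 1:
        return None  # beyond this intake rate, occupancy grows without bound
    return lam * (t0 - c) / (1 - lam * c)

# Example: intake 0.1 customers/s, 300 s baseline shop, 5 s slowdown per other shopper.
L = equilibrium_occupancy(0.1, 300, 5)
```

As the intake rate λ approaches 1/c the equilibrium occupancy diverges, which mirrors the paper's point that the maximum equilibrium spending occurs at the largest intake rate still compatible with equilibrium.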
16 pages, 562 KiB  
Article
Vector Approximate Message Passing Based OFDM-IM Detection for Underwater Acoustic Communications
by Xiao Feng, Feng Tian, Mingzhang Zhou, Haixin Sun and Zeyad A. H. Qasem
Entropy 2023, 25(12), 1667; https://doi.org/10.3390/e25121667 - 17 Dec 2023
Viewed by 874
Abstract
Orthogonal frequency division multiplexing with index modulation (OFDM-IM) has great potential for the implementation of high spectral-efficiency underwater acoustic (UWA) communications. However, general receivers consisting of the optimal maximum likelihood detection suffer from high computational load, which prohibits real-time data transmissions in underwater scenarios. In this paper, we propose a detection method based on a vector approximate message passing (VAMP) algorithm for UWA OFDM-IM communications. Firstly, a VAMP framework with a non-loopy factor graph for index detection is formulated. Secondly, by utilizing the sparsity inherently existing in OFDM-IM symbols, a novel shrinkage function is derived based on the minimum mean square error criterion, which guarantees better posterior estimation. To reduce the errors from estimated non-existing indices, one trick is utilized to search the elements from the look-up table with the minimal Euclidean distance for the replacement of erroneously estimated indices. Experiments verify the advantages of the proposed detector in terms of low complexity, robustness and effectiveness compared with the state-of-the-art benchmarks. Full article
(This article belongs to the Section Multidisciplinary Applications)
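The look-up table mentioned in the abstract is, in standard OFDM-IM, a mapping from index bits to active-subcarrier patterns. A minimal sketch of building such a table for a generic configuration (not the paper's specific subblock size or the VAMP replacement trick):

```python
import math
from itertools import combinations

def im_lookup(n, k):
    """Build an OFDM-IM look-up table: each entry selects which k of the n
    subcarriers in a subblock are active."""
    patterns = list(combinations(range(n), k))
    index_bits = int(math.floor(math.log2(len(patterns))))
    # Only a power-of-two number of patterns is addressable by the index bits.
    return patterns[:2 ** index_bits], index_bits

table, index_bits = im_lookup(4, 2)  # 4 subcarriers, 2 active per subblock
```

With n = 4 and k = 2 there are C(4, 2) = 6 patterns, of which 2² = 4 are kept, so each subblock carries 2 index bits on top of the constellation bits of the active subcarriers.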
24 pages, 637 KiB  
Article
Ideal Agent System with Triplet States: Model Parameter Identification of Agent–Field Interaction
by Christoph J. Börner, Ingo Hoffmann and John H. Stiebel
Entropy 2023, 25(12), 1666; https://doi.org/10.3390/e25121666 - 16 Dec 2023
Cited by 1 | Viewed by 907
Abstract
On the capital market, price movements of stock corporations can be observed independent of overall market developments as a result of company-specific news, which suggests the occurrence of a sudden risk event. In recent years, numerous concepts from statistical physics have been transferred to econometrics to model these effects and other issues, e.g., in socioeconomics. Like other studies, we extend the approaches based on the “buy” and “sell” positions of agents (investors’ stance) with a third “hold” position. We develop the corresponding theory within the framework of the microcanonical and canonical ensembles for an ideal agent system and apply it to a capital market example. We thereby design a procedure to estimate the required model parameters from time series on the capital market. The aim is the appropriate modeling and the one-step-ahead assessment of the effect of a sudden risk event. From a one-step-ahead performance comparison with selected benchmark approaches, we infer that the model is well-specified and the model parameters are well determined. Full article
(This article belongs to the Special Issue Complexity in Economics and Finance: New Directions and Challenges)
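The triplet of agent states ("buy", "hold", "sell") can be illustrated with generic canonical-ensemble occupation probabilities; the energy values below are placeholders, and the authors' actual parameter-identification procedure from capital-market time series is far more involved:

```python
import math

def state_probs(energies, beta):
    """Canonical-ensemble occupation probabilities for a small set of
    agent states (e.g. buy / hold / sell), given their energies."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)                      # partition function
    return [w / z for w in weights]

# Placeholder energies: 'buy' lowest, 'sell' highest; beta is an inverse temperature.
p = state_probs([0.0, 1.0, 2.0], 0.5)
u = state_probs([0.0, 0.0, 0.0], 1.0)    # degenerate states are equally likely
```

Lower-energy states are occupied with higher probability, and as beta grows the population concentrates in the lowest-energy state; estimating such field-interaction parameters from observed investor stances is the model-identification task of the paper.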
8 pages, 286 KiB  
Article
Ising Ladder with Four-Spin Plaquette Interaction in a Transverse Magnetic Field
by Maria Eugenia S. Nunes, Francisco Welington S. Lima and Joao A. Plascak
Entropy 2023, 25(12), 1665; https://doi.org/10.3390/e25121665 - 16 Dec 2023
Cited by 1 | Viewed by 813
Abstract
The spin-1/2 quantum transverse Ising model, defined on a ladder structure, with nearest-neighbor and four-spin interaction on a plaquette, was studied by using exact diagonalization on finite ladders together with finite-size-scaling procedures. The quantum phase transition between the ferromagnetic and paramagnetic phases has then been obtained by extrapolating the data to the thermodynamic limit. The critical transverse field decreases as the antiferromagnetic four-spin interaction increases and reaches a multicritical point. However, the exact diagonalization approach was not able to capture the essence of the dimer phase beyond the multicritical transition. Full article
(This article belongs to the Special Issue Ising Model: Recent Developments and Exotic Applications II)
16 pages, 1025 KiB  
Article
Dynamic Feature Extraction-Based Quadratic Discriminant Analysis for Industrial Process Fault Classification and Diagnosis
by Hanqi Li, Mingxing Jia and Zhizhong Mao
Entropy 2023, 25(12), 1664; https://doi.org/10.3390/e25121664 - 16 Dec 2023
Viewed by 1077
Abstract
This paper introduces a novel method for enhancing fault classification and diagnosis in dynamic nonlinear processes. The method focuses on dynamic feature extraction within multivariate time series data and utilizes dynamic reconstruction errors to augment the feature set. A fault classification procedure is then developed, using the weighted maximum scatter difference (WMSD) dimensionality reduction criterion and quadratic discriminant analysis (QDA) classifier. This method addresses the challenge of high-dimensional, sample-limited fault classification, offering early diagnosis capabilities for online samples with smaller amplitudes than the training set. Validation is conducted using a cold rolling mill simulation model, with performance compared to classical methods like linear discriminant analysis (LDA) and kernel Fisher discriminant analysis (KFD). The results demonstrate the superiority of the proposed method for reliable industrial process monitoring and fault diagnosis. Full article
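The QDA classifier at the core of the procedure assigns each sample to the class with the largest Gaussian log-discriminant. A generic sketch on synthetic data (no WMSD dimensionality reduction or dynamic feature extraction, which are the paper's contributions):

```python
import numpy as np

def fit_qda(X, y):
    """Fit per-class mean, covariance, and prior for quadratic discriminant analysis."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False), len(Xc) / len(X))
    return params

def predict_qda(params, X):
    scores = []
    for c, (mu, cov, prior) in params.items():
        inv = np.linalg.inv(cov)
        diff = X - mu
        # Log of the Gaussian class-conditional density (up to a constant) plus log prior.
        g = (-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
             - 0.5 * np.log(np.linalg.det(cov)) + np.log(prior))
        scores.append(g)
    classes = list(params.keys())
    return np.array([classes[i] for i in np.argmax(scores, axis=0)])

rng = np.random.default_rng(0)
X0 = rng.normal(0, 1.0, size=(200, 2))
X1 = rng.normal(4, 2.0, size=(200, 2))   # different mean AND covariance scale
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
acc = (predict_qda(fit_qda(X, y), X) == y).mean()
```

Unlike LDA, QDA keeps a separate covariance per class, which is what lets it separate fault classes whose amplitudes, not just means, differ.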
15 pages, 752 KiB  
Article
Tunneling between Multiple Histories as a Solution to the Information Loss Paradox
by Pisin Chen, Misao Sasaki, Dong-han Yeom and Junggi Yoon
Entropy 2023, 25(12), 1663; https://doi.org/10.3390/e25121663 - 15 Dec 2023
Cited by 1 | Viewed by 771
Abstract
The information loss paradox associated with black hole Hawking evaporation is an unresolved problem in modern theoretical physics. In a recent brief essay, we revisited the evolution of the black hole entanglement entropy via the Euclidean path integral (EPI) of the quantum state and allowed for the branching of semi-classical histories along the Lorentzian evolution. We posited that there exist at least two histories that contribute to the EPI, where one is an information-losing history and the other is an information-preserving one. At early times, the former dominates the EPI, while at late times, the latter becomes dominant. By doing so, we recovered the essence of the Page curve, and thus unitarity, albeit with the turning point, i.e., the Page time, shifted considerably toward later times. In this full-length paper, we fill in the details of our arguments and calculations to strengthen our notion. One implication of this modified Page curve is that the entropy bound may thus be violated. We comment on the similarity and difference between our approach and that of the replica wormholes and the islands' conjectures. Full article
(This article belongs to the Special Issue The Black Hole Information Problem)
20 pages, 4203 KiB  
Article
Dynamic Risk Assessment of Voltage Violation in Distribution Networks with Distributed Generation
by Wei Hu, Fan Yang, Yu Shen, Zhichun Yang, Hechong Chen and Yang Lei
Entropy 2023, 25(12), 1662; https://doi.org/10.3390/e25121662 - 15 Dec 2023
Viewed by 721
Abstract
In response to the growing demand for economic and social development, there has been a significant increase in the integration of distributed generation (DG) into distribution networks. This paper proposes a dynamic risk assessment method for voltage violations in distribution networks with DG. Firstly, considering the characteristics of random variables such as load and DG, a probability density function estimation method based on boundary kernel density estimation is proposed. This method accurately models the probability of random variables under different time and external environmental conditions, such as wind speed and global horizontal radiation. Secondly, to address the issue of correlated DG in the same region, an independent transformation method based on the Rosenblatt inverse transform is proposed, which enhances the accuracy of probabilistic load flow. Thirdly, a voltage violation severity index based on the utility function is proposed. This index, in combination with probabilistic load flow results, facilitates the quantitative assessment of voltage violation risks. Finally, the accuracy of the proposed method is verified on the IEEE-33 system. Full article
(This article belongs to the Topic Research Frontier in Renewable Energy Systems)
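Kernel density estimation, which the paper extends with boundary corrections, has a compact plain-Gaussian form. This sketch omits the boundary kernel and the bandwidth selection that the proposed method adds, so it is only the baseline the paper improves on:

```python
import math

def gaussian_kde_pdf(x, samples, bandwidth):
    """Plain Gaussian kernel density estimate evaluated at a point x."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

# Toy samples (e.g. normalized load or DG output observations).
samples = [-1.0, -0.5, 0.0, 0.5, 1.0]
p0 = gaussian_kde_pdf(0.0, samples, 0.5)   # density near the center of the data
```

Near a physical boundary (e.g. wind speed or radiation cannot be negative) this plain estimator leaks probability mass past the boundary, which is exactly the defect the boundary kernel density estimation in the paper corrects.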
3 pages, 170 KiB  
Editorial
Recent Advances in Statistical Theory and Applications
by Augustine Wong and Xiaoping Shi
Entropy 2023, 25(12), 1661; https://doi.org/10.3390/e25121661 - 15 Dec 2023
Viewed by 782
Abstract
Complex data pose unique challenges for data processing [...] Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
16 pages, 1404 KiB  
Article
Criticality Analysis: Bio-Inspired Nonlinear Data Representation
by Tjeerd V. olde Scheper
Entropy 2023, 25(12), 1660; https://doi.org/10.3390/e25121660 - 14 Dec 2023
Cited by 1 | Viewed by 1088
Abstract
The representation of arbitrary data in a biological system is one of the most elusive elements of biological information processing. The often logarithmic nature of information in amplitude and frequency presented to biosystems prevents simple encapsulation of the information contained in the input. Criticality Analysis (CA) is a bio-inspired method of information representation within a controlled Self-Organised Critical system that allows scale-free representation. This is based on the concept of a reservoir of dynamic behaviour in which self-similar data will create dynamic nonlinear representations. This unique projection of data preserves the similarity of data within a multidimensional neighbourhood. The input can be reduced dimensionally to a projection output that retains the features of the overall data, yet has a much simpler dynamic response. The method depends only on the Rate Control of Chaos applied to the underlying controlled models, which allows the encoding of arbitrary data and promises optimal encoding of data given biologically relevant networks of oscillators. The CA method allows for a biologically relevant encoding mechanism of arbitrary input to biosystems, creating a suitable model for information processing in varying complexity of organisms and scale-free data representation for machine learning. Full article
24 pages, 1977 KiB  
Article
Semi-Supervised Variational Autoencoders for Out-of-Distribution Generation
by Frantzeska Lavda and Alexandros Kalousis
Entropy 2023, 25(12), 1659; https://doi.org/10.3390/e25121659 - 14 Dec 2023
Viewed by 937
Abstract
Humans are able to quickly adapt to new situations, learn effectively with limited data, and create unique combinations of basic concepts. In contrast, generalizing out-of-distribution (OOD) data and achieving combinatorial generalizations are fundamental challenges for machine learning models. Moreover, obtaining high-quality labeled examples can be very time-consuming and expensive, particularly when specialized skills are required for labeling. To address these issues, we propose BtVAE, a method that utilizes conditional VAE models to achieve combinatorial generalization in certain scenarios and consequently to generate out-of-distribution (OOD) data in a semi-supervised manner. Unlike previous approaches that use new factors of variation during testing, our method uses only existing attributes from the training data but in ways that were not seen during training (e.g., small objects of a specific shape during training and large objects of the same shape during testing). Full article
(This article belongs to the Special Issue Deep Generative Modeling: Theory and Applications)
17 pages, 10190 KiB  
Article
Entangled-Based Quantum Wavelength-Division-Multiplexing and Multiple-Access Networks
by Marzieh Bathaee and Jawad A. Salehi
Entropy 2023, 25(12), 1658; https://doi.org/10.3390/e25121658 - 14 Dec 2023
Viewed by 937
Abstract
This paper investigates the mathematical model of the quantum wavelength-division-multiplexing (WDM) network based on the entanglement distribution with the least required wavelengths and passive devices. By adequately utilizing wavelength multiplexers, demultiplexers, and star couplers, N wavelengths are enough to distribute the entanglement among each pair of N users. Moreover, the number of devices employed is reduced by substituting a waveguide grating router for multiplexers and demultiplexers. Furthermore, this study examines implementing the BBM92 quantum key distribution in an entangled-based quantum WDM network. The proposed scheme in this paper may be applied to potential applications such as teleportation in entangled-based quantum WDM networks. Full article
(This article belongs to the Special Issue Quantum Communications Networks: Trends and Challenges)
20 pages, 3901 KiB  
Article
Auto-Encoding Generative Adversarial Networks towards Mode Collapse Reduction and Feature Representation Enhancement
by Yang Zou, Yuxuan Wang and Xiaoxiang Lu
Entropy 2023, 25(12), 1657; https://doi.org/10.3390/e25121657 - 13 Dec 2023
Cited by 1 | Viewed by 883
Abstract
Generative Adversarial Nets (GANs) are a kind of transformative deep learning framework that has been frequently applied to a large variety of applications related to the processing of images, video, speech, and text. However, GANs still suffer from drawbacks such as mode collapse and training instability. To address these challenges, this paper proposes an Auto-Encoding GAN, which is composed of a set of generators, a discriminator, an encoder, and a decoder. The set of generators is responsible for learning diverse modes, and the discriminator is used to distinguish between real samples and generated ones. The encoder maps generated and real samples to the embedding space to encode distinguishable features, and the decoder determines from which generator the generated samples come and from which mode the real samples come. They are jointly optimized in training to enhance the feature representation. Moreover, a clustering algorithm is employed to perceive the distribution of real and generated samples, and an algorithm for cluster center matching is accordingly constructed to maintain the consistency of the distribution, thus preventing multiple generators from covering a certain mode. Extensive experiments are conducted on two classes of datasets, and the results visually and quantitatively demonstrate the preferable capability of the proposed model for reducing mode collapse and enhancing feature representation. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
16 pages, 8636 KiB  
Article
A Conservative Memristive Chaotic System with Extreme Multistability and Its Application in Image Encryption
by Jian Li, Bo Liang, Xiefu Zhang and Zhixin Yu
Entropy 2023, 25(12), 1656; https://doi.org/10.3390/e25121656 - 13 Dec 2023
Viewed by 835
Abstract
In this work, a novel conservative memristive chaotic system is constructed based on a smooth memristor. In addition to generating multiple types of quasi-periodic trajectories within a small range of a single parameter, the amplitude of the system can be controlled by changing the initial values. Moreover, the proposed system exhibits nonlinear dynamic characteristics, involving extreme multistability behavior of isomorphic and isomeric attractors. Finally, the proposed system is implemented on an STM32 microcontroller and applied to image encryption. The excellent encryption performance of the conservative chaotic system is proven by an average correlation coefficient of 0.0083 and an information entropy of 7.9993, which provides a reference for further research on conservative memristive chaotic systems in the field of image encryption. Full article
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)
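The information entropy of 7.9993 reported for the cipher images is the Shannon entropy of the byte-value histogram, which approaches its maximum of 8 bits for a well-encrypted 8-bit image. A generic computation of that metric (not the encryption scheme itself):

```python
import math
from collections import Counter

def byte_entropy(data):
    """Shannon entropy in bits of the byte-value distribution of `data`."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly uniform byte distribution reaches the 8-bit maximum.
e_uniform = byte_entropy(bytes(range(256)) * 4)
```

A good cipher image should score very close to 8 bits, as the 7.9993 in the abstract does, whereas a constant or highly structured image scores far lower.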
12 pages, 442 KiB  
Article
The Spectrum of Low-pT J/ψ in Heavy-Ion Collisions in a Statistical Two-Body Fractal Model
by Huiqiang Ding, Luan Cheng, Tingting Dai, Enke Wang and Wei-Ning Zhang
Entropy 2023, 25(12), 1655; https://doi.org/10.3390/e25121655 - 13 Dec 2023
Viewed by 780
Abstract
We establish a statistical two-body fractal (STF) model to study the spectrum of J/ψ. J/ψ serves as a reliable probe in heavy-ion collisions. The distribution of J/ψ in hadron gas is influenced by flow, quantum and strong interaction effects. Previous models have predominantly focused on one or two of these effects while neglecting the others, resulting in the inclusion of unconsidered effects in the fitted parameters. Here, we study the issue from a new point of view by analyzing the fact that all three effects induce a self-similarity structure, involving a J/ψ-π two-meson state and a J/ψ, π two-quark state, respectively. We introduce modification factors qTBS and q2 into the probability and entropy of charmonium. qTBS denotes the modification of self-similarity on J/ψ, and q2 denotes that of self-similarity and the strong interaction between c and c̄ on quarks. By solving the probability and entropy equations, we derive the values of qTBS and q2 at various collision energies and centralities. Substituting the value of qTBS into the distribution function, we successfully obtain the transverse momentum spectrum of low-pT J/ψ, which demonstrates good agreement with experimental data. The STF model can be employed to investigate other mesons and resonance states. Full article
(This article belongs to the Section Statistical Physics)
22 pages, 8766 KiB  
Article
Research on the Threshold Determination Method of the Duffing Chaotic System Based on Improved Permutation Entropy and Poincaré Mapping
by Jing Zhou, Yaan Li and Mingzhou Wang
Entropy 2023, 25(12), 1654; https://doi.org/10.3390/e25121654 - 13 Dec 2023
Cited by 1 | Viewed by 893
Abstract
The transition from a chaotic to a periodic state in the Duffing chaotic oscillator detection system is crucial in detecting weak signals. However, accurately determining the critical threshold for this transition remains a challenging problem. Traditional methods such as Melnikov theory, the Poincaré section quantitative discrimination method, and experimental analyses based on phase diagram segmentation have limitations in accuracy and efficiency. In addition, they require large computational data and complex algorithms while having slow convergence. Improved permutation entropy incorporates signal amplitude information on the basis of permutation entropy and has better noise resistance. According to the characteristics of improved permutation entropy, a threshold determination method for the Duffing chaotic oscillator detection system based on improved permutation entropy (IPE) and Poincaré mapping (PM) is proposed. This new metric is called Poincaré mapping improved permutation entropy (PMIPE). The simulation results and the verification results of real underwater acoustic signals indicate that our proposed method outperforms traditional methods in terms of accuracy, simplicity, and stability. Full article
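Standard permutation entropy, the starting point for the improved variant (IPE) used here, counts ordinal patterns in sliding windows. This sketch is the classical amplitude-free definition, i.e. it lacks the amplitude information that IPE incorporates:

```python
import math
import random
from collections import Counter

def permutation_entropy(ts, order=3, normalize=True):
    """Classical permutation entropy of a time series, based on the
    distribution of ordinal patterns in sliding windows of length `order`."""
    counts = Counter()
    for i in range(len(ts) - order + 1):
        window = ts[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))  # argsort pattern
        counts[pattern] += 1
    total = sum(counts.values())
    pe = -sum((c / total) * math.log2(c / total) for c in counts.values())
    if normalize:
        pe /= math.log2(math.factorial(order))  # scale into [0, 1]
    return pe

random.seed(0)
pe_random = permutation_entropy([random.random() for _ in range(2000)])
```

A strictly monotonic series produces a single ordinal pattern and hence zero entropy, while white noise visits all order! patterns nearly uniformly and scores close to 1; the chaotic-to-periodic transition of the Duffing oscillator sits between these extremes, which is what the proposed PMIPE threshold exploits.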
23 pages, 7450 KiB  
Article
Multispectral Remote Sensing Data Application in Modelling Non-Extensive Tsallis Thermodynamics for Mountain Forests in Northern Mongolia
by Robert Sandlersky, Nataliya Petrzhik, Tushigma Jargalsaikhan and Ivan Shironiya
Entropy 2023, 25(12), 1653; https://doi.org/10.3390/e25121653 - 13 Dec 2023
Viewed by 1296
Abstract
The imminent threat of Mongolian montane forests facing extinction due to climate change emphasizes the pressing need to study these ecosystems for sustainable development. Leveraging multispectral remote sensing data from Landsat 8 OLI TIRS (2013–2021), we apply Tsallis non-extensive thermodynamics to assess spatiotemporal fluctuations in the absorbed solar energy budget (exergy, bound energy, internal energy increment) and organizational parameters (entropy, information increment, q-index) within the mountain taiga–meadow landscape. Using the principal component method, we discern three functional subsystems: evapotranspiration, heat dissipation, and a structural-informational component linked to bioproductivity. The interplay among these subsystems delineates distinct landscape cover states. By categorizing ecosystems (pixels) based on these processes, discrete states and transitional areas (boundaries and potential disturbances) emerge. Examining the temporal dynamics of ecosystems (pixels) within this three-dimensional coordinate space facilitates predictions of future landscape states. Our findings indicate that northern Mongolian montane forests utilize a smaller proportion of received energy for productivity compared to alpine meadows, which results in their heightened vulnerability to climate change. This approach deepens our understanding of ecosystem functioning and landscape dynamics, serving as a basis for evaluating their resilience amid ongoing climate challenges. Full article
(This article belongs to the Special Issue Entropy in Biological Systems)
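The non-extensive entropy underlying this analysis is Tsallis' S_q. A minimal sketch, with the q → 1 limit recovering Shannon entropy in nats (our convention, not taken from the paper):

```python
import math

def tsallis_entropy(probs, q):
    # S_q = (1 - sum p_i^q) / (q - 1); recovers Shannon entropy (nats)
    # in the limit q -> 1
    if abs(q - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)
```

The q-index reported in the paper parameterizes how far the observed energy-budget statistics deviate from the extensive (q = 1) case.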
21 pages, 451 KiB  
Article
A Statistical Approach to Neutron Stars’ Crust–Core Transition Density and Pressure
by Ilona Bednarek, Wiesław Olchawa, Jan Sładkowski and Jacek Syska
Entropy 2023, 25(12), 1652; https://doi.org/10.3390/e25121652 - 13 Dec 2023
Viewed by 703
Abstract
In this paper, a regression model between the neutron star crust–core pressure and the symmetry energy characteristics was estimated using the Akaike information criterion and the adjusted coefficient of determination R^2_adj. The most probable value of the transition density, which should characterize the crust–core environment of the sought physical neutron star model, was determined from the obtained regression function. An anti-correlation was found between this transition density and the main characteristic of the symmetry energy, i.e., its slope L. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
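AIC-based selection among least-squares regression models of the kind used here can be sketched as follows; the candidate model names and numbers are hypothetical, not the paper's.

```python
import math

def aic(rss, n, k):
    # Akaike information criterion for a least-squares fit:
    # AIC = n * ln(RSS / n) + 2k, where k is the number of parameters
    return n * math.log(rss / n) + 2 * k

def best_model(candidates, n):
    # candidates: {name: (rss, k)}; the model with the lowest AIC wins
    return min(candidates, key=lambda m: aic(candidates[m][0], n, candidates[m][1]))
```

The 2k penalty means a more flexible model must reduce the residual sum of squares enough to pay for its extra parameters, which is how AIC guards against overfitting the regression function.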
27 pages, 451 KiB  
Article
Practical NTRU Signcryption in the Standard Model
by Jianhua Yan, Xiuhua Lu, Muzi Li, Licheng Wang, Jingxian Zhou and Wenbin Yao
Entropy 2023, 25(12), 1651; https://doi.org/10.3390/e25121651 - 13 Dec 2023
Viewed by 908
Abstract
Based on the NTRU trapdoor used in NIST's Falcon, a signcryption scheme following the sign-then-encrypt paradigm is constructed. The existing partitioning technique based on the Waters hash over lattices cannot complete the security reduction in the standard model for the signature part, due to the "partiality" of the pre-image generated with the NTRU trapdoor. To address this, a variant of the Waters hash over small integers is proposed, and the probability of a successful reduction is analyzed. The resulting signcryption achieves existential unforgeability under adaptive chosen-message attacks. By utilizing the uniqueness of the secret and the noise in an NTRU instance, the tag used in encryption is eliminated. Furthermore, a method to construct tamper-sensitive lattice public-key encryption is proposed. This approach implants ciphertext-sensitive information into the lattice public-key encryption and binds it to the encrypted information. Any tampering with the public-key ciphertext triggers a change in the message–signature pair, so that the IND-CCA2 security of the entire ciphertext is guaranteed by the signature on the message. Thanks to the rational design and the efficiency of the NTRU trapdoor, the computational overhead of the proposed scheme is reduced significantly compared to existing lattice-based signcryption schemes, reaching orders of magnitude improvement in efficiency. Experiments show that the proposed scheme is efficient. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
13 pages, 692 KiB  
Article
Fisher–Shannon Investigation of the Effect of Nonlinearity of Discrete Langevin Model on Behavior of Extremes in Generated Time Series
by Luciano Telesca and Zbigniew Czechowski
Entropy 2023, 25(12), 1650; https://doi.org/10.3390/e25121650 - 12 Dec 2023
Viewed by 861
Abstract
Diverse forms of nonlinearity within stochastic equations give rise to varying dynamics in processes, which may influence the behavior of extreme values. This study focuses on two nonlinear models of the discrete Langevin equation: one with a fixed diffusion function (M1) and the other with a fixed marginal distribution (M2), both characterized by a nonlinearity parameter. Extremes are defined according to the run theory with thresholds based on percentiles. The behavior of inter-extreme times and run lengths is examined by employing Fisher’s Information Measure and the Shannon Entropy. Our findings reveal a clear relationship between the entropic and informational measures and the nonlinearity of model M1—these measures decrease as the nonlinearity parameter increases. Similar relationships are evident for the M2 model, albeit to a lesser extent, even though the background data’s marginal distribution remains unaffected by this parameter. As thresholds increase, both the values of Fisher’s Information Measure and the Shannon Entropy also increase. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
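Both measures used in the study can be estimated from a discretized probability distribution. The sketch below uses a common discrete estimator of the Fisher Information Measure (squared probability increments divided by the probability); the paper's exact discretization may differ.

```python
import math

def shannon_entropy(probs):
    # Shannon entropy in nats of a discrete distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def fisher_information(probs):
    # Discrete Fisher Information Measure estimator: sum of squared
    # increments between neighboring bins, weighted by the bin probability
    return sum((probs[i + 1] - probs[i]) ** 2 / probs[i]
               for i in range(len(probs) - 1) if probs[i] > 0)
```

The two measures are complementary: a flat distribution maximizes entropy and has zero Fisher information, while a sharply peaked one does the opposite, which is why the Fisher–Shannon plane separates regimes of the nonlinearity parameter.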
25 pages, 15573 KiB  
Article
The Capabilities of Boltzmann Machines to Detect and Reconstruct Ising System’s Configurations from a Given Temperature
by Mauricio A. Valle
Entropy 2023, 25(12), 1649; https://doi.org/10.3390/e25121649 - 12 Dec 2023
Viewed by 943
Abstract
The restricted Boltzmann machine (RBM) is a generative neural network that can learn in an unsupervised way. This machine has been proven to help understand complex systems, using its ability to generate samples of the system with the same observed distribution. In this work, an Ising system is simulated, creating configurations via Monte Carlo sampling and then using them to train RBMs at different temperatures. Then, we evaluate 1. the ability of the machine to reconstruct system configurations and 2. its ability to serve as a detector of configurations at specific temperatures. The results indicate that the RBM reconstructs configurations following a distribution similar to the original one, but only when the system is in a disordered phase. In an ordered phase, the RBM exhibits irreproducibility of the configurations in the presence of bimodality, even when the physical observables agree with the theoretical ones. On the other hand, independent of the phase of the system, the information embodied in the neural network weights is sufficient to discriminate well whether configurations come from a given temperature. The learned representations of the RBM can discriminate system configurations at different temperatures, promising interesting applications in real systems that could help recognize crossover phenomena. Full article
(This article belongs to the Special Issue Ising Model: Recent Developments and Exotic Applications II)
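The Monte Carlo sampling that generates such training configurations is typically Metropolis dynamics. A minimal single-sweep sketch for the 2D Ising model (periodic boundaries, coupling J = 1, zero field; all parameter choices here are ours):

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    # One Metropolis sweep (L*L attempted flips) of the 2D Ising model
    # with periodic boundary conditions and coupling J = 1
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins
```

Repeating such sweeps at a chosen inverse temperature beta yields the configuration set used to train an RBM at that temperature.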
21 pages, 5149 KiB  
Article
A Real-Time and Robust Neural Network Model for Low-Measurement-Rate Compressed-Sensing Image Reconstruction
by Pengchao Chen, Huadong Song, Yanli Zeng, Xiaoting Guo and Chaoqing Tang
Entropy 2023, 25(12), 1648; https://doi.org/10.3390/e25121648 - 12 Dec 2023
Viewed by 816
Abstract
Compressed sensing (CS) is a popular data compression theory for many computer vision tasks, but the high reconstruction complexity for images prevents it from being used in many real-world applications. Existing end-to-end learning methods achieve real-time sensing but lack theoretical guarantees of robust reconstruction. This paper proposes a neural network called RootsNet, which integrates the CS mechanism into the network to prevent error propagation, so that RootsNet knows what will happen if some modules in the network go wrong. It also achieves real-time reconstruction and succeeds at extremely low measurement rates that are impossible for traditional optimization-theory-based methods. For qualitative validation, RootsNet is implemented in two real-world measurement applications, i.e., a near-field microwave imaging system and a pipeline inspection system, where it saves 60% more measurement time and 95% more data than the state-of-the-art optimization-theory-based reconstruction methods. Without loss of generality, comprehensive experiments on general datasets evaluate the key components of RootsNet as well as reconstruction uncertainty, quality, and efficiency. RootsNet has the best uncertainty performance and efficiency, and achieves the best reconstruction quality under super-low measurement rates. Full article
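For contrast, a classical optimization-style CS baseline of the kind such learned methods are compared against is orthogonal matching pursuit. The sketch below is generic OMP for recovering a k-sparse signal, not RootsNet:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.

    Greedy iterative baseline; learned reconstructors replace this
    per-signal optimization loop with a single network forward pass.
    """
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

Each iteration costs a full least-squares solve, which is exactly the reconstruction complexity that makes such methods hard to run in real time.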
14 pages, 2003 KiB  
Article
Results for Nonlinear Diffusion Equations with Stochastic Resetting
by Ervin K. Lenzi, Rafael S. Zola, Michely P. Rosseto, Renio S. Mendes, Haroldo V. Ribeiro, Luciano R. da Silva and Luiz R. Evangelista
Entropy 2023, 25(12), 1647; https://doi.org/10.3390/e25121647 - 12 Dec 2023
Viewed by 822
Abstract
In this study, we investigate a nonlinear diffusion process in which particles stochastically reset to their initial positions at a constant rate. The nonlinear diffusion process is modeled using the porous media equation and its extensions, which are nonlinear diffusion equations. We use analytical and numerical calculations to obtain and interpret the probability distribution of the position of the particles and the mean square displacement. These results are further compared and shown to agree with the results of numerical simulations. Our findings show that a system of this kind exhibits non-Gaussian distributions, transient anomalous diffusion (subdiffusion and superdiffusion), and stationary states that simultaneously depend on the nonlinearity and resetting rate. Full article
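The resetting mechanism is easy to simulate directly. Below is a discrete-time 1-D sketch with ordinary (linear) diffusion between resets, i.e. without the paper's porous-media nonlinearity; the reset rate and step size are illustrative.

```python
import random

def simulate_resetting_walk(steps, reset_rate, rng, step_size=1.0):
    # 1-D random walk that returns to the origin with probability
    # reset_rate at each step; yields the full trajectory
    x, path = 0.0, [0.0]
    for _ in range(steps):
        if rng.random() < reset_rate:
            x = 0.0  # stochastic reset to the initial position
        else:
            x += step_size if rng.random() < 0.5 else -step_size
        path.append(x)
    return path

def msd(paths, t):
    # Mean square displacement at time t over an ensemble of trajectories
    return sum(p[t] ** 2 for p in paths) / len(paths)
```

Even in this linear toy model the MSD saturates to a stationary value set by the reset rate instead of growing without bound, which is the qualitative signature the paper analyzes in the nonlinear setting.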
22 pages, 5319 KiB  
Article
On the Relationship between Feature Selection Metrics and Accuracy
by Elise Epstein, Naren Nallapareddy and Soumya Ray
Entropy 2023, 25(12), 1646; https://doi.org/10.3390/e25121646 - 11 Dec 2023
Viewed by 1018
Abstract
Feature selection metrics are commonly used in the machine learning pipeline to rank and select features before creating a predictive model. While many different metrics have been proposed for feature selection, final models are often evaluated by accuracy. In this paper, we consider the relationship between common feature selection metrics and accuracy. In particular, we focus on misorderings: cases where a feature selection metric may rank features differently than accuracy would. We analytically investigate the frequency of misordering for a variety of feature selection metrics as a function of parameters that represent how a feature partitions the data. Our analysis reveals that different metrics have systematic differences in how likely they are to misorder features, and that misordering can happen over a wide range of partition parameters. We then perform an empirical evaluation with different feature selection metrics on several real-world datasets to measure misordering. Our empirical results generally match our analytical results, illustrating that feature misordering happens in practice and providing some insight into the performance of feature selection metrics. Full article
(This article belongs to the Special Issue Information-Theoretic Criteria for Statistical Model Selection)
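A concrete misordering can be exhibited with two standard scores, information gain and majority-vote split accuracy; the partition parameters below are hypothetical examples, not taken from the paper.

```python
import math

def entropy(p):
    # Binary Shannon entropy (bits) of a positive-class fraction p
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(n, a, pos_a, pos_b):
    # Information gain of a binary split: a examples in one branch
    # (pos_a positives), n - a in the other (pos_b positives)
    b = n - a
    pos = pos_a + pos_b
    parent = entropy(pos / n)
    child = (a / n) * entropy(pos_a / a) + (b / n) * entropy(pos_b / b)
    return parent - child

def split_accuracy(n, a, pos_a, pos_b):
    # Accuracy of predicting the majority class within each branch
    b = n - a
    return (max(pos_a, a - pos_a) + max(pos_b, b - pos_b)) / n
```

With n = 100 and 50 positives, the split (a = 50, pos_a = 40, pos_b = 10) has higher accuracy (0.80 vs. 0.75) but lower information gain than (a = 25, pos_a = 25, pos_b = 25), so the two criteria rank these features in opposite orders.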
20 pages, 3391 KiB  
Article
From Black Holes Entropy to Consciousness: The Dimensions of the Brain Connectome
by Denis Le Bihan
Entropy 2023, 25(12), 1645; https://doi.org/10.3390/e25121645 - 11 Dec 2023
Viewed by 4449
Abstract
It has been shown that the theory of relativity can be applied physically to the functioning brain, so that the brain connectome should be considered as a four-dimensional spacetime entity curved by brain activity, just as gravity curves the four-dimensional spacetime of the physical world. Following the most recent developments in modern theoretical physics (black hole entropy, holographic principle, AdS/CFT duality), we conjecture that consciousness can naturally emerge from this four-dimensional brain connectome when a fifth dimension is considered, in the same way that gravity emerges from a ‘flat’ four-dimensional quantum world, without gravitation, present at the boundaries of a five-dimensional spacetime. This vision makes it possible to envisage quantitative signatures of consciousness based on the entropy of the connectome and the curvature of spacetime estimated from data obtained by fMRI in the resting state (nodal activity and functional connectivity) and constrained by the anatomical connectivity derived from diffusion tensor imaging. Full article
(This article belongs to the Special Issue Computational Approaches and Modeling in Neuroscience)
19 pages, 1672 KiB  
Article
Scale-Free Chaos in the 2D Harmonically Confined Vicsek Model
by Rafael González-Albaladejo and Luis L. Bonilla
Entropy 2023, 25(12), 1644; https://doi.org/10.3390/e25121644 - 11 Dec 2023
Cited by 1 | Viewed by 772
Abstract
Animal motion and flocking are ubiquitous nonequilibrium phenomena that are often studied within active matter. In examples such as insect swarms, macroscopic quantities exhibit power laws with measurable critical exponents and ideas from phase transitions and statistical mechanics have been explored to explain them. The widely used Vicsek model with periodic boundary conditions has an ordering phase transition but the corresponding homogeneous ordered or disordered phases are different from observations of natural swarms. If a harmonic potential (instead of a periodic box) is used to confine particles, then the numerical simulations of the Vicsek model display periodic, quasiperiodic, and chaotic attractors. The latter are scale-free on critical curves that produce power laws and critical exponents. Here, we investigate the scale-free chaos phase transition in two space dimensions. We show that the shape of the chaotic swarm on the critical curve reflects the split between the core and the vapor of insects observed in midge swarms and that the dynamic correlation function collapses only for a finite interval of small scaled times. We explain the algorithms used to calculate the largest Lyapunov exponents, the static and dynamic critical exponents, and compare them to those of the three-dimensional model. Full article
(This article belongs to the Section Statistical Physics)
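The confined Vicsek dynamics can be sketched as alignment-plus-noise followed by a spring pull toward the origin. The explicit Euler update and all parameter values below are illustrative assumptions, not the authors' exact scheme.

```python
import math
import random

def vicsek_step(pos, theta, v0, eta, radius, k_spring, dt, rng):
    # One update of a 2-D Vicsek model with harmonic confinement:
    # each particle aligns with neighbors within `radius` (plus angular
    # noise of amplitude eta) and feels a spring force toward the origin
    n = len(pos)
    new_theta = []
    for i in range(n):
        sx = sy = 0.0
        for j in range(n):
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            if dx * dx + dy * dy <= radius * radius:
                sx += math.cos(theta[j])
                sy += math.sin(theta[j])
        noise = eta * (rng.random() - 0.5)
        new_theta.append(math.atan2(sy, sx) + noise)
    for i in range(n):
        vx = v0 * math.cos(new_theta[i]) - k_spring * pos[i][0]
        vy = v0 * math.sin(new_theta[i]) - k_spring * pos[i][1]
        pos[i] = (pos[i][0] + dt * vx, pos[i][1] + dt * vy)
    return pos, new_theta
```

The spring constant replaces the periodic box of the standard model: the swarm stays bounded, and the competition between self-propulsion and confinement is what produces the periodic, quasiperiodic, and chaotic attractors discussed above.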
5 pages, 164 KiB  
Editorial
Signal and Information Processing in Networks
by Minyu Feng, Liang-Jian Deng and Feng Chen
Entropy 2023, 25(12), 1643; https://doi.org/10.3390/e25121643 - 11 Dec 2023
Viewed by 974
Abstract
Networks are omnipresent in the realm of science, serving as a central focus in our modern world [...] Full article
(This article belongs to the Special Issue Signal and Information Processing in Networks)
18 pages, 4030 KiB  
Article
A Loss Differentiation Method Based on Heterogeneous Ensemble Learning Model for Low Earth Orbit Satellite Networks
by Debin Wei, Chuanqi Guo, Li Yang and Yongqiang Xu
Entropy 2023, 25(12), 1642; https://doi.org/10.3390/e25121642 - 10 Dec 2023
Viewed by 737
Abstract
In light of the high bit error rate of satellite network links, the traditional Transmission Control Protocol (TCP) fails to distinguish between congestion losses and wireless losses, and existing loss differentiation methods lack heterogeneous ensemble learning models: in particular, feature selection for loss differentiation, methods for selecting individual classifiers, and effective ensemble strategies. A loss differentiation method based on heterogeneous ensemble learning (LDM-HEL) for low-Earth-orbit (LEO) satellite networks is proposed. This method uses the Relief and mutual information algorithms to select loss differentiation features, and employs the least-squares support vector machine, decision tree, logistic regression, and K-nearest neighbors as individual learners. An ensemble strategy is designed using stochastic gradient descent to optimize the weights of the individual learners. Simulation results demonstrate that the proposed LDM-HEL achieves a higher accuracy rate, recall rate, and F1-score in the simulated scenario, and significantly improves throughput performance when applied to TCP. Compared with the integrated model LDM-satellite, these indices improve by 4.37%, 4.55%, 4.87%, and 9.28%, respectively. Full article
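Optimizing ensemble weights by stochastic gradient descent can be sketched on a toy binary task. The squared loss, learning rate, and final renormalization below are our assumptions, not the paper's exact strategy.

```python
def train_ensemble_weights(preds, labels, lr=0.1, epochs=200):
    """Learn combination weights for base classifiers by SGD.

    preds: one prediction list per base classifier (values in [0, 1]);
    labels: true labels (0 or 1). Minimizes the squared error of the
    weighted average prediction; a toy stand-in for the heterogeneous
    ensemble strategy described in the abstract.
    """
    m = len(preds)
    w = [1.0 / m] * m  # start from uniform weights
    n = len(labels)
    for _ in range(epochs):
        for t in range(n):
            yhat = sum(w[i] * preds[i][t] for i in range(m))
            err = yhat - labels[t]
            for i in range(m):
                w[i] -= lr * err * preds[i][t]  # per-sample gradient step
    s = sum(w)
    return [wi / s for wi in w]  # renormalize so the weights sum to one
```

A base learner that agrees with the labels more often ends up with a larger weight, which is the behavior an ensemble strategy needs for its vote to beat any single classifier.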