Entropy doi: 10.3390/e26040347

Authors: Min Zeng Zhiqiang Wang Ying Xu Qiang Ma

The lattice Boltzmann method (LBM) is employed in this study to simulate the heat transfer characteristics of sinusoidal-temperature-distributed heat sources at the bottom of a square cavity under various conditions, including different amplitudes, phase angles, initial positions, and angular velocities. Additionally, a machine learning-based model is developed to accurately predict the Nusselt number (Nu) for such a sinusoidally distributed heat source. The results indicate that (1) over the phase angle range from 0 to π, Nu generally decreases as the phase angle increases, and the decline accelerates once the phase angle reaches 4π/16; at a given phase angle, Nu decreases as the amplitude increases. (2) The initial position Lc of the sinusoidal-temperature-distributed heat source significantly affects convective heat transfer in the cavity, and the decline in Nu is further exacerbated when Lc reaches 7/16. (3) The best overall heat transfer is achieved when the angular velocity of the non-uniform heat source reaches π. As the angular velocity increases, the local Nu in the square cavity exhibits a gradual, oscillatory decline; notably, Nu at odd multiples of π exceeds that at even multiples of π. Furthermore, this work integrates the LBM with machine learning to develop a precise and efficient model for predicting Nu under specific operating conditions. This research provides valuable insights into the application of machine learning in the field of heat transfer.

Entropy doi: 10.3390/e26040346

Authors: Keshav Goyal Han Mao Kiah

We revisit the well-known Gilbert–Varshamov (GV) bound for constrained systems. In 1991, Kolesnik and Krachkovsky showed that the GV bound can be determined via the solution of an optimization problem. Later, in 1992, Marcus and Roth modified the optimization problem and improved the GV bound in many instances. In this work, we provide explicit numerical procedures to solve these two optimization problems and, hence, compute the bounds. We then show that the procedures can be further simplified when we plot the respective curves. In the case where the graph presentation comprises a single state, we provide explicit formulas for both bounds.

Entropy doi: 10.3390/e26040345

Authors: Claudio Sanavio Edoardo Tignone Elisa Ercolessi

Quantum annealers are well suited to solving several logistic optimization problems expressed in the QUBO formulation. However, the solutions proposed by quantum annealers are generally not optimal, as thermal noise and other disturbing effects arise when the number of qubits involved in the calculation is too large. To deal with this issue, we propose the use of the classical branch-and-bound algorithm, which divides the problem into sub-problems described by a smaller number of qubits. We analyze the performance of this method on two problems: the knapsack problem and the traveling salesman problem. Our results show the advantages of this method, which balances the number of steps the algorithm must take against the amount of error in the quantum hardware's solution that the user is willing to accept. The results are obtained using the commercially available D-Wave Advantage quantum hardware, and they outline a strategy for the practical application of quantum annealers.
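
As a rough illustration of the divide step, the sketch below branches a QUBO on its first binary variable and recurses until each sub-problem fits within a given qubit budget; a brute-force solver stands in for the quantum annealer, and the bounding heuristics of a full branch-and-bound are omitted. All function names are ours, not the authors'.

```python
import itertools
import numpy as np

def solve_qubo_exact(Q):
    """Stand-in for the annealer: brute-force min of x^T Q x over binary x
    (only viable for small sub-problems)."""
    n = len(Q)
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    return np.array(best)

def branch_and_bound_qubo(Q, max_qubits):
    """Branch on x_0 until the sub-problem fits on the hardware.
    Fixing x_0 = 1 folds row/column 0 of Q into the diagonal
    (since x_j^2 = x_j for binary variables)."""
    n = len(Q)
    if n <= max_qubits:
        return solve_qubo_exact(Q)
    # Branch x_0 = 0: simply drop row and column 0.
    sub0 = Q[1:, 1:]
    x0 = branch_and_bound_qubo(sub0, max_qubits)
    e0 = x0 @ sub0 @ x0
    # Branch x_0 = 1: fold the interactions with x_0 into the diagonal.
    sub1 = Q[1:, 1:].copy()
    sub1[np.diag_indices(n - 1)] += Q[0, 1:] + Q[1:, 0]
    x1 = branch_and_bound_qubo(sub1, max_qubits)
    e1 = Q[0, 0] + x1 @ sub1 @ x1
    if e0 <= e1:
        return np.concatenate(([0], x0))
    return np.concatenate(([1], x1))
```

Without a bounding rule this explores both branches exhaustively; the paper's point is that each sub-problem handed to the hardware involves fewer qubits and hence less noise.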

Entropy doi: 10.3390/e26040344

Authors: Khouloud Mnassri Reza Farahbakhsh Noel Crespi

Social media platforms have transcended cultural and linguistic boundaries, enabling online communication worldwide. However, the expanded use of various languages has intensified the challenge of detecting hate speech content online. Despite the release of multiple Natural Language Processing (NLP) solutions implementing cutting-edge machine learning techniques, the scarcity of data, especially labeled data, remains a considerable obstacle, which in turn calls for semisupervised approaches alongside Generative Artificial Intelligence (Generative AI) techniques. This paper introduces an innovative multilingual semisupervised model combining Generative Adversarial Networks (GANs) and Pretrained Language Models (PLMs), specifically mBERT and XLM-RoBERTa. Our approach proves effective in detecting hate speech and offensive language in Indo-European languages (English, German, and Hindi) when employing only 20% annotated data from the HASOC2019 dataset, achieving high performance in each of the multilingual, zero-shot cross-lingual, and monolingual training scenarios. Our study provides a robust mBERT-based semisupervised GAN model (SS-GAN-mBERT) that outperformed the XLM-RoBERTa-based model (SS-GAN-XLM), reaching an average F1 score boost of 9.23% and an accuracy increase of 5.75% over the baseline semisupervised mBERT model.

Entropy doi: 10.3390/e26040343

Authors: Jeremy Holmes

This paper outlines the ways in which Karl Friston's work illuminates the everyday practice of psychotherapists. These include (a) how the strategic ambiguity of the therapist's stance brings clients' priors to light via 'transference'; (b) how the unstructured, negative-capability character of the therapy session reduces the salience of priors, enabling new top-down models to be forged; (c) how fostering self-reflection provides an additional step in the free energy minimization hierarchy; and (d) how Friston and Frith's 'duets for one' can be conceptualized as a relational zone in which collaborative free energy minimization takes place without sacrificing complexity.

Entropy doi: 10.3390/e26040342

Authors: Suchanun Piriyasatit Ercan Engin Kuruoglu Mehmet Sinan Ozeren

Geodetic observations through high-rate GPS time-series data allow the precise modeling of slow ground deformation at the millimeter level. While significant attention has been devoted to utilizing these data for various earth science applications, including determining crustal velocity fields and detecting significant displacement from earthquakes, the relationships inherent in these GPS displacement observations have not been fully explored. This study employs the sequential Monte Carlo method, specifically particle filtering (PF), to develop a time-varying analysis of the relationships among GPS displacement time series within a network, with the aim of uncovering network dynamics. Additionally, we introduce a graph representation to enhance the understanding of these relationships. Using 1-Hz GEONET GNSS network data from the 2011 Mw 9.0 Tohoku-Oki earthquake as a demonstration, the results show successful parameter tracking that clarifies the observations' underlying dynamics. These findings have potential applications in detecting anomalous displacements in the future.

Entropy doi: 10.3390/e26040341

Authors: Alexandros K. Angelidis Konstantinos Goulas Charalampos Bratsas Georgios C. Makris Michael P. Hanias Stavros G. Stavrinides Ioannis E. Antoniou

We investigate whether network theory can distinguish chaotic time series from random ones. To this end, we selected four methods for generating graphs from time series: the natural visibility graph, the horizontal visibility graph, the limited penetrable horizontal visibility graph, and the phase space reconstruction method. Proponents of these methods claim that chaos can be distinguished from randomness by studying the degree distribution of the generated graphs. We evaluated these methods by computing results for chaotic time series from 2D torus automorphisms and the chaotic Lorenz system, and for a random sequence drawn from the normal distribution. Although our results confirm previous studies, we found that distinguishing chaos from randomness is not generally possible within the above methodologies.
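
As a minimal sketch of one of these constructions, the horizontal visibility graph links two time points whenever every intermediate value lies strictly below both of them; the degree sequence of the resulting graph is the statistic the methods above examine (function names are ours):

```python
import numpy as np

def horizontal_visibility_graph(series):
    """Horizontal visibility graph of a 1D time series: nodes i < j are
    linked when x_k < min(x_i, x_j) for every k strictly between them.
    Returns the edge set as pairs (i, j)."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))        # consecutive points always see each other
        top = series[i + 1]          # running max of the values between i and j
        for j in range(i + 2, n):
            if series[i] > top and series[j] > top:
                edges.add((i, j))
            top = max(top, series[j])
    return edges

def degree_distribution(edges, n):
    """Degree of each node, the quantity compared for chaos vs. randomness."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg
```

The O(n²) double loop is the naive construction; for the series [1, 3, 2, 4], nodes 1 and 3 are linked (the value 2 between them is below both) while nodes 0 and 2 are not.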

Entropy doi: 10.3390/e26040340

Authors: Yujun Fan Xuejiao Wang Yangyang Li Aidong Lan Junwei Qiao

In the search for structural materials with better resistance to radiation damage, high-entropy alloys (HEAs) have been developed, owing to characteristics such as lattice distortion and sluggish diffusion that limit point-defect migration. In particular, refractory high-entropy alloys (RHEAs) that can withstand high-temperature environments are badly needed. In this study, TiZrHfNbMo0.1 RHEAs are selected for irradiation and nanoindentation experiments. We combine a mechanistic model for the depth-dependent hardness of ion-irradiated metals with a scale factor f to modify the irradiation-hardening model so that it better describes the nanoindentation process in the irradiated layer. We find that, as the irradiation dose increases, more severe lattice distortion caused by a higher defect density limits the expansion of the plastic zone.

Entropy doi: 10.3390/e26040339

Authors: Liangguang Zhou Juliang Jin Rongxing Zhou Yi Cui Chengguo Wu Yuliang Zhou Shibao Dai Yuliang Zhang

The adjoint functions of the connection number have unique advantages in addressing uncertainty in complex water resource systems and have become an important frontier and research hotspot in this area. However, during the rapid evolution of these adjoint functions, several problems have greatly limited their application in water resources research. Therefore, based on bibliometric analysis, we examine their development, practical application issues, and the prospects of the most active research directions. The development of the connection number in set pair analysis of water resources can be divided into three stages: (1) relatively sluggish development before 2005, (2) rapid advancement in adjoint function research from 2005 to 2017, and (3) a subsequent surge after 2018. The introduction of adjoint functions of the connection number has driven the continuous development of set pair analysis of water resources. Set pair potential and the partial connection number are the crucial research directions for adjoint functions, and the subtractive set pair potential has rapidly developed into a relatively independent and important trajectory. Research on connection entropy is comparatively scarce and needs to be strengthened, while research on the adjacent connection number is scarcer still. Adjoint functions based on set pair potential fall into three major categories: division set pair potential, exponential set pair potential, and subtraction set pair potential. The subtraction set pair potential, which retains the original dimension and range of variation of the connection number, is widely used in water resources and other fields; coupled with the partial connection number, a series of new adjoint functions of the connection number have been developed. The partial connection number can be divided into two main categories: the total partial connection number and the semi-partial connection number. For the former, no consensus has yet been reached on its calculation expression and connotation, and high-order partial connection numbers have developed slowly. The semi-partial connection number, which can describe the mutual migration between different components of the connection number, is developing rapidly. Given these limitations and the current state of the field, promoting the exploration and application of adjoint functions of the connection number in water resources and other complex systems has become the focus of future research.

Entropy doi: 10.3390/e26040338

Authors: Nicolas Charpenay Maël Le Treust Aline Roumy

We investigate zero-error coding for computing problems with encoder side information. An encoder has access to a source X and is furnished with side information g(Y). It communicates with a decoder that possesses side information Y and aims to retrieve f(X,Y) with zero probability of error, where f and g are assumed to be deterministic functions. In previous work, we determined a condition that yields an analytic expression for the optimal rate R*(g); in particular, it covers the case where PX,Y has full support. In this article, we review this result and study the side information design problem, which consists of finding the best trade-offs between the quality of the encoder's side information g(Y) and R*(g). We construct two greedy algorithms, based on partition refining and coarsening, that give an achievable set of points in the side information design problem. One of them runs in polynomial time.

Entropy doi: 10.3390/e26040337

Authors: Didier Lairez

In the theory of special relativity, energy takes two forms: kinetic energy and rest mass. The potential energy of a body is actually stored in the form of rest mass, as is its interaction energy, but thermal energy is not. Information acquired about a dynamical system can potentially be used to extract useful work from it; hence the "mass–energy–information equivalence principle" that has recently been proposed. In this paper, it is first recalled that for a thermodynamic system made of non-interacting entities at constant temperature, the internal energy is also constant. Thus, the energy involved in a variation in entropy (TΔS) differs from a change in the potential energy stored or released, and cannot be associated with a corresponding variation in the mass of the system, even when it is expressed in terms of a quantity of information. This debate gives us the opportunity to deepen the notion of entropy seen as a quantity of information, to highlight the difference between logical irreversibility (a state-dependent property) and thermodynamic irreversibility (a path-dependent property), and to return to the dynamical nature of the link between energy and information.

Entropy doi: 10.3390/e26040336

Authors: Bill Poirier Richard Lombardini

The theoretical connections between quantum trajectories and quantum dwell times, previously explored in the context of 1D time-independent stationary scattering applications, are here generalized for multidimensional time-dependent wavepacket applications for particles with spin 1/2. In addition to dwell times, trajectory-based dwell time distributions are also developed and compared with previous distributions based on the dwell time operator and the flux–flux correlation function. Dwell time distributions are of interest, in part, because they may be of experimental relevance. In addition to standard unipolar quantum trajectories, bipolar quantum trajectories are also considered and found to relate more directly to the dwell time (and other quantum time) quantities of greatest relevance for scattering applications. Detailed calculations are performed for a benchmark 3D spin-1/2 particle application, considered previously in the context of computing quantum arrival times.

Entropy doi: 10.3390/e26040335

Authors: Peter Trubey Bruno Sansó

We consider a constructive definition of the multivariate Pareto distribution that factorizes the random vector into a radial component and an independent angular component. The former follows a univariate Pareto distribution, and the latter is defined on the surface of the positive orthant of the infinity-norm unit hypercube. We propose a method for inferring the distribution of the angular component by identifying its support as the limit of the positive orthants of the unit p-norm spheres, and we introduce a projected gamma family of distributions, defined by normalizing a vector of independent gamma random variables onto this space. This serves to construct a flexible family of distributions obtained as a Dirichlet process mixture of projected gammas. For model assessment, we discuss scoring methods appropriate to distributions on the unit hypercube. In particular, working with the energy score criterion, we develop a kernel metric that yields a proper scoring rule, and we present a simulation study comparing different modeling choices using the proposed metric. Using our approach, we describe the dependence structure of extreme values in the integrated vapor transport (IVT) data, which describe the flow of atmospheric moisture along the coast of California. We find clear but heterogeneous geographical dependence.
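
A minimal sketch of the normalization step described above, under our own naming and the simplifying assumption of unit-rate gammas: independent Gamma(α_d, 1) draws are scaled by their p-norm, which lands each vector on the positive orthant of the unit p-norm sphere.

```python
import numpy as np

def sample_projected_gamma(alpha, p, size, rng=None):
    """Draw `size` vectors from a projected gamma sketch: sample
    independent Gamma(alpha_d, 1) coordinates, then normalize each
    vector by its p-norm so it lies on the positive orthant of the
    unit p-norm sphere."""
    rng = np.random.default_rng(rng)
    g = rng.gamma(shape=alpha, size=(size, len(alpha)))
    norms = np.linalg.norm(g, ord=p, axis=1, keepdims=True)
    return g / norms
```

Since all gamma draws are positive, no sample ever leaves the positive orthant, and the normalization makes the p-norm of every row exactly one.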

Entropy doi: 10.3390/e26040334

Authors: Jinwei Bai Zhenguo Yan Meiliang Mao Yankai Ma Dingwu Jiang

Based on a 5-point stencil and three 3-point stencils, a nonlinear multi-order weighted method adapted to 5-3-3-3 stencils for shock capturing is presented in this paper. The form of the weighting function is the same as in JS (Jiang–Shu) weighting; however, the smoothness indicator of the 5-point stencil adopts a special design with a higher-order leading term, similar to the τ in Z weighting. This design ensures that the nonlinear weights satisfy sufficient conditions for the scheme to avoid accuracy degradation even near extreme points. By adjusting the linear weights to a specific value and using the τ from Z weighting, the method reduces to Z weighting. Analysis of the linear weights shows that they do not affect the accuracy in smooth regions, while they can be used to adjust the resolution and the discontinuity-capturing capability. Numerical tests on different hyperbolic conservation laws are conducted to assess the performance of the newly designed nonlinear weights within the weighted compact nonlinear scheme. The numerical results show no obvious oscillations near discontinuities, and the resolution of both discontinuities and smooth regions is better than that of Z weights.

Entropy doi: 10.3390/e26040333

Authors: Srihari Keshavamurthy

Recent progress towards understanding the mechanism of dynamical tunneling in Hamiltonian systems with three or more degrees of freedom (DoF) is reviewed. In contrast to systems with two degrees of freedom, the three or more degrees of freedom case presents several challenges. Specifically, in higher-dimensional phase spaces, multiple mechanisms for classical transport have significant implications for the evolution of initial quantum states. In this review, the importance of features on the Arnold web, a signature of systems with three or more DoF, to the mechanism of resonance-assisted tunneling is illustrated using select examples. These examples represent relevant models for phenomena such as intramolecular vibrational energy redistribution in isolated molecules and the dynamics of Bose–Einstein condensates trapped in optical lattices.

Entropy doi: 10.3390/e26040332

Authors: Eric Grivel Bastien Berthelot Gaetan Colin Pierrick Legrand Vincent Ibanez

In various applications, multiscale entropy (MSE) is often used as a feature characterizing the complexity of signals in order to classify them. It consists of estimating the sample entropies (SEs) of the signal under study and of its coarse-grained (CG) versions, where the CG process amounts to (1) filtering the signal with an averaging filter whose order is the scale and (2) decimating the filter output by a factor equal to the scale. In this paper, we derive a new variant of the MSE. Its novelty lies in how the sequences at different scales are obtained: distortions during the decimation step are avoided by using a linear-phase or zero-phase low-pass filter whose cutoff frequency is suited to the scale. Interpretations of how the MSE behaves, with illustrations on a sum of sinusoids as well as on white and pink noise, are given. An application to detecting attentional tunneling is then presented. It shows the benefit of the new approach in terms of p-value when differentiating the set of MSEs obtained in the attentional-tunneling state from the set obtained in the nominal state. Note that the proposed replacement of the CG versions applies not only to the MSE but also to its other variants.
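
To make the coarse-graining step concrete, here is a sketch of both the standard CG process and a zero-phase low-pass alternative in the spirit of the variant described; the brick-wall FFT filter and the function names are our simplification, not the authors' actual filter design.

```python
import numpy as np

def coarse_grain(x, scale):
    """Standard MSE coarse-graining: average over non-overlapping
    windows of length `scale` (moving-average filter + decimation)."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def lowpass_coarse_grain(x, scale):
    """Zero-phase alternative: an ideal low-pass filter with cutoff
    1/(2*scale) applied in the Fourier domain (hence no phase
    distortion), followed by decimation by `scale`."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x))
    X[freqs > 0.5 / scale] = 0.0          # brick-wall low-pass
    filtered = np.fft.irfft(X, n=len(x))  # zero-phase by construction
    return filtered[::scale]
```

Both functions map a length-N signal to roughly N/scale samples; the sample entropy of each output would then be computed at every scale to form the MSE curve.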

Entropy doi: 10.3390/e26040331

Authors: Ren Xu Fei Lin Wenyi Shao Haoran Wang Fanping Meng Jun Li

Surrounded by the Shandong Peninsula, the Bohai Sea and Yellow Sea possess vast marine energy resources. An analysis of actual meteorological data from these regions indicates significant seasonality and intra-day uncertainty in wind and photovoltaic power generation. The challenge of scheduling to leverage the complementary characteristics of various renewable energy sources for maintaining grid stability is substantial. In response, we have integrated wave energy with offshore photovoltaic and wind power generation and propose a day-ahead and intra-day multi-time-scale rolling optimization scheduling strategy for the complementary dispatch of these three energy sources. Using real meteorological data from this maritime area, we employed a CNN-LSTM neural network to predict the power generation and load demand of the area on both day-ahead 24 h and intra-day 1 h time scales, with the DDPG algorithm applied for refined electricity management through rolling optimization scheduling of the forecast data. Simulation results demonstrate that the proposed strategy effectively meets load demands through complementary scheduling of wave power, wind power, and photovoltaic power generation based on the climatic characteristics of the Bohai and Yellow Sea regions, reducing the negative impacts of the seasonality and intra-day uncertainty of these three energy sources on the grid. Additionally, compared to the day-ahead scheduling strategy alone, the day-ahead and intra-day rolling optimization scheduling strategy achieved a reduction in system costs by 16.1% and 22% for a typical winter day and a typical summer day, respectively.

Entropy doi: 10.3390/e26040330

Authors: Shahid Nawaz Muhammad Saleem Fedor V. Kusmartsev Dalaver H. Anjum

Complex systems are prevalent in various disciplines encompassing the natural and social sciences, such as physics, biology, economics, and sociology. Leveraging data science techniques, particularly those rooted in artificial intelligence and machine learning, offers a promising avenue for comprehending the intricacies of complex systems without necessitating detailed knowledge of the underlying dynamics. In this paper, we demonstrate that multiscale entropy (MSE) is pivotal in describing the steady state of complex systems. Introducing the multiscale entropy dynamics (MED) methodology, we provide a framework for dissecting system dynamics and uncovering the driving forces behind their evolution. Our investigation reveals that the MED methodology facilitates the expression of complex system dynamics through a Generalized Nonlinear Schrödinger Equation (GNSE), demonstrating its potential applicability across diverse complex systems. By elucidating the entropic underpinnings of complexity, our study paves the way for a deeper understanding of dynamic phenomena and offers insights into the behavior of complex systems across various domains.

Entropy doi: 10.3390/e26040329

Authors: Zbigniew Haba

We study the Schrödinger equation in quantum field theory (QFT) in its functional formulation. In this approach, quantum correlation functions can be expressed as classical expectation values over (complex) stochastic processes. We obtain a stochastic representation of the Schrödinger time evolution on Wentzel–Kramers–Brillouin (WKB) states by means of the Wiener integral. We discuss QFT in a flat expanding metric and in de Sitter space-time. We calculate the evolution kernel in an expanding flat metric in the real-time formulation. We discuss a field interaction in pseudo-Riemannian and Riemannian metrics, showing that an inversion of the signature leads to some substantial simplifications of the singularity problems in QFT.

Entropy doi: 10.3390/e26040328

Authors: Fengjiao Liang Qingyong Li Xiaobao Li Yang Liu Wen Wang

Automatic crack segmentation plays an essential role in maintaining the structural health of buildings and infrastructure. Despite the success of fully supervised crack segmentation, costly pixel-level annotation restricts its application, leading to increased exploration of weakly supervised crack segmentation (WSCS). However, WSCS methods inevitably introduce noisy pseudo-labels, which results in large performance fluctuations. To address this problem, we propose a novel confidence-aware co-training (CAC) framework for WSCS. The framework iteratively refines pseudo-labels, facilitating the learning of a more robust segmentation model. Specifically, a co-training mechanism constructs two collaborative networks that learn uncertain crack pixels from easy to hard. Moreover, a dynamic division strategy divides the pseudo-labels based on a crack confidence score: high-confidence pseudo-labels are used to optimize the initialization parameters of the collaborative networks, while low-confidence pseudo-labels enrich the diversity of crack samples. Extensive experiments on the Crack500, DeepCrack, and CFD datasets demonstrate that the proposed CAC significantly outperforms other WSCS methods.

Entropy doi: 10.3390/e26040327

Authors: Amir Reza Jafari Praboda Rajapaksha Reza Farahbakhsh Guanlin Li Noel Crespi

Detecting the underlying human values within arguments is essential across various domains, ranging from social sciences to recent computational approaches. Identifying these values remains a significant challenge due to their vast numbers and implicit usage in discourse. This study explores the potential of emotion analysis as a key feature in improving the detection of human values and information extraction from this field. It aims to gain insights into human behavior by applying intensive analyses of different levels of human values. Additionally, we conduct experiments that integrate extracted emotion features to improve human value detection tasks. This approach holds the potential to provide fresh insights into the complex interactions between emotions and values within discussions, offering a deeper understanding of human behavior and decision making. Uncovering these emotions is crucial for comprehending the characteristics that underlie various values through data-driven analyses. Our experiment results show improvement in the performance of human value detection tasks in many categories.

Entropy doi: 10.3390/e26040326

Authors: Xia Tan Cong Wang Shu-Zheng Yang

A hot NUT–Kerr–Newman black hole is a general stationary axisymmetric black hole. In this black hole spacetime, the dynamical equations of fermions at the horizon are modified by considering Lorentz breaking. The corrections to the Hawking temperature and Bekenstein–Hawking entropy at the horizon of the black hole are studied in depth. Based on the semiclassical correction, the Bekenstein–Hawking entropy of this black hole is quantum-corrected by considering the perturbative effect of the Planck constant ℏ. The latter part of this paper presents a detailed discussion of the obtained results and their physical implications.

Entropy doi: 10.3390/e26040325

Authors: Zhenyin Yao Wenzhong Yang Fuyuan Wei

In social networks, the occurrence of unexpected events rapidly catalyzes the widespread dissemination and further evolution of network public opinion. The advent of zero-shot stance detection aligns more closely with the characteristics of stance detection in today's digital age, where the absence of training examples for specific models poses significant challenges. This task necessitates models with robust generalization abilities to discern target-related, transferable stance features within training data. Recent advances in prompt-based learning have showcased notable efficacy in few-shot text classification. Such methods typically employ a uniform prompt pattern across all instances, yet they overlook the intricate relationship between prompts and instances, thereby failing to sufficiently direct the model towards learning task-relevant knowledge and information. This paper argues for the critical need to dynamically enhance the relevance between specific instances and prompts. Thus, we introduce a stance detection model underpinned by a gated multilayer perceptron (gMLP) and a prompt learning strategy, which is tailored for zero-shot stance detection scenarios. Specifically, the gMLP is utilized to capture semantic features of instances, coupled with a control gate mechanism to modulate the influence of the gate on prompt tokens based on the semantic context of each instance, thereby dynamically reinforcing the instance–prompt connection. Moreover, we integrate contrastive learning to empower the model with more discriminative feature representations. Experimental evaluations on the VAST and SEM16 benchmark datasets substantiate our method's effectiveness, yielding a 1.3% improvement over the JointCL model on the VAST dataset.

Entropy doi: 10.3390/e26040324

Authors: Tatsuaki Tsuruyama

Recent advancements in information thermodynamics have revealed that information can be directly converted into mechanical work. Specifically, RNA transcription and nanopore sequencing serve as prime examples of this conversion, reading information from a DNA template. This paper introduces an information thermodynamic model in which such molecular motors move along the DNA template by converting the information read from the template into their own motion. This process is stochastic, characterized by significant fluctuations in forward movement, and is described by the Fokker–Planck equation in terms of a drift velocity and a diffusion coefficient. In the current study, it is hypothesized that by utilizing the sequence information of the template DNA as mutual information, the fluctuations can be reduced, thereby biasing the movement forward along the DNA and, consequently, reducing reading errors. Further research into the conversion of biological information by molecular motors could unveil new applications and important insights into the characteristics of information processing in biology.
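
The drift-diffusion picture can be illustrated with a simple Euler-Maruyama simulation of motor positions; the parameter values and the function name are ours, purely for illustration.

```python
import numpy as np

def simulate_motor(v, D, dt, steps, n_traj, rng=None):
    """Euler-Maruyama sketch of drift-diffusion along the template:
    dx = v*dt + sqrt(2*D*dt)*dW, with drift velocity v and diffusion
    coefficient D.  Returns positions of shape (n_traj, steps)."""
    rng = np.random.default_rng(rng)
    dx = v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal((n_traj, steps))
    return dx.cumsum(axis=1)
```

Reducing D, as the hypothesized use of template-sequence mutual information would, narrows the spread of final motor positions while leaving the mean advance v·t unchanged, which is the fluctuation-reduction effect the abstract describes.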

Entropy doi: 10.3390/e26040323

Authors: Adrian-Josue Guel-Cortez Eun-jin Kim Mohamed W. Mehrez

Controlling the time evolution of a probability distribution that describes the dynamics of a given complex system is a challenging problem. Achieving success in this endeavour will benefit multiple practical scenarios, e.g., controlling mesoscopic systems. Here, we propose a control approach blending the model predictive control technique with insights from information geometry theory. Focusing on linear Langevin systems, we use model predictive control's online optimisation capabilities to determine the system inputs that minimise deviations from the geodesic of the information length over time, ensuring dynamics with minimum "geometric information variability". We validate our methodology through numerical experimentation on the Ornstein–Uhlenbeck process and the Kramers equation, demonstrating its feasibility. Furthermore, in the context of the Ornstein–Uhlenbeck process, we analyse the impact on the entropy production and entropy rate, providing a physical understanding of the effects of minimum information variability control.

Entropy doi: 10.3390/e26040322

Authors: Michael Lass Tobias Kenter Christian Plessl Martin Brehm

We present a novel approach to characterize and quantify microheterogeneity and microphase separation in computer simulations of complex liquid mixtures. Our post-processing method is based on local density fluctuations of the different constituents in sampling spheres of varying size. It can be easily applied to both molecular dynamics (MD) and Monte Carlo (MC) simulations, including periodic boundary conditions. Multidimensional correlation of the density distributions yields a clear picture of the domain formation due to the subtle balance of different interactions. We apply our approach to the example of force field molecular dynamics simulations of imidazolium-based ionic liquids with different side chain lengths at different temperatures, namely 1-ethyl-3-methylimidazolium chloride, 1-hexyl-3-methylimidazolium chloride, and 1-decyl-3-methylimidazolium chloride, which are known to form distinct liquid domains. We put the results into the context of existing microheterogeneity analyses and demonstrate the advantages and sensitivity of our novel method. Furthermore, we show how to estimate the configuration entropy from our analysis, and we investigate voids in the system. The analysis has been implemented into our program package TRAVIS and is thus available as free software.
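
The sampling-sphere idea can be sketched as follows, assuming a cubic periodic box and our own function naming; the authors' TRAVIS implementation is considerably more elaborate.

```python
import numpy as np

def local_density_samples(positions, box, radius, n_spheres, rng=None):
    """Number density of one constituent inside randomly placed
    sampling spheres of a given radius, with minimum-image periodic
    boundary conditions in a cubic box of side `box`."""
    rng = np.random.default_rng(rng)
    centers = rng.uniform(0.0, box, size=(n_spheres, 3))
    vol = 4.0 / 3.0 * np.pi * radius**3
    counts = np.empty(n_spheres)
    for k, c in enumerate(centers):
        d = positions - c
        d -= box * np.round(d / box)      # minimum-image convention
        counts[k] = np.count_nonzero((d * d).sum(axis=1) < radius**2)
    return counts / vol
```

Repeating this for each constituent and for a range of sphere radii yields the density fluctuation distributions whose width, relative to a homogeneous reference, signals microheterogeneity.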

Entropy doi: 10.3390/e26040321

Authors: Mingfeng Li Xin Li Mianning Hu Deyu Yuan

In underground industries, practitioners frequently employ argots to communicate discreetly and evade surveillance by investigative agencies. Proposing an innovative approach using word vectors and large language models, we aim to decipher and understand the myriad argots of these industries, providing crucial technical support for law enforcement to detect and combat illicit activities. Specifically, positional differences in semantic space distinguish argots, and pre-trained language models' corpora are crucial for interpreting them. Expanding on these concepts, the article assesses the semantic coherence of word vectors in the semantic space based on the concept of information entropy. Simultaneously, we devised a labeled argot dataset, MNGG, and developed an argot recognition framework named CSRMECT, along with an argot interpretation framework called LLMResolve. These frameworks leverage the MECT model, the large language model, prompt engineering, and the DBSCAN clustering algorithm. Experimental results demonstrate that the CSRMECT framework outperforms the current optimal model by 10% in terms of the F1 value for argot recognition on the MNGG dataset, while the LLMResolve framework achieves a 4% higher accuracy in interpretation compared to the current optimal model. The related experiments also indicate a potential correlation between vector information entropy and model performance.

Entropy doi: 10.3390/e26040320

Authors: Mustapha Bounoua Giulio Franzese Pietro Michiardi

Multimodal datasets are ubiquitous in modern applications, and multimodal Variational Autoencoders are a popular family of models that aim to learn a joint representation of different modalities. However, existing approaches suffer from a coherence–quality tradeoff in which models with good generation quality lack generative coherence across modalities and vice versa. In this paper, we discuss the limitations underlying the unsatisfactory performance of existing methods in order to motivate the need for a different approach. We propose a novel method that uses a set of independently trained and unimodal deterministic autoencoders. Individual latent variables are concatenated into a common latent space, which is then fed to a masked diffusion model to enable generative modeling. We introduce a new multi-time training method to learn the conditional score network for multimodal diffusion. Our methodology substantially outperforms competitors in both generation quality and coherence, as shown through an extensive experimental campaign.

Entropy doi: 10.3390/e26040319

Authors: Hongjuan Gao Hui Wang Shijie Zhao

In the acquisition process of 3D cultural relics, it is common to encounter noise. To facilitate the generation of high-quality 3D models, we propose an approach based on graph signal processing that combines color and geometric features to denoise the point cloud. We divide the 3D point cloud into patches based on self-similarity theory and create an appropriate underlying graph with a Markov property. The features of the vertices in the graph are represented using 3D coordinates, normal vectors, and color. We formulate point cloud denoising as a maximum a posteriori (MAP) estimation problem and use a graph Laplacian regularization (GLR) prior to identify the most probable noise-free point cloud. In the denoising process, we moderately simplify the 3D point cloud to reduce the running time of the denoising algorithm. The experimental results demonstrate that our proposed approach outperforms five competing methods in both subjective and objective assessments. It requires fewer iterations and exhibits strong robustness, effectively removing noise from the surface of cultural relic point clouds while preserving fine-scale 3D features such as texture and ornamentation. This results in more realistic 3D representations of cultural relics.
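
The MAP estimate under a GLR prior has a convenient closed form that can be sketched in a few lines. The following is a generic illustration of graph Laplacian regularization under stated assumptions (Gaussian edge weights, one shared graph for all three coordinates), not the paper's patch-based pipeline:

```python
import numpy as np

def graph_laplacian(points, sigma):
    """Combinatorial Laplacian of a fully connected graph over points,
    with Gaussian edge weights w_ij = exp(-||p_i - p_j||^2 / 2 sigma^2)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def glr_denoise(noisy, L, gamma):
    """MAP estimate of min_x ||y - x||^2 + gamma * tr(x^T L x),
    solved in closed form: x = (I + gamma L)^{-1} y, per coordinate."""
    return np.linalg.solve(np.eye(len(noisy)) + gamma * L, noisy)

rng = np.random.default_rng(1)
clean = np.stack([np.linspace(0.0, 1.0, 50)] * 3, axis=1)  # points on a 3D line
noisy = clean + 0.01 * rng.normal(size=clean.shape)
L = graph_laplacian(noisy, sigma=0.2)
x = glr_denoise(noisy, L, gamma=2.0)
smooth = lambda p: float((p * (L @ p)).sum())  # graph smoothness tr(p^T L p)
print(smooth(x) < smooth(noisy))  # the optimum always lowers smoothness
```

The printed comparison is guaranteed by optimality of the closed-form solution: the regularized objective at x cannot exceed its value at the noisy input.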

Entropy doi: 10.3390/e26040318

Authors: Evgeny Kagan Irad Ben-Gal

The paper addresses the problem of distinguishing the leading agents in a group. The problem is considered in the framework of classification problems, where the agents in the group select items with respect to certain properties. The suggested method of distinguishing the leading agents utilizes the connectivity between the agents and the Rokhlin distance between subgroups of agents. The method is illustrated by numerical examples. It can be useful for studying the division of labor in swarm dynamics and for analyzing data fusion in tasks based on wisdom-of-the-crowd techniques.

Entropy doi: 10.3390/e26040317

Authors: Xiaoxiang Jin Gangsan Kim Sangwon Chae Hong-Yeop Song

In this paper, we propose the zero-correlation-zone (ZCZ) of radius r on two-dimensional m×n sonar sequences and define the (m,n,r) ZCZ sonar sequences. We also define a new notion of optimality for an (m,n,r) ZCZ sonar sequence, namely that it has the largest r for given m and n. Because of the ZCZ for perfect autocorrelation, we are able to relax the distinct difference property of conventional sonar sequences, and hence the autocorrelation of ZCZ sonar sequences outside the ZCZ may not be upper bounded by 1. We may sometimes require such an ideal autocorrelation outside the ZCZ, and we define ZCZ-DD sonar sequences, indicating that they have an additional distinct difference (DD) property. We first derive an upper bound on the ZCZ radius r in terms of m and n ≥ m. We next propose some constructions for (m,n,r) ZCZ sonar sequences, which lead to very good constructive lower bounds on r. Furthermore, these constructions suggest that for given m and r, the parameter n can be made arbitrarily large. We present some exhaustive search results on the existence of (m,n,r) ZCZ sonar sequences for some small values of r. For ZCZ-DD sonar sequences, we prove that some variations of Costas arrays construct ZCZ-DD sonar sequences with ZCZ radius r=2. We also provide some exhaustive search results on the existence of (m,n,r) ZCZ-DD sonar sequences. Many open problems are listed at the end.

Entropy doi: 10.3390/e26040316

Authors: Amal Altamimi Belgacem Ben Youssef

Rapid and continuous advancements in remote sensing technology have resulted in finer resolutions and higher acquisition rates of hyperspectral images (HSIs). These developments have triggered a need for new processing techniques brought about by the confined power and constrained hardware resources aboard satellites. This article proposes two novel lossless and near-lossless compression methods, employing our recent seed generation and quadrature-based square rooting algorithms, respectively. The main advantage of the former method lies in its acceptable complexity utilizing simple arithmetic operations, making it suitable for real-time onboard compression. In addition, this near-lossless compressor could be incorporated for hard-to-compress images offering a stabilized reduction at nearly 40% with a maximum relative error of 0.33 and a maximum absolute error of 30. Our results also show that a lossless compression performance, in terms of compression ratio, of up to 2.6 is achieved when testing with hyperspectral images from the Corpus dataset. Further, an improvement in the compression rate over the state-of-the-art k2-raster technique is realized for most of these HSIs by all four variations of our proposed lossless compression method. In particular, a data reduction enhancement of up to 29.89% is realized when comparing their respective geometric mean values.

Entropy doi: 10.3390/e26040315

Authors: Guozheng Yang Yongheng Zhang Yuliang Lu Yi Xie Jiayi Yu

Network security situational awareness (NSSA) aims to capture, understand, and display security elements in large-scale network environments in order to predict security trends in the relevant network environment. With the internet's increasingly large scale, increasingly complex structure, and gradual diversification of components, the traditional single-layer network topology model can no longer meet the needs of network security analysis. Therefore, we conduct research based on a multi-layer network model for network security situational awareness, which is characterized by the three-layer network structure of a physical device network, a business application network, and a user role network. Its network characteristics require new assessment methods, so we propose a multi-layer network link importance assessment metric: the multi-layer-dependent link entropy (MDLE). On the one hand, the MDLE comprehensively evaluates the connectivity importance of links by fitting the link-local betweenness centrality and mapping entropy. On the other hand, it relies on the link-dependent mechanism to better aggregate the link importance contributions in each network layer. The experimental results show that the MDLE has better ordering monotonicity during critical link discovery and a higher destruction efficacy in destruction simulations compared to classical link importance metrics, thus better adapting to the critical link discovery requirements of a multi-layer network topology.

Entropy doi: 10.3390/e26040314

Authors: Ali Mostafazadeh

We consider some basic problems associated with quantum mechanics of systems having a time-dependent Hilbert space. We provide a consistent treatment of these systems and address the possibility of describing them in terms of a time-independent Hilbert space. We show that in general the Hamiltonian operator does not represent an observable of the system even if it is a self-adjoint operator. This is related to a hidden geometric aspect of quantum mechanics arising from the presence of an operator-valued gauge potential. We also offer a careful treatment of quantum systems whose Hilbert space is obtained by endowing a time-independent vector space with a time-dependent inner product.

Entropy doi: 10.3390/e26040313

Authors: Luca Razzoli Gabriele Cenedese Maria Bondani Giuliano Benenti

Quantum walks have proven to be a universal model for quantum computation and to provide speed-up in certain quantum algorithms. The discrete-time quantum walk (DTQW) model, among others, is one of the most suitable candidates for circuit implementation due to its discrete nature. Current implementations, however, are usually characterized by quantum circuits of large size and depth, which leads to a higher computational cost and severely limits the number of time steps that can be reliably implemented on current quantum computers. In this work, we propose an efficient and scalable quantum circuit implementing the DTQW on the 2^n-cycle based on the diagonalization of the conditional shift operator. For t time steps of the DTQW, the proposed circuit requires only O(n^2 + nt) two-qubit gates compared to the O(n^2 t) of the current most efficient implementation based on quantum Fourier transforms. We test the proposed circuit on an IBM quantum device for a Hadamard DTQW on the 4-cycle and 8-cycle characterized by periodic dynamics and by recurrent generation of maximally entangled single-particle states. Experimental results are meaningful well beyond the regime of few time steps, paving the way for reliable implementation and use on quantum computers.
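
A classical state-vector simulation makes the walk itself (coin toss plus conditional shift) easy to verify, independently of any circuit compilation. The following is a generic sketch of the Hadamard DTQW on an N-cycle, not the proposed circuit:

```python
import numpy as np

def dtqw_cycle(N=8, steps=8):
    """Hadamard discrete-time quantum walk on an N-cycle.
    State psi[c, x]: coin c in {0, 1}, position x in Z_N."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    psi = np.zeros((2, N), dtype=complex)
    psi[:, 0] = np.array([1, 1j]) / np.sqrt(2)     # symmetric initial coin
    for _ in range(steps):
        psi = H @ psi                              # coin toss on every site
        psi[0] = np.roll(psi[0], 1)                # coin 0 shifts right
        psi[1] = np.roll(psi[1], -1)               # coin 1 shifts left
    return np.abs(psi) ** 2                        # coin-position probabilities

p = dtqw_cycle(N=8, steps=8)
print(round(float(p.sum()), 6))  # 1.0: unitary evolution preserves the norm
```

Since the coin is unitary and each shift is a permutation, the total probability stays exactly 1 at every step, which is a convenient sanity check for any circuit-level implementation.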

Entropy doi: 10.3390/e26040312

Authors: Michel Broniatowski Wolfgang Stummer

It is well known that in information theory, as well as in the adjacent fields of statistics, machine learning and artificial intelligence, it is essential to quantify the dissimilarity between objects of uncertain/imprecise/inexact/vague information; correspondingly, constrained optimization is of great importance, too. In view of this, we define the dissimilarity-measure-natured generalized φ-divergences between fuzzy sets, ν-rung orthopair fuzzy sets, extended representation type ν-rung orthopair fuzzy sets as well as between those fuzzy set types and vectors. For those, we present how to tackle corresponding constrained minimization problems by appropriately applying our recently developed dimension-free bare (pure) simulation method. An analogous program is carried out by defining and optimizing generalized φ-divergences between (rescaled) basic belief assignments as well as between (rescaled) basic belief assignments and vectors.

Entropy doi: 10.3390/e26040311

Authors: Pascal A. Schirmer Iosif Mporas

In this article, the topic of time series modelling is discussed. It highlights the criticality of analysing and forecasting time series data across various sectors, identifying five primary application areas: denoising, forecasting, nonlinear transient modelling, anomaly detection, and degradation modelling. It further outlines the mathematical frameworks employed in a time series modelling task, categorizing them into statistical, linear algebra, and machine- or deep-learning-based approaches, with each category serving distinct dimensions and complexities of time series problems. Additionally, the article reviews the extensive literature on time series modelling, covering statistical processes, state space representations, and machine and deep learning applications in various fields. The unique contribution of this work lies in its presentation of a Python-based toolkit for time series modelling (PyDTS) that integrates popular methodologies and offers practical examples and benchmarking across diverse datasets.

Entropy doi: 10.3390/e26040308

Authors: Xinkai Sun Sanguo Zhang Shuangge Ma

In the classification task, label noise has a significant impact on models' performance, primarily manifested in the disruption of prediction consistency, thereby reducing classification accuracy. This work introduces a novel prediction consistency regularization that mitigates the impact of label noise on neural networks by imposing constraints on the prediction consistency of similar samples. However, determining which samples should be considered similar is a primary challenge. We formalize similar sample identification as a clustering problem and employ twin contrastive clustering (TCC) to address this issue. To ensure similarity between samples within each cluster, we enhance TCC by adjusting the clustering prior distribution using label information. Based on the adjusted TCC's clustering results, we first construct the prototype for each cluster and then formulate a prototype-based regularization term to enhance prediction consistency for the prototype within each cluster and counteract the adverse effects of label noise. We conducted comprehensive experiments using benchmark datasets to evaluate the effectiveness of our method under various scenarios with different noise rates. The results explicitly demonstrate the enhancement in classification accuracy. Subsequent analytical experiments confirm that the proposed regularization term effectively mitigates noise and that the adjusted TCC enhances the quality of similar sample recognition.

Entropy doi: 10.3390/e26040310

Authors: Peter H. Yoon Rodrigo A. López Chadi S. Salem John W. Bonnell Sunjung Kim

The quiet-time solar wind electrons feature non-thermal characteristics when viewed from the perspective of their velocity distribution functions. They typically have an appearance of being composed of a denser thermal “core” population plus a tenuous energetic “halo” population. At first, such a feature was empirically fitted with the kappa velocity space distribution function, but ever since the ground-breaking work by Tsallis, the space physics community has embraced the potential implication of the kappa distribution as reflecting the non-extensive nature of the space plasma. From the viewpoint of microscopic plasma theory, the formation of the non-thermal electron velocity distribution function can be interpreted in terms of the plasma being in a state of turbulent quasi-equilibrium. Such a finding brings forth the possible existence of a profound inter-relationship between the non-extensive statistical state and the turbulent quasi-equilibrium state. The present paper further develops the idea of solar wind electrons being in the turbulent equilibrium, but, unlike the previous model, which involves the electrostatic turbulence near the plasma oscillation frequency (i.e., Langmuir turbulence), the present paper considers the impact of transverse electromagnetic turbulence, particularly, the turbulence in the whistler-mode frequency range. It is found that the coupling of spontaneously emitted thermal fluctuations and the background turbulence leads to the formation of a non-thermal electron velocity distribution function of the type observed in the solar wind during quiet times. This demonstrates that the whistler-range turbulence represents an alternative mechanism for producing the kappa-like non-thermal distribution, especially close to the Sun and in the near-Earth space environment.
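
The kappa distribution mentioned above can be written down directly; the sketch below uses one common one-dimensional form (conventions for kappa distributions vary across the literature, so the normalization here is an illustrative choice) to show the suprathermal tail relative to a Maxwellian with the same thermal speed.

```python
import math

def kappa_pdf(v, theta=1.0, kappa=4.0):
    """One-dimensional kappa velocity distribution, normalized so that it
    integrates to 1 over v; it tends to a Maxwellian as kappa -> infinity."""
    norm = math.gamma(kappa) / (
        math.sqrt(math.pi * kappa) * theta * math.gamma(kappa - 0.5))
    return norm * (1.0 + v * v / (kappa * theta * theta)) ** (-kappa)

def maxwellian_pdf(v, theta=1.0):
    """Maxwellian with the same thermal speed theta."""
    return math.exp(-(v / theta) ** 2) / (theta * math.sqrt(math.pi))

# The power-law "halo" tail dominates the exponential core far from v = 0:
print(kappa_pdf(5.0) > maxwellian_pdf(5.0))
```

At five thermal speeds the kappa tail exceeds the Maxwellian by many orders of magnitude, which is the qualitative core/halo picture the abstract describes.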

Entropy doi: 10.3390/e26040309

Authors: Nicholas Savino Jacob Leamer Ravi Saripalli Wenlei Zhang Denys Bondar Ryan Glasser

Free-space optical (FSO) communication can be subject to various types of distortion and loss as the signal propagates through non-uniform media. In experiment and simulation, we demonstrate that the state of polarization and degree of polarization of light passed through underwater bubbles, which cause turbulence, are preserved. Our experimental setup serves as an efficient, low-cost alternative to long-distance atmospheric or underwater testing. We compare our experimental results with those of simulations, in which we model underwater bubbles and, separately, atmospheric turbulence. Our findings suggest potential improvements in polarization-based FSO communication schemes.

Entropy doi: 10.3390/e26040307

Authors: Maciej Wołoszyn

The polarization of opinions and difficulties in reaching a consensus are central problems of many modern societies. Understanding the dynamics governing those processes is, therefore, one of the main aims of sociophysics. In this work, the Sznajd model of opinion dynamics is investigated with Monte Carlo simulations performed on four different regular lattices: triangular, honeycomb, and square with von Neumann or Moore neighborhood. The main objective is to discuss the interplay of the probability of convincing (conformity) and mass media (external) influence and to provide the details of the possible phase transitions. The results indicate that, while stronger bonds and openness to discussion and argumentation may help in reaching a consensus, external influence becomes destructive at different levels depending on the lattice.
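
A minimal Monte Carlo sketch of Sznajd-type dynamics on a square lattice with periodic boundaries conveys the two competing influences, conformity and mass media. The update rule and parameter names below are simplified illustrations, not the exact protocol of the study:

```python
import random

def sznajd_step(spins, L, p_conform=0.9, p_media=0.0):
    """One Monte Carlo update: a randomly chosen horizontal pair that
    agrees convinces each von Neumann neighbor of the pair with
    probability p_conform; independently, each of those neighbors may
    adopt the media opinion (+1) with probability p_media."""
    i, j = random.randrange(L), random.randrange(L)
    i2, j2 = i, (j + 1) % L                      # horizontal partner
    neighbors = [(i - 1, j), (i + 1, j), (i, j - 1),
                 (i2 - 1, j2), (i2 + 1, j2), (i2, (j2 + 1) % L)]
    if spins[i][j] == spins[i2][j2]:             # agreeing pair spreads
        for a, b in neighbors:
            if random.random() < p_conform:
                spins[a % L][b % L] = spins[i][j]
    for a, b in neighbors:                       # external (media) influence
        if random.random() < p_media:
            spins[a % L][b % L] = 1

def magnetization(spins):
    n = len(spins)
    return sum(map(sum, spins)) / (n * n)

random.seed(0)
L = 20
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for _ in range(20000):
    sznajd_step(spins, L, p_conform=0.9, p_media=0.01)
print(abs(magnetization(spins)))  # |m| near 1 indicates consensus
```

Sweeping p_conform and p_media and recording the stationary magnetization is the basic experiment behind the phase diagrams discussed in the abstract.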

Entropy doi: 10.3390/e26040306

Authors: Michail Gkagkos Charalambos D. Charalambous

The main focus of this paper is the derivation of the structural properties of the test channels of Wyner's operational information rate distortion function (RDF), R̄(Δ_X), for arbitrary abstract sources and, subsequently, the derivation of additional properties for a tuple of multivariate correlated, jointly independent and identically distributed Gaussian random variables, {X_t, Y_t}_{t=1}^∞, X_t: Ω → R^{n_x}, Y_t: Ω → R^{n_y}, with average mean-square error at the decoder and the side information, {Y_t}_{t=1}^∞, available only at the decoder. For the tuple of multivariate correlated Gaussian sources, we construct optimal test channel realizations which achieve the informational RDF, R̄(Δ_X) ≜ inf_{M(Δ_X)} I(X; Z | Y), where M(Δ_X) is the set of auxiliary RVs Z such that P_{Z|X,Y} = P_{Z|X}, X̂ = f(Y, Z), and E{||X − X̂||²} ≤ Δ_X. We show the following fundamental structural properties: (1) optimal test channel realizations that achieve the RDF satisfy the conditional independence P_{X|X̂,Y,Z} = P_{X|X̂,Y} = P_{X|X̂} and E[X | X̂, Y, Z] = E[X | X̂] = X̂; (2) similarly, for the conditional RDF, R_{X|Y}(Δ_X), when the side information is available to both the encoder and the decoder, we show the equality R̄(Δ_X) = R_{X|Y}(Δ_X); (3) we derive the water-filling solution for R_{X|Y}(Δ_X).
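
Property (3) follows the classical reverse water-filling pattern for Gaussian vector sources. The sketch below illustrates that pattern on the eigenvalues of the conditional covariance of X given Y; it is a generic textbook computation (rate in nats), not the paper's specific construction:

```python
import numpy as np

def conditional_rdf_waterfill(cond_cov_eigs, D):
    """Reverse water-filling: distribute total distortion D across the
    eigenvalues lambda_i of the conditional covariance of X given Y,
    with per-component distortion d_i = min(theta, lambda_i) and
    rate = sum_i 0.5 * ln(lambda_i / d_i) in nats."""
    lam = np.sort(np.asarray(cond_cov_eigs, dtype=float))[::-1]
    lo, hi = 0.0, float(lam.max())
    for _ in range(100):                  # bisect for the water level theta
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).sum() > D:
            hi = theta
        else:
            lo = theta
    d = np.minimum(theta, lam)            # per-component distortions
    return float(0.5 * np.log(lam / d).sum())

rate = conditional_rdf_waterfill([4.0, 1.0, 0.25], D=1.0)
print(round(rate, 3))  # 1.674: the weakest component gets no rate
```

Here the water level works out to theta = 0.375, so the smallest eigenvalue (0.25) is reproduced at its own variance and receives zero rate, while the two stronger components share the remaining distortion budget.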

Entropy doi: 10.3390/e26040305

Authors: Ivan B. Djordjevic Vijay Nafria

An entanglement-based continuous variable (CV) QKD scheme is proposed, performing information reconciliation over an entanglement-assisted link. The same entanglement generation source is used in both raw key transmission and information reconciliation. The entanglement generation source employs only low-cost devices operated in the C-band. The proposed CV-QKD scheme with information reconciliation over an entanglement-assisted link significantly outperforms the corresponding CV-QKD scheme with information reconciliation over an authenticated public channel. It also outperforms the CV-QKD scheme in which a classical free-space optical communication link is used to perform information reconciliation. An experimental demonstration over the free-space optical testbed established at the University of Arizona campus indicates that the proposed CV-QKD can operate in strong turbulence regimes. To improve the secret key rate performance further, adaptive optics is used.

Entropy doi: 10.3390/e26040304

Authors: Di Pei Jianhai Yue Jing Jiao

Vibration signal analysis is an important means of bearing fault diagnosis. Affected by the vibration of other machine parts, external noise and the vibration transmission path, the impulses induced by a bearing defect in the measured vibrations are very weak. Blind deconvolution (BD) methods can counteract the effect of the transmission path and enhance the fault impulses. Most BD methods highlight fault features of the filtered signals by impulse-featured objective functions (OFs). However, residual noise in the filtered signals has not been well tackled. To overcome this problem, a fuzzy entropy-assisted deconvolution (FEAD) method is proposed. First, FEAD takes advantage of the high noise sensitivity of fuzzy entropy (FuzzyEn) and constructs a weighted FuzzyEn–kurtosis OF to enhance the fault impulses while suppressing noise interference. Then, the PSO algorithm is used to iteratively solve for the optimal inverse deconvolution filter. Finally, envelope spectrum analysis is performed on the filtered signal to realize bearing fault diagnosis. The feasibility of FEAD was first verified on simulated bearing fault signals at constant and variable speeds. Bearing test signals from Case Western Reserve University (CWRU), a railway wheelset and a test bench validated the good performance of FEAD in fault feature enhancement. A comparison with other state-of-the-art BD methods, supported by quantitative results, indicated the superiority of the proposed method.
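
FuzzyEn itself is straightforward to compute. The sketch below shows the standard definition, an exponential membership function in place of sample entropy's hard threshold, which is what makes it sensitive to weak residual noise; it is not the paper's full weighted FuzzyEn–kurtosis objective or the PSO filter search:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy (FuzzyEn) of a 1-D series: like sample entropy, but
    pair similarity is the soft membership exp(-d^n / r) of the Chebyshev
    distance d between zero-mean templates. r scales with the std of x."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(m):
        templ = np.array([x[i:i + m] - x[i:i + m].mean()
                          for i in range(len(x) - m)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)
        np.fill_diagonal(sim, 0.0)        # exclude self-matches
        return sim.sum() / (len(templ) * (len(templ) - 1))
    return float(np.log(phi(m) / phi(m + 1)))

rng = np.random.default_rng(0)
noise = rng.normal(size=500)
sine = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
print(fuzzy_entropy(noise) > fuzzy_entropy(sine))  # noise is less regular
```

A noisy series keeps losing template matches as m grows, so its FuzzyEn is high, while a regular waveform retains them and scores low; this contrast is what a FuzzyEn term in a BD objective exploits.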

Entropy doi: 10.3390/e26040303

Authors: Mahault Albarracin Riddhi J. Pitliya Toby St. Clere Smithe Daniel Ari Friedman Karl Friston Maxwell J. D. Ramstead

In this paper, we unite concepts from Husserlian phenomenology, the active inference framework in theoretical biology, and category theory in mathematics to develop a comprehensive framework for understanding social action premised on shared goals. We begin with an overview of Husserlian phenomenology, focusing on aspects of inner time-consciousness, namely, retention, primal impression, and protention. We then review active inference as a formal approach to modeling agent behavior based on variational (approximate Bayesian) inference. Expanding upon Husserl's model of time consciousness, we consider collective goal-directed behavior, emphasizing shared protentions among agents and their connection to the shared generative models of active inference. This integrated framework aims to formalize shared goals in terms of shared protentions, and thereby shed light on the emergence of group intentionality. Building on this foundation, we incorporate mathematical tools from category theory, in particular, sheaf and topos theory, to furnish a mathematical image of individual and group interactions within a stochastic environment. Specifically, we employ morphisms between polynomial representations of individual agent models, allowing predictions not only of their own behaviors but also those of other agents and environmental responses. Sheaf and topos theory facilitates the construction of coherent agent worldviews and provides a way of representing consensus or shared understanding. We explore the emergence of shared protentions, bridging the phenomenology of temporal structure, multi-agent active inference systems, and category theory. Shared protentions are highlighted as pivotal for coordination and achieving common objectives. We conclude by acknowledging the intricacies stemming from stochastic systems and uncertainties in realizing shared goals.

Entropy doi: 10.3390/e26040302

Authors: Shiqi Liu Yan Zhang Shurui Fan

Mobile robot olfaction of toxic and hazardous odor sources is of great significance in anti-terrorism, disaster prevention, and control scenarios. To address the low search efficiency of current odor source localization strategies and their tendency to fall into local optima, the paper proposes the adaptive space-aware Infotaxis II algorithm. To improve the tracking efficiency of robots, a new reward function is designed by considering the space information and emphasizing the exploration behavior of robots. To complement the enhanced exploratory behavior, an adaptive navigation-update mechanism is proposed that adjusts the movement range of robots in real time through information entropy, avoiding excessive exploration during the search process, which may lead the robot into a local optimum. Subsequently, an improved adaptive cosine salp swarm algorithm is applied to determine the optimal information adaptive parameter. Comparative simulation experiments between ASAInfotaxis II and the classical search strategies are carried out in 2D and 3D scenarios regarding search efficiency and search behavior, which show that ASAInfotaxis II improves search efficiency to a greater extent and achieves a better balance between exploration and exploitation behaviors.

Entropy doi: 10.3390/e26040301

Authors: Haiyong Wang Chentao Lu

JPEG Reversible Data Hiding (RDH) is a method designed to extract hidden data from a marked image and perfectly restore the image to its original JPEG form. However, while existing RDH methods adaptively manage the visual distortion caused by embedded data, they often neglect the concurrent increase in file size. To rectify this oversight, we designed a new JPEG RDH scheme that addresses all influential metrics during the embedding phase, together with a dynamic frequency selection strategy whose frequency order is recoverable after data embedding. The process initiates with a pre-processing phase of blocks and the subsequent selection of frequencies. Utilizing a two-dimensional (2D) mapping strategy, we then compute the visual distortion and file size increment (FSI) for each image block by examining non-zero alternating current (AC) coefficient pairs (NZACPs) and their corresponding run lengths. Finally, we select appropriate block groups based on the influential metrics of each block group and proceed with data embedding by 2D histogram shifting (HS). Extensive experimentation demonstrates that our method efficiently and consistently outperforms existing techniques, achieving a superior peak signal-to-noise ratio (PSNR) and optimized FSI.

Entropy doi: 10.3390/e26040300

Authors: Hiroki Murakami Norimasa Yamada

Human movements are governed by a tradeoff between speed and accuracy. Previous studies investigating this tradeoff in sports movements involving the whole body have been limited to examining the relationship from the perspective of competition-specific movements, and findings on whether the relationship holds have not been unified. Therefore, this study incorporated a vertical jump task with an added condition requiring control of the landing position, to evaluate the essence of a sports movement that demands both speed and accuracy. Accuracy was examined using a method that quantifies the coordinates of the landing and takeoff positions using entropy. The mechanism of the tradeoff was then examined by confirming the phenomenon and analyzing the 3D velocity vector trajectories. An increase in accuracy and a decrease in speed were observed when the landing position was the control target, even in a vertical jumping task normally performed at maximum effort, and the 3D velocity vector was characterized by a reduced magnitude and a more vertical direction. While the entropy from the takeoff to the landing position seemed to decrease when the accuracy of the landing position improved, the following noteworthy result was obtained given the characteristics of the vertical jump: unlike the traditional feedback control underlying entropy reduction in hand movements, the trajectory is predetermined in a feedforward-like manner by controlling the initial velocity vector at takeoff, which allows the landing point to be adjusted.

Entropy doi: 10.3390/e26040299

Authors: Sarahi Aguayo-Tapia Gerardo Avalos-Almazan Jose de Jesus Rangel-Magdaleno

In the signal analysis context, the entropy concept can characterize signal properties for detecting anomalies or non-representative behaviors in physical systems. In motor fault detection theory, entropy can measure disorder or uncertainty, aiding in detecting and classifying faults or abnormal operation conditions. This is especially relevant in industrial processes, where early motor fault detection can prevent progressive damage, operational interruptions, or potentially dangerous situations. The study of motor fault detection based on entropy theory holds significant academic relevance too, effectively bridging theoretical frameworks with industrial exigencies. As industrial sectors progress, applying entropy-based methodologies becomes indispensable for ensuring machinery integrity based on control and monitoring systems. This academic endeavor enhances the understanding of signal processing methodologies and accelerates progress in artificial intelligence and other modern knowledge areas. A wide variety of entropy-based methods have been employed for motor fault detection. This process involves assessing the complexity of measured signals from electrical motors, such as vibrations or stator currents, to form feature vectors. These vectors are then fed into artificial-intelligence-based classifiers to distinguish between healthy and faulty motor signals. This paper discusses recent entropy-based methods and summarizes the most relevant results reported for fault detection over the last 10 years.
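
As a concrete example of the kind of entropy-based feature surveyed here, permutation entropy maps a signal's ordinal patterns to a normalized Shannon entropy; the implementation below is a generic sketch, not taken from any specific reviewed method:

```python
import math

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy: histogram the ordinal patterns of
    length-m embedded vectors (delay tau) and take the Shannon entropy,
    normalized by log2(m!) so the result lies in [0, 1]."""
    patterns = {}
    for i in range(len(x) - (m - 1) * tau):
        window = tuple(x[i + k * tau] for k in range(m))
        rank = tuple(sorted(range(m), key=window.__getitem__))
        patterns[rank] = patterns.get(rank, 0) + 1
    total = sum(patterns.values())
    H = -sum(c / total * math.log2(c / total) for c in patterns.values())
    return H / math.log2(math.factorial(m))

# A monotone signal has a single ordinal pattern, hence zero entropy:
print(permutation_entropy(list(range(100))))  # 0.0
```

In a fault detection pipeline, values like this (computed over windows of vibration or stator-current samples, possibly for several m and tau) would form the feature vector fed to a classifier.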

Entropy doi: 10.3390/e26040298

Authors: Thomas Götz Tyll Krüger Karol Niedzielewski Radomir Pestow Moritz Schäfer Jan Schneider

During the COVID-19 pandemic, it became evident that the effectiveness of applying intervention measures is significantly influenced by societal acceptance, which, in turn, is affected by the processes of opinion formation. This article explores one among the many possibilities of coupled opinion–epidemic systems. The findings reveal either intricate periodic patterns or chaotic dynamics, leading to substantial fluctuations in opinion distribution and, consequently, significant variations in the total number of infections over time. Interestingly, the model exhibits a protective pattern.

Entropy doi: 10.3390/e26040297

Authors: Dong-Biao Kang

We have explored the exponential surface brightness profile (SBP) of stellar disks, a topic extensively discussed by many authors yet seldom integrated with the study of correlations between black holes, bulges, and entire disks. Building upon our prior work in the statistical mechanics of disk-shaped systems and aligning with methodologies from other research, we analyze the influence of the central body. This analysis reveals analytical relationships among black holes, bulges, and the entire stellar disk. Additionally, we incorporate a specific angular momentum distribution (SAMD) that aligns more closely with observational data, showing that for the self-gravitating disk, with the same surface density, a reduction in its spin results in only a slight decrease in its radius, whereas with the same SAMD, an increment in its spin significantly limits its extent. A key feature of our model is its prediction that the surface density profile of an isolated disk will invariably exhibit downbending at a sufficient distance, a hypothesis that future observations can test. Our refined equations provide a notably improved fit for SBPs, particularly in the central regions of stellar disks. While our findings underscore the significance of statistical mechanics in comprehending spiral galaxy structures, they also highlight areas in our approach that warrant further discussion and exploration.

]]>Entropy doi: 10.3390/e26040296

Authors: Bartosz Biczuk Szymon Buś Sebastian Żurek Jarosław Piskorski Przemysław Guzik

Background: Early detection of atrial fibrillation (AF) is essential to prevent stroke and other cardiac and embolic complications. We compared the diagnostic properties for AF detection of the percentage of successive RR interval differences greater than or equal to 30 ms or 3.25% of the previous RR interval (pRR30 and pRR3.25%, respectively), and asymmetric entropy descriptors of RR intervals. Previously, both pRR30 and pRR3.25% outperformed many other heart rate variability (HRV) parameters in distinguishing AF from sinus rhythm (SR) in 60 s electrocardiograms (ECGs). Methods: The 60 s segments with RR intervals were extracted from the publicly available Physionet Long-Term Atrial Fibrillation Database (84 recordings, 24 h Holter ECGs). There were 31,753 60 s segments of AF and 32,073 60 s segments of SR. The diagnostic properties of all parameters were analysed with receiver operating characteristic (ROC) curve analysis, a confusion matrix and logistic regression. Results: The best model, combining pRR30, pRR3.25% and total entropic features (H), had the largest area under the curve (AUC) of 0.98, compared to 0.959 for pRR30 and 0.972 for pRR3.25%. However, the differences in AUC between pRR30 and pRR3.25% alone and the combined model were negligible from a practical point of view. Moreover, combining pRR30 and pRR3.25% with H significantly increased the number of false-negative cases by more than threefold. Conclusions: Asymmetric entropy has some potential in differentiating AF from SR in the 60 s RR interval time series, but the addition of these parameters does not seem to make a relevant difference compared to pRR30 and especially pRR3.25%.
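The two threshold criteria compared in the abstract can be stated compactly in code. A minimal sketch (the function name and toy RR series below are illustrative, not from the paper):

```python
def prr_metrics(rr_ms):
    """Percentage of successive RR-interval differences that are 'large',
    under the two criteria described in the abstract:
      pRR30    -- |dRR| >= 30 ms (absolute threshold)
      pRR3.25% -- |dRR| >= 3.25% of the preceding RR interval (relative)
    rr_ms: RR intervals in milliseconds (e.g., from a 60 s ECG segment).
    """
    diffs = [(rr_ms[i + 1] - rr_ms[i], rr_ms[i]) for i in range(len(rr_ms) - 1)]
    n = len(diffs)
    prr30 = 100.0 * sum(abs(d) >= 30.0 for d, _ in diffs) / n
    prr325 = 100.0 * sum(abs(d) >= 0.0325 * prev for d, prev in diffs) / n
    return prr30, prr325

# Irregular (AF-like) vs regular (SR-like) toy series, in milliseconds
af_like = [800, 620, 910, 700, 1010, 640, 870]
sr_like = [800, 805, 798, 802, 800, 803, 799]
print(prr_metrics(af_like))  # (100.0, 100.0): every successive difference is large
print(prr_metrics(sr_like))  # (0.0, 0.0): no successive difference is large
```

The contrast between the two toy series mirrors why both parameters separate AF from SR well in short segments.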

]]>Entropy doi: 10.3390/e26040295

Authors: Shiyan Lin Ruiyu Li Limin Gao

The leakage flow has a significant impact on the aerodynamic losses and efficiency of the compressor. This paper investigates the loss mechanism in the tip region based on a high-load cantilevered stator cascade. Firstly, a high-fidelity flow field structure was obtained based on the Enhanced Delayed Detached Eddy Simulation (EDDES) method. Subsequently, the Liutex method was employed to study the vortex structures in the tip region. The results indicate the presence of a tip leakage vortex (TLV), passage vortex (PV), and induced vortex (IV) in the tip region. At i = 4&deg; and 8&deg;, the induced vortex interacts with the PV and low-energy fluid, forming a &ldquo;three-shape&rdquo; mixed vortex. Finally, a qualitative and quantitative analysis of the loss sources in the tip flow field was conducted based on the entropy generation rate, and the impact of the incidence on the losses was explored. The loss sources in the tip flow field included endwall loss, blade profile loss, wake loss, and secondary flow loss. At i = 0&deg;, the loss primarily originated from the endwall and blade profile, accounting for 40% and 39%, respectively. As the incidence increased, the absolute value of losses increased, and the proportion of loss caused by secondary flow significantly increased. At i = 8&deg;, the proportion of secondary flow loss reached 47%, indicating the most significant impact.

]]>Entropy doi: 10.3390/e26040294

Authors: Derik W. Gryczak Ervin K. Lenzi Michely P. Rosseto Luiz R. Evangelista Rafael S. Zola

The interplay of diffusion with phenomena like stochastic adsorption&ndash;desorption, absorption, and reaction&ndash;diffusion is essential for life and manifests in diverse natural contexts. Many factors must be considered, including geometry, dimensionality, and the interplay of diffusion across bulk and surfaces. To address this complexity, we investigate the diffusion process in heterogeneous media, focusing on non-Markovian diffusion. This process is limited by a surface interaction with the bulk, described by a specific boundary condition relevant to systems such as living cells and biomaterials. The surface can adsorb and desorb particles, and the adsorbed particles may undergo lateral diffusion before returning to the bulk. Different behaviors of the system are identified through analytical and numerical approaches.

]]>Entropy doi: 10.3390/e26040293

Authors: Vito Antonio Cimmelli

In continuum physics, the dissipation principle, first proposed by Coleman and Noll in 1963, regards the second law of thermodynamics as a unilateral differential constraint on the constitutive equations. In 1996, Muschik and Ehrentraut provided a rigorous proof of such an approach under the assumption that, at an arbitrary instant, t0, in an arbitrary point, P0, of a continuous system, the entropy production is zero if, and only if, P0 is in thermodynamic equilibrium. In 2022, Cimmelli and Rogolino incorporated such an assumption into a more general formulation of the second law of thermodynamics. In this paper, we prove that the same conclusions hold if both the fundamental balance laws and their gradients are substituted into the entropy inequality. Such a methodology is applied to analyze strain-gradient elasticity.

]]>Entropy doi: 10.3390/e26040292

Authors: Xinye Guo Yan Li Xikui Liu

This paper concentrates on the finite-time&nbsp;H&infin;&nbsp;control problem for a type of stochastic discrete-time Markovian jump systems, characterized by time-delay and partly unknown transition probabilities. Initially, a stochastic finite-time (SFT)&nbsp;H&infin;&nbsp;state feedback controller and an SFT&nbsp;H&infin;&nbsp;observer-based state feedback controller are constructed to realize the closed-loop control of systems. Then, based on the Lyapunov&ndash;Krasovskii functional (LKF) method, some sufficient conditions are established to guarantee that closed-loop systems (CLSs) satisfy SFT boundedness and SFT&nbsp;H&infin;&nbsp;boundedness. Furthermore, the controller gains are obtained with the use of the linear matrix inequality (LMI) approach. In the end, numerical examples demonstrate the validity and effectiveness of the proposed design schemes.

]]>Entropy doi: 10.3390/e26040291

Authors: Mengqi Lu Robert B. Mann

We evaluate here the quantum gravity partition function that counts the dimension of the Hilbert space of a simply connected spatial region of a fixed proper volume in the context of Lovelock gravity, generalizing the results for Einstein gravity. It is found that there are sphere saddle metrics for a partition function at a fixed spatial volume in Lovelock theory. Those stationary points take exactly the same forms as in Einstein gravity. The logarithm of Z corresponding to a zero effective cosmological constant yields the Bekenstein&ndash;Hawking entropy of the boundary area, while that corresponding to a positive effective cosmological constant points to the Wald entropy of the boundary area. We also show the existence of zeroth-order phase transitions between different vacua, a phenomenon distinct from Einstein gravity.

]]>Entropy doi: 10.3390/e26040289

Authors: Sascha Kurz

It has been known since the 1970s that the difference of the non-zero weights of a projective Fq-linear two-weight code has to be a power of the characteristic of the underlying field. Here, we study non-projective two-weight codes and, e.g., show the same result under mild extra conditions. For small dimensions, we give exhaustive enumerations of the feasible parameters in the binary case.

]]>Entropy doi: 10.3390/e26040290

Authors: Justin Veiner Fady Alajaji Bahman Gharesifard

A unifying &alpha;-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN) that uses a canonical (or classical) discriminator loss function such as the one in the original GAN (VanillaGAN) system. The generator loss function is based on a symmetric class probability estimation type function, L&alpha;, and the resulting GAN system is termed L&alpha;-GAN. Under an optimal discriminator, it is shown that the generator&rsquo;s optimization problem consists of minimizing a Jensen-f&alpha;-divergence, a natural generalization of the Jensen-Shannon divergence, where f&alpha; is a convex function expressed in terms of the loss function L&alpha;. It is also demonstrated that this L&alpha;-GAN problem recovers as special cases a number of GAN problems in the literature, including VanillaGAN, least squares GAN (LSGAN), least kth-order GAN (LkGAN), and the recently introduced (&alpha;D,&alpha;G)-GAN with &alpha;D=1. Finally, experimental results are provided for three datasets&mdash;MNIST, CIFAR-10, and Stacked MNIST&mdash;to illustrate the performance of various examples of the L&alpha;-GAN system.

]]>Entropy doi: 10.3390/e26040287

Authors: James Wright Paul Bourke

A theoretical account of development in mesocortical anatomy is derived from the free energy principle, operating in a neural field with both Hebbian and anti-Hebbian neural plasticity. An elementary structural unit is proposed, in which synaptic connections at mesoscale are arranged in paired patterns with mirror symmetry. Exchanges of synaptic flux in each pattern form coupled spatial eigenmodes, and the line of mirror reflection between the paired patterns operates as a Markov blanket, so that prediction errors in exchanges between the pairs are minimized. The theoretical analysis is then compared to the outcomes from a biological model of neocortical development, in which neuron precursors are selected by apoptosis for cell body and synaptic connections maximizing synchrony and also minimizing axonal length. It is shown that this model results in patterns of connection with the anticipated mirror symmetries, at micro-, meso- and inter-areal scales, among lateral connections, and in cortical depth. This explains the spatial organization and functional significance of neuron response preferences, and is compatible with the structural form of both columnar and noncolumnar cortex. Multi-way interactions of mirrored representations can provide a preliminary anatomically realistic model of cortical information processing.

]]>Entropy doi: 10.3390/e26040288

Authors: Tom Froese

Cognitive science is confronted by several fundamental anomalies deriving from the mind&ndash;body problem. Most prominent is the problem of mental causation and the hard problem of consciousness, which can be generalized into the hard problem of agential efficacy and the hard problem of mental content. Here, it is proposed to accept these explanatory gaps at face value and to take them as positive indications of a complex relation: mind and matter are one, but they are not the same. They are related in an efficacious yet non-reducible, non-observable, and even non-intelligible manner. Natural science is well equipped to handle the effects of non-observables, and so the mind is treated as equivalent to a hidden &lsquo;black box&rsquo; coupled to the body. Two concepts are introduced given that there are two directions of coupling influence: (1) irruption denotes the unobservable mind hiddenly making a difference to observable matter, and (2) absorption denotes observable matter hiddenly making a difference to the unobservable mind. The concepts of irruption and absorption are methodologically compatible with existing information-theoretic approaches to neuroscience, such as measuring cognitive activity and subjective qualia in terms of entropy and compression, respectively. By offering novel responses to otherwise intractable theoretical problems from first principles, and by doing so in a way that is closely connected with empirical advances, irruption theory is poised to set the agenda for the future of the mind sciences.

]]>Entropy doi: 10.3390/e26040286

Authors: Paolo Gibilisco

Due to the classifying theorems by Petz and Kubo&ndash;Ando, we know that there are bijective correspondences between Quantum Fisher Information(s), operator means, and the class of symmetric, normalized operator monotone functions on the positive half line; this last class is usually denoted as&nbsp;Fop. This class of operator monotone functions has a significant structure, which is worthy of study; indeed, any step in understanding&nbsp;Fop, besides being interesting per se, immediately translates into a property of the classes of operator means and therefore of Quantum Fisher Information(s). In recent years, the&nbsp;f&harr;f&tilde;&nbsp;correspondence has been introduced, which associates a non-regular element of&nbsp;Fop&nbsp;to any regular element of the same set. In terms of operator means, this amounts to associating a mean with multiplicative character to a mean that has an additive character. In this paper, we survey a number of different settings where this technique has proven useful in Quantum Information Geometry. In Sections 1&ndash;4, all the needed background is provided. In Sections 5&ndash;14, we describe the main applications of the&nbsp;f&harr;f&tilde;&nbsp;correspondence.

]]>Entropy doi: 10.3390/e26040285

Authors: Yunfei Hou Changsheng Hu

This paper shows that the empirical distribution of cross-sectional analyst coverage in China&rsquo;s stock markets follows an exponential law in a given month from 2011 to 2020. The findings hold in both the emerging (Shanghai) and the developed market (Hong Kong). Moreover, the unique distribution parameter (i.e., mean) is directly related to the amount of market-wide information. Average analyst coverage exhibits a significant negative predictive power for stock-market uncertainty, highlighting the role of security analysts in diminishing the total uncertainty. The exponential law can be derived from the maximum entropy principle (MEP). When analysts, who are constrained by average ability in generating information (i.e., the first-order moment), strive to maximize the amount of market-wide information, this objective yields the exponential distribution. Contrary to the conventional wisdom that security analysts specialize in the generation of firm-specific information, empirical findings suggest that analysts primarily produce market-wide information across 25 countries. Although it remains unclear why cross-sectional analyst coverage reflects market-wide information, this paper provides an entropy-based explanation.
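The link between the first-moment constraint and the exponential law can be checked numerically: among nonnegative distributions with a given mean, the exponential has the largest differential entropy. A hedged illustration (the comparison distributions and the example mean are my choice, not from the paper):

```python
import math

# Differential entropies (in nats) of three nonnegative distributions
# sharing the same mean mu. The maximum entropy principle with a
# first-moment constraint on [0, inf) singles out the exponential law,
# whose entropy h = 1 + ln(mu) exceeds the alternatives below.
def h_exponential(mu):
    return 1.0 + math.log(mu)

def h_uniform(mu):
    # uniform on [0, 2*mu] has mean mu and entropy ln(2*mu)
    return math.log(2.0 * mu)

def h_half_normal(mu):
    # half-normal with mean mu has scale sigma = mu * sqrt(pi/2)
    sigma = mu * math.sqrt(math.pi / 2.0)
    return 0.5 + 0.5 * math.log(math.pi * sigma**2 / 2.0)

mu = 3.0  # hypothetical average analyst coverage
print(h_exponential(mu), h_half_normal(mu), h_uniform(mu))
```

For any mu the ordering is the same: the exponential dominates, which is the MEP mechanism the abstract invokes.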

]]>Entropy doi: 10.3390/e26040284

Authors: Changrui Zhang Jia Wang

Recently, as portable diagnostic devices have been brought to people almost anywhere, point-of-care (PoC) imaging has become more convenient and more popular than traditional &ldquo;bed imaging&rdquo;. Instant image segmentation, as an important technology of computer vision, is receiving more and more attention in PoC diagnosis. However, the image distortion caused by image preprocessing and the low resolution of medical images extracted by PoC devices are urgent problems that need to be solved. Moreover, more efficient feature representation is necessary in the design of instant image segmentation. In this paper, a new feature representation considering the relationships among local features with minimal parameters and a lower computational complexity is proposed. Since a feature window sliding along a diagonal can capture more pluralistic features, a Diagonal-Axial Multi-Layer Perceptron is designed to obtain the global correlation among local features for a more comprehensive feature representation. Additionally, a new multi-scale feature fusion is proposed to integrate nonlinear features with linear ones, yielding a more precise and richer feature representation. In order to improve the generalization of the models, a dynamic residual spatial pyramid pooling based on various receptive fields is constructed according to different sizes of images, which alleviates the influence of image distortion. The experimental results show that the proposed strategy performs better on instant image segmentation. Notably, it yields an average improvement of 1.31% in Dice over existing strategies on the BUSI, ISIC2018 and MoNuSeg datasets.

]]>Entropy doi: 10.3390/e26040283

Authors: Roberto Bruno Ugo Vaccaro

We consider the problem of constructing prefix-free codes in which a designated symbol, a space, can only appear at the end of codewords. We provide a linear-time algorithm to construct almost-optimal codes with this property, meaning that their average length differs from the minimum possible by at most one. We obtain our results by uncovering a relation between our class of codes and the class of one-to-one codes. Additionally, we derive upper and lower bounds to the average length of optimal prefix-free codes with a space in terms of the source entropy.
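The structural constraint on this class of codes can be illustrated with a toy checker; the codewords below are hypothetical examples of mine, not from the paper:

```python
def valid_code(codewords, space="_"):
    """Check the structural constraint described in the abstract: the code
    must be prefix-free, and the designated 'space' symbol may appear only
    as the final symbol of a codeword. Toy checker over {'0', '1', space}.
    """
    # Space allowed only in the last position of a codeword
    for w in codewords:
        if space in w[:-1]:
            return False
    # Prefix-freeness: no codeword is a proper prefix of another
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

print(valid_code(["0", "10", "11_"]))  # True
print(valid_code(["0", "0_1"]))        # False: space not at the end
print(valid_code(["0", "01"]))         # False: not prefix-free
```

The paper's algorithm constructs codes satisfying exactly these two conditions while keeping the average length within one of optimal.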

]]>Entropy doi: 10.3390/e26040282

Authors: Rabindra N. Mohapatra

Overwhelming astronomical evidence for dark matter and the absence of any laboratory evidence for it, despite many dedicated searches, have fueled speculation that dark matter may reside in a parallel universe interacting with the familiar universe only via gravitational interactions as well as possibly via some ultra-weak forces. In this scenario, we postulate that the visible universe co-exists with a mirror world consisting of an identical duplicate of forces and matter of our world, obeying a mirror symmetry. This picture, motivated by particle physics considerations, not only provides a natural candidate for dark matter but also has the potential to explain the matter&ndash;dark matter coincidence problem, i.e., why the dark matter content of the universe is only a few times the visible matter content. One requirement for mirror models is that the mirror world must be colder than our world to maintain the success of big bang nucleosynthesis. After a review of the basic features of the model, we present several new results: first is that the consistency between the coldness of the mirror world and the explanation of the matter&ndash;dark matter coincidence implies an upper bound on the inflation reheat temperature of the universe of around 10^6.5 GeV. We also argue that the coldness implies the mirror world consists mainly of mirror helium and very little mirror hydrogen, which is the exact opposite of what we see in the visible world.

]]>Entropy doi: 10.3390/e26040281

Authors: Marco Villani Elena Alboresi Roberto Serra

The conditions that allow for the sustained growth of a protocell population are investigated in the case of asymmetrical division. The results are compared to those of previous studies concerning models of symmetrical division, where synchronization (between duplication of the genetic material and fission of the lipid container) was found under a variety of different assumptions about the kinetic equations and about the place where molecular replication takes place. Such synchronization allows a sustained proliferation of the protocell population. In the asymmetrical case, there can be no true synchronization, since the time to duplication may depend upon the initial size, but we introduce a notion of homogeneous growth that actually allows for the sustained reproduction of a population of protocells. We first analyze Surface Reaction Models, defined in the text, and we show that in many cases they undergo homogeneous growth under the same kinetic laws that lead to synchronization in the symmetrical case. This is the case also for Internal Reaction Models (IRMs), which, however, require a deeper understanding of what homogeneous growth actually means, as discussed below.

]]>Entropy doi: 10.3390/e26040280

Authors: Stanislav Filatov Marcis Auzinsh

We extend the Bloch sphere formalism to pure two-qubit systems. Combining insights from Geometric Algebra and the analysis of entanglement in different conjugate bases, we identify a two-Bloch-sphere geometry that is suitable for representing maximally entangled states. It turns out that the relative direction of the coordinate axes of the two Bloch spheres may be used to describe the states. Moreover, the coordinate axes of one Bloch sphere should be right-handed and those of the other one should be left-handed. We describe and depict separable and maximally entangled states as well as entangling and non-entangling rotations. We also offer a graphical representation of the workings of a CNOT gate for different inputs. Finally, we provide a way to also represent partially entangled states and describe entanglement measures related to the surface area of the sphere enclosing the state representation.

]]>Entropy doi: 10.3390/e26040279

Authors: Joanna Andrzejak Leszek J. Chmielewski Joanna Landmesser-Rusek Arkadiusz Orłowski

Structural properties of the currency market were examined with the use of topological networks. Relationships between currencies were analyzed by constructing minimal spanning trees (MSTs). The dissimilarities between time series of currency returns were measured in various ways: by applying Euclidean distance, Pearson&rsquo;s linear correlation coefficient, Spearman&rsquo;s rank correlation coefficient, Kendall&rsquo;s coefficient, partial correlation, dynamic time warping measure, and Kullback&ndash;Leibler relative entropy. For the constructed MSTs, their topological characteristics were analyzed and conclusions were drawn regarding the influence of the dissimilarity measure used. It turned out that the strength of most types of correlations was highly dependent on the choice of the numeraire currency, while partial correlations were invariant in this respect. It can be stated that a network built on the basis of partial correlations provides a more adequate illustration of pairwise relationships in the foreign exchange market. The data for quotations of 37 of the most important world currencies and four precious metals in the period from 1 January 2019 to 31 December 2022 were used. The outbreak of the COVID-19 pandemic in 2020 and Russia&rsquo;s invasion of Ukraine in 2022 triggered changes in the topology of the currency network. As a result of these crises, the average distances between tree nodes decreased and the centralization of graphs increased. Our results confirm that currencies are often pegged to other currencies due to countries&rsquo; geographic locations and economic ties. The detected structures can be useful in descriptions of the currency market, can help in constructing a stable portfolio of foreign exchange rates, and can be a valuable tool in searching for economic factors influencing specific groups of countries.
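The correlation-based MST construction can be sketched in a few lines: convert pairwise Pearson correlations into a distance (one common choice is d = sqrt(2(1 − rho))) and run Prim's algorithm. The currency names and return series below are hypothetical, and the distance choice is an assumption of this sketch, not necessarily the paper's:

```python
import math

def pearson(x, y):
    """Pearson linear correlation of two equal-length return series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mst_edges(names, series):
    """Prim's algorithm on distances d_ij = sqrt(2 * (1 - rho_ij))."""
    d = {(i, j): math.sqrt(2.0 * (1.0 - pearson(series[i], series[j])))
         for i in range(len(names)) for j in range(i + 1, len(names))}
    in_tree, edges = {0}, []
    while len(in_tree) < len(names):
        # cheapest edge crossing the cut between tree and non-tree nodes
        i, j = min(((i, j) for (i, j) in d
                    if (i in in_tree) ^ (j in in_tree)), key=d.get)
        in_tree |= {i, j}
        edges.append((names[i], names[j]))
    return edges

names = ["EUR", "CHF", "JPY"]
returns = [[0.1, -0.2, 0.3, 0.0], [0.1, -0.1, 0.25, 0.05], [-0.3, 0.2, -0.1, 0.4]]
print(mst_edges(names, returns))  # EUR-CHF join first (highly correlated)
```

With 37 currencies plus four metals, the same routine yields the trees whose topology the paper analyzes.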

]]>Entropy doi: 10.3390/e26040278

Authors: Junkai Mao Yuexing Han Bing Wang

Accurate epidemic forecasting plays a vital role for governments to develop effective prevention measures for suppressing epidemics. Most of the present spatio&ndash;temporal models cannot provide a general framework for stable and accurate forecasting of epidemics with diverse evolutionary trends. Incorporating epidemiological domain knowledge ranging from single-patch to multi-patch into neural networks is expected to improve forecasting accuracy. However, relying solely on single-patch knowledge neglects inter-patch interactions, while constructing multi-patch knowledge is challenging without population mobility data. To address the aforementioned problems, we propose a novel hybrid model called metapopulation-based spatio&ndash;temporal attention network (MPSTAN). This model aims to improve the accuracy of epidemic forecasting by incorporating multi-patch epidemiological knowledge into a spatio&ndash;temporal model and adaptively defining inter-patch interactions. Moreover, we incorporate inter-patch epidemiological knowledge into both model construction and the loss function to help the model learn epidemic transmission dynamics. Extensive experiments conducted on two representative datasets with different epidemiological evolution trends demonstrate that our proposed model outperforms the baselines and provides more accurate and stable short- and long-term forecasting. We confirm the effectiveness of domain knowledge in the learning model and investigate the impact of different ways of integrating domain knowledge on forecasting. We observe that using domain knowledge in both model construction and the loss function leads to more efficient forecasting, and selecting appropriate domain knowledge can improve accuracy further.

]]>Entropy doi: 10.3390/e26040277

Authors: Ralf Eichhorn

When writing down a Langevin equation for the time evolution of a &ldquo;system&rdquo; in contact with a thermal bath, one typically makes the implicit (and often tacit) assumption that the thermal environment is in equilibrium at all times. Here, we take this assumption as a starting point to formulate the problem of a system evolving in contact with a thermal bath from the perspective of the bath, which, since it is in equilibrium, can be described by the microcanonical ensemble. We show that the microcanonical ensemble of the bath, together with the Hamiltonian equations of motion for all the constituents of the bath and system together, give rise to a Langevin equation for the system evolution alone. The friction coefficient turns out to be given in terms of auto-correlation functions of the interaction forces between the bath particles and the system, and the Einstein relation is recovered. Moreover, the connection to the Fokker&ndash;Planck equation is established.
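The resulting Langevin dynamics can be sketched with an Euler–Maruyama integrator, using the Einstein relation D = kT/γ to fix the noise strength. The harmonic potential and all parameter values are illustrative choices, not from the paper:

```python
import math
import random

# Hedged sketch: overdamped Langevin dynamics in a harmonic potential
# U(x) = k x^2 / 2, integrated with the Euler-Maruyama scheme. The
# Einstein relation sets the diffusion constant D = kT / gamma.
def langevin_trajectory(steps=10000, dt=1e-3, gamma=1.0, kT=1.0, k=1.0, seed=1):
    rng = random.Random(seed)
    D = kT / gamma                  # Einstein relation
    x = 0.0
    for _ in range(steps):
        force = -k * x              # deterministic drift from U(x)
        x += (force / gamma) * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        yield x

xs = list(langevin_trajectory())
var = sum(x * x for x in xs) / len(xs)
print(var)  # roughly kT / k = 1 by equipartition (statistical noise expected)
```

The stationary variance approaching kT/k is the equilibrium signature that a consistent bath description, as derived in the paper, must reproduce.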

]]>Entropy doi: 10.3390/e26040276

Authors: Michel Y. Louge Yujie Wang

We derive the ab initio equilibrium statistical mechanics of the gas&ndash;liquid&ndash;solid contact angle on planar periodic, monodisperse, textured surfaces subject to electrowetting. To that end, we extend an earlier theory that predicts that the advance or recession of the contact line amounts to distinct first-order phase transitions of the filling state in the ensemble of nearby surface cavities. Upon calculating the individual capacitance of a cavity subject to the influence of its near neighbors, we show how hysteresis, which is manifested by different advancing and receding contact angles, is affected by electrowetting. The analysis reveals nine distinct regimes characterizing contact angle behavior, three of which arise only when a voltage is applied to the conductive liquid drop. As the square voltage is progressively increased, the theory elucidates how the drop occasionally undergoes regime transitions triggering jumps in the contact angle, possibly changing its hysteresis, or saturating it at a value weakly dependent on further voltage growth. To illustrate these phenomena and validate the theory, we confront its predictions with four data sets. A benefit of the theory is that it forsakes trial and error when designing textured surfaces with specific contact angle behavior.

]]>Entropy doi: 10.3390/e26030275

Authors: Howard Baer Vernon Barger Dakotah Martinez Shadman Salam

Superstring flux compactifications can stabilize all moduli while leading to an enormous number of vacua solutions, each leading to different 4-d laws of physics. While the string landscape provides at present the only plausible explanation for the size of the cosmological constant, it may also predict the form of weak scale supersymmetry which is expected to emerge. Rather general arguments suggest a power-law draw to large soft terms, but these are subject to an anthropic selection of a not-too-large value for the weak scale. The combined selection allows one to compute relative probabilities for the emergence of supersymmetric models from the landscape. Models with weak scale naturalness appear most likely to emerge since they have the largest parameter space on the landscape. For fine-tuned models such as high-scale SUSY or split SUSY, the required weak scale fine-tuning shrinks their parameter space to tiny volumes, making them much less likely to appear compared to natural models. Probability distributions for sparticle and Higgs masses from natural models show a preference for Higgs mass mh&sim;125 GeV, with sparticles typically beyond the present LHC limits, in accord with data. From these considerations, we briefly describe how natural SUSY is expected to be revealed at future LHC upgrades. This article is a contribution to the Special Edition of the journal Entropy, honoring Paul Frampton on his 80th birthday.

]]>Entropy doi: 10.3390/e26030274

Authors: Kristian Stølevik Olsen Alex Hansen Eirik Grude Flekkøy

Hyper-ballistic diffusion is shown to arise from a simple model of microswimmers moving through a porous medium while competing for resources. By using a mean-field model where swimmers interact through the local concentration, we show that a non-linear Fokker&ndash;Planck equation arises. The solution exhibits hyper-ballistic superdiffusive motion, with a diffusion exponent of four. A microscopic simulation strategy is proposed, which shows excellent agreement with the theoretical analysis.

]]>Entropy doi: 10.3390/e26030273

Authors: Jean-Pierre Gazeau

Currently, there is no widely accepted consensus regarding a consistent thermodynamic framework within the special relativity paradigm. However, by postulating that the inverse temperature 4-vector, denoted as &beta;, is future-directed and time-like, intriguing insights emerge. Specifically, it is demonstrated that the q-dependent Tsallis distribution can be conceptualized as a de Sitterian deformation of the relativistic Maxwell&ndash;J&uuml;ttner distribution. In this context, the curvature of the de Sitter space-time is characterized by &Lambda;/3, where &Lambda; represents the cosmological constant within the &Lambda;CDM standard model for cosmology. For a simple gas composed of particles with proper mass m, and within the framework of quantum statistical de Sitterian considerations, the Tsallis parameter q exhibits a dependence on the cosmological constant given by q=1+&#8467;c&Lambda;/n, where &#8467;c=&#8463;/mc is the Compton length of the particle and n is a positive numerical factor, the determination of which awaits observational confirmation. This formulation establishes a novel connection between the Tsallis distribution, quantum statistics, and the cosmological constant, shedding light on the intricate interplay between relativistic thermodynamics and fundamental cosmological parameters.

]]>Entropy doi: 10.3390/e26030272

Authors: Longwen Zhou

The intricate interplay between unitary evolution and projective measurements could induce entanglement phase transitions in the nonequilibrium dynamics of quantum many-particle systems. In this work, we uncover loss-induced entanglement transitions in non-Hermitian topological superconductors. In prototypical Kitaev chains with onsite particle losses and varying hopping and pairing ranges, the bipartite entanglement entropy of steady states is found to scale logarithmically versus the system size in topologically nontrivial phases and become independent of the system size in the trivial phase. Notably, the scaling coefficients of log-law entangled phases are distinguishable when the underlying system resides in different topological phases. Log-law to log-law and log-law to area-law entanglement phase transitions are further identified when the system switches between different topological phases and goes from a topologically nontrivial to a trivial phase, respectively. These findings not only establish the relationships among spectral, topological and entanglement properties in a class of non-Hermitian topological superconductors but also provide an efficient means to dynamically reveal their distinctive topological features.

]]>Entropy doi: 10.3390/e26030271

Authors: André F. C. Gomes Mário A. T. Figueiredo

The partial information decomposition (PID) framework is concerned with decomposing the information that a set of (two or more) random variables (the sources) has about another variable (the target) into three types of information: unique, redundant, and synergistic. Classical information theory alone does not provide a unique way to decompose information in this manner and additional assumptions have to be made. One often overlooked way to achieve this decomposition is using a so-called measure of union information&mdash;which quantifies the information that is present in at least one of the sources&mdash;from which a synergy measure stems. In this paper, we introduce a new measure of union information based on adopting a communication channel perspective, compare it with existing measures, and study some of its properties. We also include a comprehensive critical review of characterizations of union information and synergy measures that have been proposed in the literature.

Entropy doi: 10.3390/e26030270

Authors: Anna Bertani Valeria Mazzeo Riccardo Gallotti

In the digital era, information consumption is predominantly channeled through online news media and disseminated on social media platforms. Understanding the complex dynamics of the news media environment and users' habits within the digital ecosystem is a challenging task that requires, at the same time, large databases and accurate methodological approaches. This study contributes to this expanding research landscape by employing network science methodologies and entropic measures to analyze the behavioral patterns of social media users sharing news pieces and to dig into the diverse news consumption habits of different online social media user groups. Our analyses reveal that users are more inclined to share news classified as fake when they have previously posted conspiracy or junk science content and vice versa, creating a series of "misinformation hot streaks". To better understand these dynamics, we used three different measures of entropy to gain insights into the news media habits of each user, finding that the patterns of news consumption significantly differ among users when focusing on disinformation spreaders as opposed to accounts sharing reliable or low-risk content. Thanks to these entropic measures, we quantify the variety and the regularity of the news media diet, finding that those disseminating unreliable content exhibit a more varied and, at the same time, a more regular choice of web domains. This quantitative insight into the nuances of news consumption behaviors exhibited by disinformation spreaders holds the potential to significantly inform the strategic formulation of more robust and adaptive social media moderation policies.
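The "variety" of a news media diet can be illustrated with Shannon entropy over a user's distribution of shared web domains. The following is a minimal sketch of that idea (the paper's three specific entropy measures are not reproduced here; the domain names are made up):

```python
import math
from collections import Counter

def diet_entropy(domains):
    """Shannon entropy (bits) of a user's shared-domain distribution;
    higher values indicate a more varied news diet."""
    counts = Counter(domains)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A user concentrating on one outlet has low variety...
print(diet_entropy(["siteA"] * 9 + ["siteB"]))   # ~0.469 bits
# ...while an even spread over four outlets has high variety.
print(diet_entropy(["a", "b", "c", "d"] * 5))    # 2.0 bits
```

Regularity, by contrast, would look at the temporal ordering of shares rather than their marginal distribution.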

Entropy doi: 10.3390/e26030269

Authors: Haiju Fan Jinsong Wang

Recent studies on watermarking techniques based on image carriers have demonstrated new approaches that combine adversarial perturbations against steganalysis with embedding distortions. However, while these methods successfully counter convolutional neural network-based steganalysis, they do not adequately protect the data of the carrier itself. Recognizing the high sensitivity of Deep Neural Networks (DNNs) to small perturbations, we propose HAG-NET, a method based on image carriers, which is jointly trained by the encoder, decoder, and attacker. In this paper, the encoder generates Adversarial Steganographic Examples (ASEs) that are adversarial to the target classification network, thereby providing protection for the carrier data. Additionally, the decoder can recover secret data from ASEs. The experimental results demonstrate that ASEs produced by HAG-NET achieve an average success rate of over 99% on both the MNIST and CIFAR-10 datasets. ASEs generated with the attacker exhibit greater robustness in terms of attack ability, with an average increase of about 3.32%. Furthermore, our method, when compared with other generative stego examples under similar perturbation strength, contains significantly more information according to image information entropy measurements.

Entropy doi: 10.3390/e26030268

Authors: Chao Zhao Ali Al-Bashabsheh Chung Chan

We address the challenge of identifying meaningful communities by proposing a model based on convex game theory and a measure of community strength. Many existing community detection methods fail to provide unique solutions, and it remains unclear how the solutions depend on initial conditions. Our approach identifies strong communities with a hierarchical structure, visualizable as a dendrogram, and computable in polynomial time using submodular function minimization. This framework extends beyond graphs to hypergraphs or even polymatroids. When the model is graphical, a more efficient algorithm based on the max-flow min-cut algorithm can be devised. Though our algorithms do not achieve near-linear time complexity, the pursuit of practical algorithms is an intriguing avenue for future research. Our work serves as the foundation, offering an analytical framework that yields unique solutions with clear operational meaning for the communities identified.

Entropy doi: 10.3390/e26030267

Authors: Karl Svozil

Space-time in quantum mechanics is about bridging Hilbert and configuration space. Thereby, an entirely new perspective is obtained by replacing the Newtonian space-time theater with the image of a presumably high-dimensional Hilbert space, through which space-time becomes an epiphenomenon construed by internal observers.

Entropy doi: 10.3390/e26030266

Authors: Henrik Jeldtoft Jensen Piergiulio Tempesta

Entropy can signify different things. For instance, heat transfer in thermodynamics or a measure of information in data analysis. Many entropies have been introduced, and it can be difficult to ascertain their respective importance and merits. Here, we consider entropy in an abstract sense, as a functional on a probability space, and we review how being able to handle the trivial case of non-interacting systems, together with the subtle requirement of extensivity, allows for a systematic classification of the functional form.
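The composition requirement over non-interacting systems can be made concrete with two standard examples: the additive Boltzmann-Gibbs functional and the non-additive Tsallis functional (composition rules as usually stated, with Boltzmann constant k):

```latex
% Boltzmann--Gibbs entropy is additive over independent systems A and B:
S_{BG}[p] = -k \sum_i p_i \ln p_i, \qquad
S_{BG}(A \times B) = S_{BG}(A) + S_{BG}(B).
% Tsallis entropy composes with a correction term (non-additive for q \neq 1):
S_q[p] = \frac{k}{q-1}\Bigl(1 - \sum_i p_i^q\Bigr), \qquad
S_q(A \times B) = S_q(A) + S_q(B) + (1-q)\,S_q(A)\,S_q(B)/k.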

Entropy doi: 10.3390/e26030265

Authors: Rubén Gómez González Vicente Garzó

The Boltzmann kinetic equation for dilute granular suspensions under simple (or uniform) shear flow (USF) is considered to determine the non-Newtonian transport properties of the system. In contrast to previous attempts based on a coarse-grained description, our suspension model accounts for the real collisions between grains and particles of the surrounding molecular gas. The latter is modeled as a bath (or thermostat) of elastic hard spheres at a given temperature. Two independent but complementary approaches are followed to reach exact expressions for the rheological properties. First, the Boltzmann equation for the so-called inelastic Maxwell models (IMM) is considered. The fact that the collision rate of IMM is independent of the relative velocity of the colliding spheres allows us to exactly compute the collisional moments of the Boltzmann operator without knowledge of the distribution function. Thanks to this property, the transport properties of the sheared granular suspension can be exactly determined. As a second approach, a Bhatnagar–Gross–Krook (BGK)-type kinetic model adapted to granular suspensions is solved to compute the velocity moments and the velocity distribution function of the system. The theoretical results (which are given in terms of the coefficient of restitution, the reduced shear rate, the reduced background temperature, and the diameter and mass ratios) show, in general, a good agreement with the approximate analytical results derived for inelastic hard spheres (IHS) by means of Grad's moment method and with computer simulations performed in the Brownian limiting case (m/mg → ∞, where mg and m are the masses of the particles of the molecular and granular gases, respectively). In addition, as expected, the IMM and BGK results show that the temperature and non-Newtonian viscosity exhibit an S shape in the stress–strain rate plane (discontinuous shear thickening, DST). The DST effect becomes more pronounced as the mass ratio m/mg increases.

Entropy doi: 10.3390/e26030264

Authors: Amira Val Baker Mate Csanad Nicolas Fellas Nour Atassi Ia Mgvdliashvili Paul Oomen

In general, sound waves propagate radially outwards from a point source. These waves will continue in the same direction, decreasing in intensity, unless a boundary condition is met. To arrive at a universal understanding of the relation between frequency and wave propagation within spatial boundaries, we explore the maximum entropy states that are realized as resonant modes. For both circular and polygonal Chladni plates, a model is presented that successfully recreates the nodal line patterns to a first approximation. We discuss the benefits of such a model and the future work necessary to develop the model to its full predictive ability.

Entropy doi: 10.3390/e26030263

Authors: Arash Edrisi Hamza Patwa Jose A. Morales Escalante

Kinetic theory provides modeling of open quantum systems subject to Markovian noise via the Wigner–Fokker–Planck equation, which is an alternative to the Lindblad master equation setting, having the advantage of great physical intuition as it is the quantum equivalent of the classical phase space description. We perform a numerical inspection of the Wehrl entropy for the benchmark problem of a harmonic potential, since the existence of a steady state and its analytical formula have been proven theoretically in this case. When there is friction in the noise terms, no theoretical results on the monotonicity of absolute entropy are available. We provide numerical results of the time evolution of the entropy in the case with friction using a stochastic (Euler–Maruyama-based Monte Carlo) numerical solver. For all the chosen initial conditions studied (all of them Gaussian states), up to the inherent numerical error of the method, one cannot disregard the possibility of monotonic behavior even in the case under study, where the noise includes friction terms.
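As a rough illustration of the Euler-Maruyama approach (not the authors' Wigner-function solver, and with hypothetical parameter values), one can integrate the Langevin dynamics of a particle in a harmonic potential with friction and Gaussian noise:

```python
import math
import random

def euler_maruyama(x0, v0, dt=1e-3, steps=5000, omega=1.0, gamma=0.1, D=0.1, seed=0):
    """Euler-Maruyama integration of
    dx = v dt,  dv = (-omega^2 x - gamma v) dt + sqrt(2 D) dW,
    i.e. a harmonic potential with friction and Markovian noise."""
    rng = random.Random(seed)
    x, v = x0, v0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment over one step
        x, v = x + v * dt, v + (-omega ** 2 * x - gamma * v) * dt + math.sqrt(2 * D) * dW
    return x, v

x, v = euler_maruyama(1.0, 0.0)
print(x, v)  # one stochastic trajectory endpoint
```

A Monte Carlo estimate of an entropy would average a suitable functional over many such seeded trajectories.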

Entropy doi: 10.3390/e26030262

Authors: Siyi Xu Wenwen Liu Chengpei Wu Junli Li

The No Free Lunch Theorem tells us that no algorithm can beat other algorithms on all types of problems. The algorithm selection structure is proposed to select the most suitable algorithm from a set of algorithms for an unknown optimization problem. This paper introduces an innovative algorithm selection approach called CNN-HT, a two-stage algorithm selection framework. In the first stage, a Convolutional Neural Network (CNN) is employed to classify problems. In the second stage, the Hypothesis Testing (HT) technique is used to suggest the best-performing algorithm based on statistical analysis of the performance metric of algorithms that address the various problem categories. The two-stage approach can adapt to different algorithm combinations without the need to retrain the entire model, since modifications can be made in the second stage alone, which is an improvement over one-stage approaches. To provide a more general structure for the classification model, we adopt Exploratory Landscape Analysis (ELA) features of the problem as input and utilize feature selection techniques to remove redundant ones. In problem classification, the average accuracy of classifying problems using the CNN is 96%, which demonstrates the advantages of the CNN compared to Random Forest and Support Vector Machines. After feature selection, the accuracy increases to 98.8%, further improving the classification performance while reducing the computational cost. This demonstrates the effectiveness of the first stage of CNN-HT, which provides a basis for algorithm selection. In the experiments, CNN-HT demonstrates the advantage of its second stage as well as good overall performance, achieving better average rankings across different algorithm combinations than the individual algorithms and another algorithm combination approach.
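The two-stage idea can be sketched in a few lines. Everything here is hypothetical: the classifier is a stub standing in for the CNN on ELA features, the performance table stands in for the hypothesis-testing stage, and the algorithm names and numbers are invented:

```python
# Performance table: mean error of each algorithm per problem class (made up).
PERFORMANCE = {
    "unimodal":   {"CMA-ES": 0.02, "DE": 0.05, "PSO": 0.04},
    "multimodal": {"CMA-ES": 0.30, "DE": 0.12, "PSO": 0.25},
}

def classify(problem_features):
    """Stage 1 stub: a trained CNN would map ELA features to a problem class."""
    return "multimodal" if problem_features.get("n_local_optima", 1) > 1 else "unimodal"

def select_algorithm(problem_features):
    """Stage 2: recommend the best-recorded algorithm for the predicted class."""
    cls = classify(problem_features)
    table = PERFORMANCE[cls]
    return min(table, key=table.get)

print(select_algorithm({"n_local_optima": 7}))  # DE
```

Swapping in a different algorithm portfolio only requires updating the stage-2 table, which is the adaptability the paper highlights.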

Entropy doi: 10.3390/e26030261

Authors: Sergey Il’ich Kruglov

We study Einstein's gravity coupled to nonlinear electrodynamics with two parameters in anti-de Sitter spacetime. Magnetically charged black holes in an extended phase space are investigated. We obtain the mass and metric functions, their asymptotics, and the corrections to the Reissner–Nordström metric function when the cosmological constant vanishes. The first law of black hole thermodynamics in an extended phase space is formulated, and the magnetic potential and the thermodynamic conjugate to the coupling are obtained. We prove the generalized Smarr relation. The heat capacity and the Gibbs free energy are computed and the phase transitions are studied. It is shown that the electric fields of charged objects at the origin and the electrostatic self-energy are finite within the nonlinear electrodynamics proposed.

Entropy doi: 10.3390/e26030260

Authors: Márcio S. Gomes-Filho Pablo de Castro Danilo B. Liarte Fernando A. Oliveira

The Kardar–Parisi–Zhang (KPZ) equation describes a wide range of growth-like phenomena, with applications in physics, chemistry and biology. There are three central questions in the study of KPZ growth: the determination of height probability distributions; the search for ever more precise universal growth exponents; and the apparent absence of a fluctuation–dissipation theorem (FDT) for spatial dimension d > 1. Notably, these questions were answered exactly only for 1+1 dimensions. In this work, we propose a new FDT valid for the KPZ problem in d+1 dimensions. This is achieved by rearranging terms and identifying a new correlated noise which we argue to be characterized by a fractal dimension dn. We present relations between the KPZ exponents and two emergent fractal dimensions, namely df, of the rough interface, and dn. Also, we simulate KPZ growth to obtain values for transient versions of the roughness exponent α, the surface fractal dimension df and, through our relations, the noise fractal dimension dn. Our results indicate that KPZ may have at least two fractal dimensions and that, within this proposal, an FDT is restored. Finally, we provide new insights into the old question about the upper critical dimension of the KPZ universality class.
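For reference, any relations between exponents and fractal dimensions must be consistent with the standard KPZ scaling laws, which are exactly known in 1+1 dimensions:

```latex
% Standard KPZ scaling relations (the second follows from Galilean
% invariance) and the exactly known 1+1-dimensional exponents:
z = \frac{\alpha}{\beta}, \qquad \alpha + z = 2, \qquad
\text{for } d = 1:\; \alpha = \tfrac{1}{2},\; \beta = \tfrac{1}{3},\; z = \tfrac{3}{2}.
```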

Entropy doi: 10.3390/e26030259

Authors: Alejandro J. Rojas

In this work, we consider the design of power-constrained networked control systems (NCSs) and a differential entropy-based fault-detection mechanism. For the NCS design of the control loop, we consider faults in the plant gain and unstable plant pole locations, either due to natural causes or malicious intent. Since the power-constrained approach utilized in the NCS design is a stationary approach, we then discuss the finite-time approximation of the power constraints for the relevant control loop signals. The network under study is formed by two additive white Gaussian noise (AWGN) channels located on the direct and feedback paths of the closed control loop. The finite-time approximation of the controller output signal allows us to estimate its differential entropy, which is used in our proposed fault-detection mechanism. After fault detection, we propose a fault-identification mechanism that is capable of correctly discriminating faults. Finally, we discuss the extension of the contributions developed here to future research directions, such as fault recovery and control resilience.
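The core of an entropy-based detector can be illustrated with a toy sketch: estimate the differential entropy of the controller output from a finite-time record under a Gaussian assumption, and flag a fault when it drifts from its nominal value. The Gaussian assumption, the threshold, and the fault model (an inflated signal power) are illustrative choices, not the paper's mechanism:

```python
import math
import random

def gaussian_diff_entropy(samples):
    """Differential entropy (nats) under a Gaussian assumption,
    h = 0.5 * ln(2*pi*e*var), estimated from a finite-time record."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

def fault_alarm(samples, nominal_entropy, threshold=0.5):
    """Flag a fault when the estimated entropy drifts from its nominal value."""
    return abs(gaussian_diff_entropy(samples) - nominal_entropy) > threshold

rng = random.Random(1)
nominal = gaussian_diff_entropy([rng.gauss(0, 1) for _ in range(10000)])
faulty = [rng.gauss(0, 5) for _ in range(10000)]  # a gain fault inflates signal power
print(fault_alarm(faulty, nominal))  # True
```

Fault identification would then compare the direction and magnitude of the entropy shift against signatures of the candidate faults.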

Entropy doi: 10.3390/e26030258

Authors: Yutaro Yamada Fred Weiying Zhang Yuval Kluger Ilker Yildirim

Ensuring robustness of image classifiers against adversarial attacks and spurious correlations has been challenging. One of the most effective methods for adversarial robustness is a type of data augmentation that uses adversarial examples during training. Here, inspired by computational models of human vision, we explore a synthesis of this approach by leveraging a structured prior over image formation: the 3D geometry of objects and how it projects to images. We combine adversarial training with a weight initialization that implicitly encodes such a prior about 3D objects via 3D reconstruction pre-training. We evaluate our approach using two different datasets and compare it to alternative pre-training protocols that do not encode a prior about 3D shape. To systematically explore the effect of 3D pre-training, we introduce a novel dataset called Geon3D, which consists of simple shapes that nevertheless capture variation in multiple distinct dimensions of geometry. We find that while 3D reconstruction pre-training does not improve robustness in the simplest dataset setting we consider (Geon3D on a clean background), it improves upon adversarial training in more realistic conditions (Geon3D with textured background and ShapeNet). We also find that 3D pre-training coupled with adversarial training improves the robustness to spurious correlations between shape and background textures. Furthermore, we show that 3D-based pre-training outperforms 2D-based pre-training on ShapeNet. We hope that these results encourage further investigation of the benefits of structured, 3D-based models of vision for adversarial robustness.

Entropy doi: 10.3390/e26030257

Authors: Lucas Maquedano Ana C. S. Costa

The effect of quantum steering describes a possible action at a distance via local measurements. In the last few years, several criteria have been proposed to detect this type of correlation in quantum systems. However, few approaches have been presented to measure the degree of steerability of a given system. In this work, we are interested in investigating possible ways to quantify quantum steering, basing our analysis on different criteria presented in the literature.

Entropy doi: 10.3390/e26030256

Authors: Beatriz Arregui-García Antonio Longa Quintino Francesco Lotito Sandro Meloni Giulia Cencetti

The analysis of complex and time-evolving interactions, such as those within social dynamics, represents a current challenge in the science of complex systems. Temporal networks stand as a suitable tool for schematizing such systems, encoding all the interactions appearing between pairs of individuals in discrete time. Over the years, network science has developed many measures to analyze and compare temporal networks. Some of them imply a decomposition of the network into small pieces of interactions; i.e., only involving a few nodes for a short time range. Along this line, a possible way to decompose a network is to assume an egocentric perspective; i.e., to consider for each node the time evolution of its neighborhood. This was proposed by Longa et al. by defining the "egocentric temporal neighborhood", which has proven to be a useful tool for characterizing temporal networks relative to social interactions. However, this definition neglects group interactions (quite common in social domains), as they are always decomposed into pairwise connections. A more general framework that also allows considering larger interactions is represented by higher-order networks. Here, we generalize the description of social interactions to hypergraphs. Consequently, we generalize their decomposition into "hyper egocentric temporal neighborhoods". This enables the analysis of social interactions, facilitating comparisons between different datasets or nodes within a dataset, while considering the intrinsic complexity presented by higher-order interactions. Even if we limit the order of interactions to the second order (triplets of nodes), our results reveal the importance of a higher-order representation. In fact, our analyses show that second-order structures are responsible for the majority of the variability at all scales: between datasets, amongst nodes, and over time.
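The egocentric decomposition can be sketched with a toy example: represent each timestamped interaction as a hyperedge (a set of nodes), and collect, for one ego node, its co-participants at each time step. Keeping the hyperedges whole, instead of splitting them into pairs, is what distinguishes the hyper egocentric view. The data and function name below are illustrative:

```python
from collections import defaultdict

def hyper_ego_neighborhood(hyperedges, ego):
    """hyperedges: iterable of (time, set_of_nodes).
    Returns {time: [partner sets]} for the given ego node; group
    interactions stay intact as sets rather than pairwise edges."""
    ego_view = defaultdict(list)
    for t, members in hyperedges:
        if ego in members:
            ego_view[t].append(frozenset(members) - {ego})
    return dict(ego_view)

events = [(0, {"a", "b"}), (0, {"a", "c", "d"}), (1, {"b", "c"}), (1, {"a", "b", "c"})]
print(hyper_ego_neighborhood(events, "a"))
# at time 0, node "a" has a pairwise partner {b} and a group partner set {c, d}
```

Comparing two nodes (or datasets) then reduces to comparing such time-indexed collections of partner sets.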

Entropy doi: 10.3390/e26030255

Authors: J. Gerhard Müller

It is argued that all physical knowledge ultimately stems from observation and that the simplest possible observation is that an event has happened at a certain space–time location X = (x, t). Considering historic experiments, which have been groundbreaking in the evolution of our modern ideas of matter on the atomic, nuclear, and elementary particle scales, it is shown that such experiments produce as outputs streams of macroscopically observable events which accumulate in the course of time into spatio-temporal patterns of events whose forms allow decisions to be taken concerning conceivable alternatives of explanation. Working towards elucidating the physical and informational characteristics of those elementary observations, we show that these represent hugely amplified images of the initiating micro-events and that the resulting macro-images have a cognitive value of 1 bit and a physical value of W_obs = E_obs·τ_obs ≫ h. In this latter equation, E_obs stands for the energy spent in turning the initiating micro-events into macroscopically observable events, τ_obs for the lifetimes during which the generated events remain macroscopically observable, and h for Planck's constant. The relative value G_obs = W_obs/h finally represents a measure of the amplification that was gained in the observation process.
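The amplification measure is simple arithmetic once E_obs and τ_obs are chosen. The detector figures below are hypothetical, purely to show the scale of G_obs; only Planck's constant is a fixed physical value:

```python
# W_obs = E_obs * tau_obs is the "physical value" of an observation,
# and G_obs = W_obs / h its amplification relative to Planck's constant.
H_PLANCK = 6.62607015e-34  # J*s (exact SI value)

E_obs = 1e-9    # energy spent making the event macroscopic (hypothetical, 1 nJ)
tau_obs = 1e-3  # time the event stays observable (hypothetical, 1 ms)
W_obs = E_obs * tau_obs
G_obs = W_obs / H_PLANCK
print(f"W_obs = {W_obs:.1e} J*s, G_obs = {G_obs:.1e}")
```

Even for these modest detector numbers, W_obs exceeds h by roughly twenty-one orders of magnitude, which is the sense in which W_obs ≫ h.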

Entropy doi: 10.3390/e26030254

Authors: Yuli Yang Ruiyun Chang Xiufang Feng Peizhen Li Yongle Chen Hao Zhang

The drawbacks of a one-dimensional chaotic map are its straightforward structure, abrupt intervals, and ease of signal prediction. Richer performance and a more complicated structure are required for multidimensional chaotic mapping. To address the shortcomings of current chaotic systems, an n-dimensional cosine-transform-based chaotic system (nD-CTBCS) with a chaotic coupling model is suggested in this study. To create chaotic maps of any desired dimension, nD-CTBCS can take advantage of already-existing 1D chaotic maps as seed chaotic maps. Three two-dimensional chaotic maps are provided as examples to illustrate the impact. The findings of the evaluation and experiments demonstrate that the newly created chaotic maps function better, have broader chaotic intervals, and display hyperchaotic behavior. To further demonstrate the practicability of nD-CTBCS, a reversible data hiding scheme is proposed for the secure communication of medical images. The experimental results show that the proposed method has higher security than the existing methods.
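The seed-map idea can be sketched as follows. This is a hypothetical cosine-transform coupling of two logistic seed maps, written only to illustrate how a 1D seed map can generate a bounded 2D map; the actual nD-CTBCS construction in the paper may differ:

```python
import math

def logistic(x, r=4.0):
    """Classic 1D logistic map, used here as the seed chaotic map."""
    return r * x * (1 - x)

def cosine_coupled_2d(x, y, steps=1):
    """Hypothetical 2D map: couple two logistic seeds and pass the result
    through a cosine, which keeps every trajectory bounded in [-1, 1]."""
    for _ in range(steps):
        x, y = math.cos(math.pi * (logistic(abs(x)) + y)), \
               math.cos(math.pi * (logistic(abs(y)) + x))
    return x, y

x, y = cosine_coupled_2d(0.3, 0.7, steps=1000)
print(-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0)  # True
```

The cosine transform guarantees boundedness regardless of the seed map, which is one reason such constructions can widen chaotic intervals.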

Entropy doi: 10.3390/e26030253

Authors: Lou Zhao Yuliang Zhang Minjie Zhang Chunshan Liu

Millimeter-wave (mmWave) communication systems leverage the directional beamforming capabilities of antenna arrays equipped at the base stations (BS) to counteract the high propagation path loss inherent to mmWave channels. In downlink mmWave transmissions, i.e., from the BS to users, distinguishing users within the same beam direction poses a significant challenge. Additionally, digital baseband precoding techniques are limited in their ability to mitigate inter-user interference within identical beam directions, representing a fundamental constraint in mmWave downlink transmissions. This study introduces an innovative analog beamforming-based interference mitigation strategy for downlink transmissions in reconfigurable intelligent surface (RIS)-assisted hybrid analog–digital (HAD) mmWave systems. This is achieved through the joint design of analog beamformers and the corresponding coefficients at both the RIS and the BS. We first present derived closed-form approximate expressions for the achievable rate in the proposed scenario and establish a stringent upper bound on this performance in the regime of a large number of RIS elements. The exclusive use of analog beamforming in the downlink phase allows our proposed transmission algorithm to function efficiently when equipped with low-resolution analog-to-digital/digital-to-analog converters (A/Ds) at the BS. The energy efficiency of the downlink transmission is evaluated through the deployment of six-bit A/Ds and six-bit pulse-amplitude modulation (PAM) signals across varying numbers of activated RIS elements. Numerical simulation results validate the effectiveness of our proposed algorithms in comparison to various benchmark schemes.

Entropy doi: 10.3390/e26030252

Authors: Ravid Shwartz Ziv Yann LeCun

Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory has shaped deep neural networks, particularly the information bottleneck principle. This principle optimizes the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work, we scrutinize various self-supervised learning approaches from an information-theoretic perspective, introducing a unified framework that encapsulates the self-supervised information-theoretic learning problem. This framework includes multiple encoders and decoders, suggesting that all existing work on self-supervised learning can be seen as specific instances. We aim to unify these approaches to understand their underlying principles better and address the main challenge: many works present different frameworks with differing theories that may seem contradictory. By weaving existing research into a cohesive narrative, we delve into contemporary self-supervised methodologies, spotlight potential research areas, and highlight inherent challenges. Moreover, we discuss how to estimate information-theoretic quantities and their associated empirical problems. Overall, this paper provides a comprehensive review of the intersection of information theory, self-supervised learning, and deep neural networks, aiming for a better understanding through our proposed unified approach.
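The information bottleneck principle mentioned above can be stated compactly: a representation T of input X is optimized to compress X while retaining information about a target Y, with a multiplier β trading off the two terms:

```latex
% Information bottleneck Lagrangian: compress X into the representation T
% while preserving information about the target Y.
\min_{p(t \mid x)} \; \mathcal{L}_{IB} = I(X;T) - \beta\, I(T;Y)
```

The self-supervised setting replaces the labeled target Y with another view or augmentation of the data, which is what the unified framework in the paper formalizes.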

Entropy doi: 10.3390/e26030251

Authors: Lucas Alonso Guilherme C. Matos François Impens Paulo A. Maia Neto Reinaldo de Melo e Souza

A mirror subjected to a fast mechanical oscillation emits photons out of the quantum vacuum, a phenomenon known as the dynamical Casimir effect (DCE). The mirror is usually treated as an infinite metallic surface. Here, we show that, in realistic experimental conditions (mirror size and oscillation frequency), this assumption is inadequate and drastically overestimates the DCE radiation. Taking the opposite limit, we use instead the dipolar approximation to obtain a simpler and more realistic treatment of DCE for macroscopic bodies. Our approach is inspired by a microscopic theory of DCE, which is extended to the macroscopic realm by a suitable effective Hamiltonian description of moving anisotropic scatterers. We illustrate the benefits of our approach by considering the DCE from macroscopic bodies of different geometries.

Entropy doi: 10.3390/e26030250

Authors: Wuqu Wang Zhe Tao Nan Liu Wei Kang

D2D coded caching, originally introduced by Ji, Caire, and Molisch, significantly improves communication efficiency by applying the multi-cast technology proposed by Maddah-Ali and Niesen to the D2D network. Most prior works on D2D coded caching are based on the assumption that all users will request content at the beginning of the delivery phase. However, in practice, this is often not the case. Motivated by this consideration, this paper formulates a new problem called request-robust D2D coded caching. The considered problem includes K users and a content server with access to N files. Only r users, known as requesters, request a file each at the beginning of the delivery phase. The objective is to minimize the average and worst-case delivery rate, i.e., the average and worst-case number of broadcast bits from all users among all possible demands. For this novel D2D coded caching problem, we propose a scheme based on uncoded cache placement and exploiting common demands and one-shot delivery. We also propose information-theoretic converse results under the assumption of uncoded cache placement. Furthermore, we adapt the scheme proposed by Yapar et al. for uncoded cache placement and one-shot delivery to the request-robust D2D coded caching problem and prove that the performance of the adapted scheme is order optimal within a factor of two under uncoded cache placement and within a factor of four in general. Finally, through numerical evaluations, we show that the proposed scheme outperforms known D2D coded caching schemes applied to the request-robust scenario for most cache size ranges.

Entropy doi: 10.3390/e26030249

Authors: Abhisek Chakraborty Anirban Bhattacharya Debdeep Pati

We commonly encounter the problem of identifying an optimally weight-adjusted version of the empirical distribution of observed data, adhering to predefined constraints on the weights. Such constraints often manifest as restrictions on the moments, tail behavior, shapes, number of modes, etc., of the resulting weight-adjusted empirical distribution. In this article, we substantially enhance the flexibility of such a methodology by introducing a nonparametrically imbued distributional constraint on the weights and developing a general framework leveraging the maximum entropy principle and tools from optimal transport. The key idea is to ensure that the maximum entropy weight-adjusted empirical distribution of the observed data is close to a pre-specified probability distribution in terms of the optimal transport metric, while allowing for subtle departures. The proposed scheme for the re-weighting of observations subject to constraints is reminiscent of the empirical likelihood and related ideas, but offers greater flexibility in applications where parametric distribution-guided constraints arise naturally. The versatility of the proposed framework is demonstrated in the context of three disparate applications where data re-weighting is warranted to satisfy side constraints on the optimization problem at the heart of the statistical task, namely, portfolio allocation, semi-parametric inference for complex surveys, and ensuring algorithmic fairness in machine learning algorithms.
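The classical building block, without the paper's optimal-transport extension, is the moment-constrained maximum entropy re-weighting of observed points, whose solution is an exponential tilt of the uniform weights. A minimal sketch (the bisection bracket and data are illustrative):

```python
import math

def max_entropy_weights(xs, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Weights on observed points xs that maximize entropy subject to a
    weighted-mean constraint; the maximizer is an exponential tilt
    w_i ∝ exp(lam * x_i), with lam found by bisection (the tilted mean
    is monotone increasing in lam)."""
    def tilted_mean(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        if tilted_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

xs = [0.0, 1.0, 2.0, 3.0]
ws = max_entropy_weights(xs, target_mean=2.0)
print(round(sum(w * x for w, x in zip(ws, xs)), 6))  # 2.0
```

The paper replaces the hard moment constraint with a nonparametric closeness requirement to a reference distribution in the optimal transport metric.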

Entropy doi: 10.3390/e26030248

Authors: Peng Peng Tianlong Fan Linyuan Lü

Diverse higher-order structures, foundational for supporting a network's "meta-functions", play a vital role in structure, functionality, and the emergence of complex dynamics. Nevertheless, the problem of dismantling them has been consistently overlooked. In this paper, we introduce the concept of dismantling higher-order structures, with the objective of disrupting not only network connectivity but also eradicating all higher-order structures in each branch, thereby ensuring thorough functional paralysis. Given the diversity and unknown specifics of higher-order structures, identifying and targeting them individually is not practical or even feasible. Fortunately, their close association with k-cores arises from their internal high connectivity. Thus, we transform higher-order structure measurement into measurements on k-cores with corresponding orders. Furthermore, we propose the Belief Propagation-guided Higher-order Dismantling (BPHD) algorithm, minimizing dismantling costs while achieving maximal disruption to connectivity and higher-order structures, ultimately converting the network into a forest. BPHD exhibits the explosive vulnerability of network higher-order structures, counterintuitively showcasing decreasing dismantling costs with increasing structural complexity. Our findings offer a novel approach for dismantling malignant networks, emphasizing the substantial challenges inherent in safeguarding against such malicious attacks.
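The k-core identification step that underlies the measurement (not the belief-propagation dismantling itself) is the standard peeling procedure, sketched below on a toy graph:

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the node set of the k-core: repeatedly peel nodes of degree < k
    until every remaining node has at least k remaining neighbors."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if len(adj[node]) < k:
                for nb in adj.pop(node):  # remove node and its incident edges
                    if nb in adj:
                        adj[nb].discard(node)
                changed = True
    return set(adj)

# A triangle plus a pendant node: the 2-core is the triangle.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(k_core(edges, 2))  # {'a', 'b', 'c'}
```

Dismantling higher-order structures then amounts to choosing removals that empty the relevant k-cores at minimal cost.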
