Entropy, Volume 24, Issue 10 (October 2022) – 184 articles

Cover Story (view full-size image): Quantum physics, in its current formulation, is time-reversible. We propose a definition of quantum entropy that accounts for uncertainty in the specification of a quantum state and the probabilistic nature of its observables. This entropy increases monotonically during a time evolution of fermions in coherent states. We review some physical scenarios believed to demonstrate the reversibility of quantum physics, and we show that the entropy increases. We hypothesize that the entropy of a closed system never decreases, i.e., that randomness cannot be reduced, thus inducing a time arrow in quantum physics. We also suggest that creations and annihilations of particles are triggered to bar the entropy from decreasing. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 2136 KiB  
Article
Unveiling Latent Structure of Venture Capital Syndication Networks
by Weiwei Gu, Ao Yang, Lingyun Lu and Ruiqi Li
Entropy 2022, 24(10), 1506; https://doi.org/10.3390/e24101506 - 21 Oct 2022
Cited by 2 | Viewed by 1879
Abstract
Venture capital (VC) is a form of private equity financing provided by VC institutions to startups with high growth potential due to innovative technology or novel business models, but also with high risks. To guard against uncertainties and to benefit from mutual complementarity and the sharing of resources and information, joint investments with other VC institutions in the same startup are pervasive, forming an ever-growing complex syndication network. Attaining objective classifications of VC institutions and revealing the latent structure of their joint-investment behaviors can deepen our understanding of the VC industry and boost the healthy development of the market and economy. In this work, we devise an iterative Loubar method based on the Lorenz curve to classify VC institutions objectively and automatically, without requiring arbitrary thresholds or a preset number of categories. We further reveal distinct investment behaviors across categories, where the top-ranked group enters more industries and investment stages with a better performance. Through network embedding of joint-investment relations, we unveil the existence of possible territories of top-ranked VC institutions, and the hidden structure of relations between VC institutions. Full article
(This article belongs to the Special Issue Complex Network Analysis in Econometrics)
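
The iterative Loubar idea can be sketched in a few lines. This is an illustrative reading of the method, not the authors' code (function names and the stopping rule are assumptions): the tangent to the Lorenz curve at (1, 1) has slope max/mean, so it crosses the horizontal axis at F* = 1 − mean/max; items ranked above F* form the current top category, and iterating on the remainder yields categories without any hand-set thresholds.

```python
def loubar_split(values):
    """One Loubar step on the Lorenz curve of `values`: the tangent at
    (1, 1) has slope max/mean, so its x-intercept is F* = 1 - mean/max.
    Items ranked above F* form the current head category."""
    vals = sorted(values)
    mean = sum(vals) / len(vals)
    f_star = 1.0 - mean / vals[-1]
    cut = int(f_star * len(vals))
    return vals[cut:], vals[:cut]      # (head category, remainder)

def iterative_loubar(values):
    """Peel off head categories until nothing is left; the number of
    categories emerges from the data rather than being preset."""
    groups, rest = [], list(values)
    while rest:
        head, rest = loubar_split(rest)
        groups.append(head)
    return groups

# e.g. six institutions ranked by investment counts
groups = iterative_loubar([10, 9, 1, 1, 1, 1])
```

Each split is guaranteed to peel off at least one item (F* < 1), so the loop terminates with a data-driven number of groups.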

22 pages, 432 KiB  
Article
Generalized Species Richness Indices for Diversity
by Zhiyi Zhang
Entropy 2022, 24(10), 1504; https://doi.org/10.3390/e24101504 - 21 Oct 2022
Cited by 2 | Viewed by 1395
Abstract
A generalized notion of species richness is introduced. The generalization embeds the popular index of species richness on the boundary of a family of diversity indices each of which is the number of species in the community after a small proportion of individuals belonging to the least minorities is trimmed. It is established that the generalized species richness indices satisfy a weak version of the usual axioms for diversity indices, are qualitatively robust against small perturbations in the underlying distribution, and are collectively complete with respect to all information of diversity. In addition to a natural plug-in estimator of the generalized species richness, a bias-adjusted estimator is proposed, and its statistical reliability is gauged via bootstrapping. Finally an ecological example and supportive simulation results are given. Full article
(This article belongs to the Section Entropy and Biology)
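
The trimming idea behind the generalized index can be illustrated directly. This is a hypothetical reading of the definition, not the paper's estimator: remove at most a proportion τ of individuals starting from the rarest species, then count the species that remain; τ = 0 recovers ordinary species richness.

```python
def generalized_richness(proportions, tau):
    """Species count after trimming the rarest species whose cumulative
    share of individuals does not exceed tau. Illustrative sketch only;
    the paper's plug-in and bias-adjusted estimators work from samples."""
    cum, trimmed = 0.0, 0
    for p in sorted(proportions):      # rarest species first
        if cum + p <= tau:
            cum += p
            trimmed += 1
        else:
            break
    return len(proportions) - trimmed

community = [0.5, 0.3, 0.15, 0.05]
r0 = generalized_richness(community, 0.0)   # ordinary species richness
r1 = generalized_richness(community, 0.1)   # the 5% minority is trimmed
```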

24 pages, 2837 KiB  
Article
Comparison of Entropy Calculation Methods for Ransomware Encrypted File Identification
by Simon R. Davies, Richard Macfarlane and William J. Buchanan
Entropy 2022, 24(10), 1503; https://doi.org/10.3390/e24101503 - 21 Oct 2022
Cited by 7 | Viewed by 3014
Abstract
Ransomware is a malicious class of software that utilises encryption to implement an attack on system availability. The target’s data remains encrypted and is held captive by the attacker until a ransom demand is met. A common approach used by many crypto-ransomware detection techniques is to monitor file system activity and attempt to identify encrypted files being written to disk, often using a file’s entropy as an indicator of encryption. However, the descriptions of these techniques often give little or no discussion of why a particular entropy calculation technique was selected, or any justification for choosing it over the alternatives. The Shannon method of entropy calculation is the most commonly used technique for file encryption identification in crypto-ransomware detection. Overall, correctly encrypted data should be indistinguishable from random data, so apart from the standard mathematical entropy calculations such as Chi-Square (χ2), Shannon Entropy and Serial Correlation, the test suites used to validate the output from pseudo-random number generators would also be suited to perform this analysis. The hypothesis is that there is a fundamental difference between the entropy methods and that the best of them can be used to better detect ransomware-encrypted files. The paper compares the accuracy of 53 distinct tests in differentiating between encrypted data and other file types. The testing is broken down into two phases: the first phase identifies potential candidate tests, and the second phase evaluates these candidates thoroughly. To ensure that the tests were sufficiently robust, the NapierOne dataset is used. This dataset contains thousands of examples of the most commonly used file types, as well as examples of files that have been encrypted by crypto-ransomware.
During the second phase of testing, 11 candidate entropy calculation techniques were tested against more than 270,000 individual files, resulting in nearly three million separate calculations. The overall accuracy of each individual test’s ability to differentiate between files encrypted using crypto-ransomware and other file types is then evaluated, and each test is compared using this metric in an attempt to identify the entropy method best suited for encrypted file identification. An investigation was also undertaken to determine whether a hybrid approach, in which the results of multiple tests are combined, could achieve an improvement in accuracy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
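
The byte-level Shannon entropy at the centre of this comparison takes only a few lines (an illustrative computation, not the paper's test-suite code): well-encrypted data looks uniform and scores near the 8 bits-per-byte maximum, while structured plaintext scores much lower.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte sequence, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

low = shannon_entropy(b"AAAABBBB" * 100)         # repetitive plaintext
high = shannon_entropy(bytes(range(256)) * 100)  # uniform, encryption-like
```

Detectors then compare the score against a threshold near 8 bits/byte; the paper's point is that this single statistic is only one of many candidate randomness tests.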

17 pages, 327 KiB  
Article
New Challenges for Classical and Quantum Probability
by Luigi Accardi
Entropy 2022, 24(10), 1502; https://doi.org/10.3390/e24101502 - 21 Oct 2022
Cited by 1 | Viewed by 1355
Abstract
The discovery that any classical random variable with all moments gives rise to a full quantum theory (that in the Gaussian and Poisson cases coincides with the usual one) implies that a quantum–type formalism will enter into practically all applications of classical probability and statistics. The new challenge consists in finding the classical interpretation, for different types of classical contexts, of typical quantum notions such as entanglement, normal order, equilibrium states, etc. As an example, every classical symmetric random variable has a canonically associated conjugate momentum. In usual quantum mechanics (associated with Gaussian or Poisson classical random variables), the interpretation of the momentum operator was already clear to Heisenberg. How should we interpret the conjugate momentum operator associated with classical random variables outside the Gauss–Poisson class? The Introduction is intended to place in historical perspective the recent developments that are the main object of the present exposition. Full article
(This article belongs to the Special Issue Quantum Information and Probability: From Foundations to Engineering)
10 pages, 361 KiB  
Article
Redundancy and Synergy of an Entangling Cloner in Continuous-Variable Quantum Communication
by Vladyslav C. Usenko
Entropy 2022, 24(10), 1501; https://doi.org/10.3390/e24101501 - 21 Oct 2022
Viewed by 1261
Abstract
We address the minimization of information leakage from continuous-variable quantum channels. It is known that the regime of minimum leakage is accessible for modulated signal states with variance equivalent to shot noise, i.e., vacuum fluctuations, in the case of collective attacks. Here, we derive the same condition for individual attacks and analytically study the properties of the mutual information quantities in and out of this regime. We show that in this regime a joint measurement on the modes of a two-mode entangling cloner, which is the optimal individual eavesdropping attack in a noisy Gaussian channel, is no more effective than independent measurements on the modes. Varying the variance of the signal outside this regime, we observe nontrivial statistical effects of either redundancy or synergy between the measurements of the two modes of the entangling cloner. The result reveals the non-optimality of the entangling cloner individual attack for sub-shot-noise modulated signals. Considering the communication between the cloner modes, we show the advantage of knowing the residual noise after its interaction with the cloner, and we extend the result to a two-cloner scheme. Full article
(This article belongs to the Special Issue Quantum Communication)

21 pages, 2860 KiB  
Article
Deep Matrix Factorization Based on Convolutional Neural Networks for Image Inpainting
by Xiaoxuan Ma, Zhiwen Li and Hengyou Wang
Entropy 2022, 24(10), 1500; https://doi.org/10.3390/e24101500 - 20 Oct 2022
Viewed by 1965
Abstract
In this work, we formulate the image in-painting as a matrix completion problem. Traditional matrix completion methods are generally based on linear models, assuming that the matrix is low rank. When the original matrix is large scale and the observed elements are few, they will easily lead to over-fitting and their performance will also decrease significantly. Recently, researchers have tried to apply deep learning and nonlinear techniques to solve matrix completion. However, most of the existing deep learning-based methods restore each column or row of the matrix independently, which loses the global structure information of the matrix and therefore does not achieve the expected results in the image in-painting. In this paper, we propose a deep matrix factorization completion network (DMFCNet) for image in-painting by combining deep learning and a traditional matrix completion model. The main idea of DMFCNet is to map iterative updates of variables from a traditional matrix completion model into a fixed depth neural network. The potential relationships between observed matrix data are learned in a trainable end-to-end manner, which leads to a high-performance and easy-to-deploy nonlinear solution. Experimental results show that DMFCNet can provide higher matrix completion accuracy than the state-of-the-art matrix completion methods in a shorter running time. Full article
(This article belongs to the Topic Recent Trends in Image Processing and Pattern Recognition)
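
For contrast with DMFCNet, the classical linear low-rank baseline the abstract refers to can be sketched as iterative singular-value thresholding. This is a generic textbook scheme, not the paper's method, and the parameter values are arbitrary:

```python
import numpy as np

def svt_complete(M, mask, tau=0.5, iters=200):
    """Low-rank matrix completion baseline: fill missing entries with 0,
    repeatedly shrink the singular values, and re-impose the observed
    entries. A linear model; it over-fits when few entries are observed."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        X[mask] = M[mask]              # keep observed entries fixed
    return X

rng = np.random.default_rng(1)
a, b = rng.standard_normal((6, 1)), rng.standard_normal((1, 6))
M = a @ b                              # rank-1 "image"
mask = rng.random(M.shape) > 0.33      # roughly 2/3 of entries observed
X = svt_complete(M, mask)
```

DMFCNet's idea, per the abstract, is to unroll iterative updates like these into a fixed-depth trainable network rather than running them to convergence.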

13 pages, 378 KiB  
Article
Three Efficient All-Erasure Decoding Methods for Blaum–Roth Codes
by Weijie Zhou and Hanxu Hou
Entropy 2022, 24(10), 1499; https://doi.org/10.3390/e24101499 - 20 Oct 2022
Viewed by 1207
Abstract
Blaum–Roth codes are binary maximum distance separable (MDS) array codes over the binary quotient ring F2[x]/(Mp(x)), where Mp(x) = 1 + x + ⋯ + x^(p−1) and p is a prime number. Two existing all-erasure decoding methods for Blaum–Roth codes are the syndrome-based decoding method and the interpolation-based decoding method. In this paper, we propose a modified syndrome-based decoding method and a modified interpolation-based decoding method that have lower decoding complexity than the syndrome-based decoding method and the interpolation-based decoding method, respectively. Moreover, we present a fast decoding method for Blaum–Roth codes based on the LU decomposition of the Vandermonde matrix that has a lower decoding complexity than the two modified decoding methods for most of the parameters. Full article
(This article belongs to the Special Issue Information Theory and Network Coding II)
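
The quotient-ring arithmetic underlying these codes has a neat bit-level form. Since Mp(x) = 0 in the ring, x^(p−1) = 1 + x + ⋯ + x^(p−2), so multiplying by x is a coefficient shift whose overflow bit folds back into every position; and because x^p + 1 = (x + 1)·Mp(x) over F2, x^p = 1 in the ring. The sketch below is illustrative only (not a decoder) and checks this for p = 5:

```python
def mul_x(a):
    """Multiply a ring element of F2[x]/(Mp(x)) by x, with a given as
    coefficients [a_0, ..., a_{p-2}]. Since Mp(x) = 0 in the ring,
    x^(p-1) = 1 + x + ... + x^(p-2), so the overflowing top coefficient
    is XORed back into every position."""
    carry = a[-1]
    shifted = [0] + a[:-1]             # multiply by x
    return [c ^ carry for c in shifted]

# Sanity check: x^p + 1 = (x + 1) * Mp(x) over F2, hence x^p = 1.
p = 5
r = [1, 0, 0, 0]                       # the element 1, with p - 1 = 4 coefficients
powers = []
for _ in range(p):
    r = mul_x(r)
    powers.append(list(r))
```

Multiplication by x being a shift-and-XOR is what makes syndrome computations for these codes cheap in practice.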
17 pages, 1095 KiB  
Article
How the Brain Becomes the Mind: Can Thermodynamics Explain the Emergence and Nature of Emotions?
by Éva Déli, James F. Peters and Zoltán Kisvárday
Entropy 2022, 24(10), 1498; https://doi.org/10.3390/e24101498 - 20 Oct 2022
Cited by 1 | Viewed by 5586
Abstract
The neural systems’ electric activities are fundamental for the phenomenology of consciousness. Sensory perception triggers an information/energy exchange with the environment, but the brain’s recurrent activations maintain a resting state with constant parameters. Therefore, perception forms a closed thermodynamic cycle. In physics, the Carnot engine is an ideal thermodynamic cycle that converts heat from a hot reservoir into work, or inversely, requires work to transfer heat from a low- to a high-temperature reservoir (the reversed Carnot cycle). We analyze the high-entropy brain via the endothermic reversed Carnot cycle. Its irreversible activations provide temporal directionality for future orientation. A flexible transfer between neural states inspires openness and creativity. In contrast, the low-entropy resting state parallels reversible activations, which impose past focus via repetitive thinking, remorse, and regret. The exothermic Carnot cycle degrades mental energy. Therefore, the brain’s energy/information balance formulates motivation, sensed as positive or negative emotions. Our work provides an analytical perspective on positive and negative emotions and spontaneous behavior from the free energy principle. Furthermore, electrical activities, thoughts, and beliefs lend themselves to a temporal organization, an orthogonal condition to physical systems. Here, we suggest that an experimental validation of the thermodynamic origin of emotions might inspire better treatment options for mental diseases. Full article
(This article belongs to the Special Issue Brain Connectivity Complex Systems)

10 pages, 286 KiB  
Article
Behavioral Capital Theory via Canonical Quantization
by Raymond J. Hawkins and Joseph L. D’Anna
Entropy 2022, 24(10), 1497; https://doi.org/10.3390/e24101497 - 20 Oct 2022
Cited by 1 | Viewed by 1390
Abstract
We show how a behavioral form of capital theory can be derived using canonical quantization. In particular, we introduce quantum cognition into capital theory by applying Dirac’s canonical quantization approach to Weitzman’s Hamiltonian formulation of capital theory, the justification for the use of quantum cognition being the incompatibility of questions encountered in the investment decision-making process. We illustrate the utility of this approach by deriving the capital-investment commutator for a canonical dynamic investment problem. Full article
(This article belongs to the Special Issue Quantum Models of Cognition and Decision-Making II)

18 pages, 1029 KiB  
Article
Multi-Task Learning and Improved TextRank for Knowledge Graph Completion
by Hao Tian, Xiaoxiong Zhang, Yuhan Wang and Daojian Zeng
Entropy 2022, 24(10), 1495; https://doi.org/10.3390/e24101495 - 20 Oct 2022
Cited by 3 | Viewed by 1903
Abstract
Knowledge graph completion is an important technology for supplementing knowledge graphs and improving data quality. However, the existing knowledge graph completion methods ignore the features of triple relations, and the introduced entity description texts are long and redundant. To address these problems, this study proposes a multi-task learning and improved TextRank for knowledge graph completion (MIT-KGC) model. The key contexts are first extracted from redundant entity descriptions using the improved TextRank algorithm. Then, a lite bidirectional encoder representations from transformers (ALBERT) is used as the text encoder to reduce the parameters of the model. Subsequently, the multi-task learning method is utilized to fine-tune the model by effectively integrating the entity and relation features. Based on the datasets of WN18RR, FB15k-237, and DBpedia50k, experiments were conducted with the proposed model and the results showed that, compared with traditional methods, the mean rank (MR), top 10 hit ratio (Hit@10), and top three hit ratio (Hit@3) were enhanced by 38, 1.3%, and 1.9%, respectively, on WN18RR. Additionally, the MR and Hit@10 were increased by 23 and 0.7%, respectively, on FB15k-237. The model also improved the Hit@3 and the top one hit ratio (Hit@1) by 3.1% and 1.5% on the dataset DBpedia50k, respectively, verifying the validity of the model. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

18 pages, 1255 KiB  
Article
LMI-Based Delayed Output Feedback Controller Design for a Class of Fractional-Order Neutral-Type Delay Systems Using Guaranteed Cost Control Approach
by Zahra Sadat Aghayan, Alireza Alfi and António M. Lopes
Entropy 2022, 24(10), 1496; https://doi.org/10.3390/e24101496 - 19 Oct 2022
Cited by 4 | Viewed by 1514
Abstract
In this research work, we deal with the stabilization of uncertain fractional-order neutral systems with delayed input. To tackle this problem, the guaranteed cost control method is considered. The purpose is to design a proportional–differential output feedback controller to obtain a satisfactory performance. The stability of the overall system is described in terms of matrix inequalities, and the corresponding analysis is performed in the perspective of Lyapunov’s theory. Two application examples verify the analytic findings. Full article

30 pages, 8789 KiB  
Article
Multi-Attribute Decision Making with Einstein Aggregation Operators in Complex Q-Rung Orthopair Fuzzy Hypersoft Environments
by Changyan Ying, Wushour Slamu and Changtian Ying
Entropy 2022, 24(10), 1494; https://doi.org/10.3390/e24101494 - 19 Oct 2022
Cited by 4 | Viewed by 1348
Abstract
The purpose of our research is to extend the formal representation of the human mind to the concept of the complex q-rung orthopair fuzzy hypersoft set (Cq-ROFHSS), a more general hybrid theory. It can capture a great deal of the imprecision and ambiguity that are common in human interpretations. It provides a multiparameterized mathematical tool for the order-based fuzzy modeling of contradictory two-dimensional data, offering a more effective way of expressing time-period problems as well as two-dimensional information within a dataset. Thus, the proposed theory combines the parametric structure of complex q-rung orthopair fuzzy sets and hypersoft sets. Through the use of the parameter q, the framework captures information beyond the limited space of complex intuitionistic fuzzy hypersoft sets and complex Pythagorean fuzzy hypersoft sets. By establishing basic set-theoretic operations, we demonstrate some of the fundamental properties of the model. To expand the mathematical toolbox in this field, Einstein and other basic operations are introduced for complex q-rung orthopair fuzzy hypersoft values. The relationship between these operations and existing methods demonstrates the model's exceptional flexibility. The Einstein aggregation operator, score function, and accuracy function are used to develop two multi-attribute decision-making algorithms, which prioritize ideal schemes under Cq-ROFHSS based on the score and accuracy functions and capture subtle differences in periodically inconsistent data sets. The feasibility of the approach is demonstrated through a case study of selected distributed control systems, and the rationality of these strategies is confirmed by comparison with mainstream techniques. Additionally, we demonstrate that these results are compatible with explicit histograms and Spearman correlation analyses. The strengths of each approach are analyzed in a comparative manner, and the proposed model is examined and compared with other theories, demonstrating its strength, validity, and flexibility. Full article

40 pages, 813 KiB  
Article
A Hierarchy of Probability, Fluid and Generalized Densities for the Eulerian Velocivolumetric Description of Fluid Flow, for New Families of Conservation Laws
by Robert K. Niven
Entropy 2022, 24(10), 1493; https://doi.org/10.3390/e24101493 - 19 Oct 2022
Cited by 2 | Viewed by 1534
Abstract
The Reynolds transport theorem occupies a central place in continuum mechanics, providing a generalized integral conservation equation for the transport of any conserved quantity within a fluid or material volume, which can be connected to its corresponding differential equation. Recently, a more generalized framework was presented for this theorem, enabling parametric transformations between positions on a manifold or in any generalized coordinate space, exploiting the underlying continuous multivariate (Lie) symmetries of a vector or tensor field associated with a conserved quantity. We explore the implications of this framework for fluid flow systems, based on an Eulerian velocivolumetric (position-velocity) description of fluid flow. The analysis invokes a hierarchy of five probability density functions, which by convolution are used to define five fluid densities and generalized densities relevant to this description. We derive 11 formulations of the generalized Reynolds transport theorem for different choices of the coordinate space, parameter space and density, only the first of which is commonly known. These are used to generate a table of integral and differential conservation laws applicable to each formulation, for eight important conserved quantities (fluid mass, species mass, linear momentum, angular momentum, energy, charge, entropy and probability). The findings substantially expand the set of conservation laws for the analysis of fluid flow and dynamical systems. Full article

16 pages, 4328 KiB  
Article
The Interpretation of Graphical Information in Word Processing
by Mária Csernoch, János Máth and Tímea Nagy
Entropy 2022, 24(10), 1492; https://doi.org/10.3390/e24101492 - 19 Oct 2022
Cited by 2 | Viewed by 1332
Abstract
Word processing is one of the most popular digital activities. Despite its popularity, it is haunted by false assumptions, misconceptions, and ineffective and inefficient practices leading to erroneous digital text-based documents. The focus of the present paper is automated numbering and distinguishing between manual and automated numbering. In general, one bit of information on the GUI—the position of the cursor—is enough to tell whether a numbering is manual or automated. To decide how much information must be put on the channel—the teaching–learning process—in order to reach end-users, we designed and implemented a method that includes the analysis of teaching, learning, tutorial, and testing sources, the collection and analysis of Word documents shared on the internet or in closed groups, the testing of grade 7–10 students’ knowledge in automated numbering, and calculating the entropy of automated numbering. The combination of the test results and the semantics of the automated numbering was used to measure the entropy of automated numbering. It was found that to transfer one bit of information on the GUI, at least three bits of information must be transferred during the teaching–learning process. Furthermore, it was revealed that the information connected to numbering is not the pure use of tools, but the semantics of this feature put into a real-world context. Full article

18 pages, 4208 KiB  
Article
Four-Objective Optimization of an Irreversible Stirling Heat Engine with Linear Phenomenological Heat-Transfer Law
by Haoran Xu, Lingen Chen, Yanlin Ge and Huijun Feng
Entropy 2022, 24(10), 1491; https://doi.org/10.3390/e24101491 - 19 Oct 2022
Cited by 4 | Viewed by 1326
Abstract
This paper combines the mechanical efficiency theory and finite time thermodynamic theory to perform optimization on an irreversible Stirling heat-engine cycle, in which heat transfer between working fluid and heat reservoir obeys the linear phenomenological heat-transfer law. There are mechanical losses, as well as heat leakage, thermal resistance, and regeneration loss. We treated the temperature ratio x of the working fluid and the volume compression ratio λ as optimization variables, and used the NSGA-II algorithm to carry out multi-objective optimization on four optimization objectives, namely, the dimensionless shaft power output P̄s, braking thermal efficiency ηs, dimensionless efficient power Ēp and dimensionless power density P̄d. The optimal solutions of four-, three-, two-, and single-objective optimizations are reached by selecting the minimum deviation indexes D with the three decision-making strategies, namely, TOPSIS, LINMAP, and Shannon Entropy. The optimization results show that the D values reached by the TOPSIS and LINMAP strategies are both 0.1683 and better than the Shannon Entropy strategy for four-objective optimization, while the D values reached for single-objective optimizations at maximum P̄s, ηs, Ēp, and P̄d conditions are 0.1978, 0.8624, 0.3319, and 0.3032, which are all bigger than 0.1683. This indicates that multi-objective optimization results are better when choosing appropriate decision-making strategies. Full article
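
The TOPSIS decision-making step used to pick a compromise solution from a Pareto set can be sketched generically. The numbers, equal weights, and two objectives below are assumptions for illustration; the paper ranks four thermodynamic objectives:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j] is the value of objective j for alternative i;
    benefit[j] is True when larger is better. Generic textbook TOPSIS."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each objective column, then apply the weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal point
        d_neg = math.dist(row, worst)   # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores                       # higher = closer to the ideal

# three candidate engine designs, two benefit objectives (e.g. power, efficiency)
scores = topsis([[1.0, 0.30], [0.9, 0.35], [0.5, 0.20]],
                weights=[0.5, 0.5], benefit=[True, True])
best = scores.index(max(scores))
```

The balanced middle design wins here, which is exactly the kind of compromise pick these strategies are used for on a Pareto front.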

15 pages, 529 KiB  
Article
Audio Augmentation for Non-Native Children’s Speech Recognition through Discriminative Learning
by Kodali Radha and Mohan Bansal
Entropy 2022, 24(10), 1490; https://doi.org/10.3390/e24101490 - 19 Oct 2022
Cited by 10 | Viewed by 1999
Abstract
Automatic speech recognition (ASR) in children is a rapidly evolving field, as children become more accustomed to interacting with virtual assistants, such as Amazon Echo, Cortana, and other smart speakers, and it has advanced the human–computer interaction in recent generations. Furthermore, non-native children are observed to exhibit a diverse range of reading errors during second language (L2) acquisition, such as lexical disfluency, hesitations, intra-word switching, and word repetitions, which are not yet addressed, resulting in ASR’s struggle to recognize non-native children’s speech. The main objective of this study is to develop a non-native children’s speech recognition system on top of feature-space discriminative models, such as feature-space maximum mutual information (fMMI) and boosted feature-space maximum mutual information (fbMMI). Harnessing the collaborative power of speed perturbation-based data augmentation on the original children’s speech corpora yields an effective performance. The corpus focuses on different speaking styles of children, together with read speech and spontaneous speech, in order to investigate the impact of non-native children’s L2 speaking proficiency on speech recognition systems. The experiments revealed that feature-space MMI models with steadily increasing speed perturbation factors outperform traditional ASR baseline models. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches in Speech Processing and Recognition)
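The core augmentation step, speed perturbation, can be sketched in a few lines. This is a minimal illustration using plain linear-interpolation resampling; the `speed_perturb` helper and the 0.9/1.0/1.1 perturbation factors are assumptions modelled on common ASR recipes, not the authors' code:

```python
import numpy as np

def speed_perturb(signal, factor):
    """Resample a 1-D audio signal to simulate a change in speaking rate.

    factor > 1 speeds the utterance up (fewer samples);
    factor < 1 slows it down (more samples).
    """
    n_out = int(round(len(signal) / factor))
    # Linear interpolation onto a stretched/compressed time grid.
    old_t = np.arange(len(signal))
    new_t = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_t, old_t, signal)

# Augment a corpus with three perturbation factors, tripling the data.
rng = np.random.default_rng(0)
utterance = rng.standard_normal(16000)   # stand-in for 1 s of 16 kHz audio
augmented = {f: speed_perturb(utterance, f) for f in (0.9, 1.0, 1.1)}
```

Each perturbed copy is treated as an additional training utterance, which is how speed perturbation enlarges a small children's speech corpus without new recordings.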
15 pages, 3177 KiB  
Article
Template Attack of LWE/LWR-Based Schemes with Cyclic Message Rotation
by Yajing Chang, Yingjian Yan, Chunsheng Zhu and Pengfei Guo
Entropy 2022, 24(10), 1489; https://doi.org/10.3390/e24101489 - 18 Oct 2022
Cited by 4 | Viewed by 1740
Abstract
The side-channel security of lattice-based post-quantum cryptography has gained extensive attention since the standardization of post-quantum cryptography. Based on the leakage mechanism in the decapsulation stage of LWE/LWR-based post-quantum cryptography, a message recovery method with templates and cyclic message rotation targeting the message decoding operation was proposed. The templates were constructed for the intermediate state based on the Hamming weight model, and cyclic message rotation was used to construct special ciphertexts. Using the power leakage during operation, secret messages in LWE/LWR-based schemes were recovered. The proposed method was verified on CRYSTALS-Kyber. The experimental results demonstrated that this method can successfully recover the secret messages used in the encapsulation stage, thereby recovering the shared key. Compared with existing methods, the power traces required for template building and for the attack were both reduced. The success rate was significantly increased under low SNR, indicating better performance at a lower recovery cost. The message recovery success rate reached 99.6% with sufficient SNR. Full article
(This article belongs to the Special Issue An Information-Theoretic Approach to Side-Channel Analysis)
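A Hamming-weight template attack of the kind described can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes a single leaking byte, noiseless averaged profiling traces, and a least-squares matching rule:

```python
import numpy as np

def hamming_weight(x):
    # Number of set bits in an 8-bit intermediate value.
    return bin(x & 0xFF).count("1")

def build_templates(traces, intermediates):
    """Group profiling traces by the Hamming weight of the leaking
    intermediate and average them: one template per HW class (0..8)."""
    templates = {}
    for hw in range(9):
        idx = [i for i, v in enumerate(intermediates) if hamming_weight(v) == hw]
        if idx:
            templates[hw] = np.mean(traces[idx], axis=0)
    return templates

def match(trace, templates):
    # Classify a new trace by the nearest template (least-squares distance).
    return min(templates, key=lambda hw: np.sum((trace - templates[hw]) ** 2))

# Toy profiling set: one noiseless trace per possible byte value.
values = list(range(256))
traces = np.array([[hamming_weight(v)] * 4 for v in values], dtype=float)
templates = build_templates(traces, values)
```

In a real attack the matched Hamming weights of chosen (here, cyclically rotated) ciphertexts narrow the candidate message bits until the message is recovered.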
9 pages, 420 KiB  
Article
The QQUIC Transport Protocol: Quantum-Assisted UDP Internet Connections
by Peng Yan and Nengkun Yu
Entropy 2022, 24(10), 1488; https://doi.org/10.3390/e24101488 - 18 Oct 2022
Cited by 2 | Viewed by 1662
Abstract
Quantum key distribution, first proposed in 1984, is a commercialized secure communication method that enables two parties to produce a shared random secret key using quantum mechanics. We propose the QQUIC (Quantum-assisted Quick UDP Internet Connections) transport protocol, which modifies the well-known QUIC transport protocol by employing quantum key distribution instead of the original classical algorithms in the key exchange stage. Due to the provable security of quantum key distribution, the security of the QQUIC key does not depend on computational assumptions. Surprisingly, QQUIC can even reduce network latency in some circumstances compared with QUIC. To achieve this, the attached quantum connections are used as dedicated lines for key generation. Full article
(This article belongs to the Special Issue New Advances in Quantum Communication and Networks)
29 pages, 1529 KiB  
Article
Predicting Bitcoin (BTC) Price in the Context of Economic Theories: A Machine Learning Approach
by Sahar Erfanian, Yewang Zhou, Amar Razzaq, Azhar Abbas, Asif Ali Safeer and Teng Li
Entropy 2022, 24(10), 1487; https://doi.org/10.3390/e24101487 - 18 Oct 2022
Cited by 7 | Viewed by 3618
Abstract
Bitcoin (BTC)—the first cryptocurrency—is a decentralized network used to make private, anonymous, peer-to-peer transactions worldwide, yet its pricing suffers from numerous issues due to its arbitrary nature, limiting its use amid skepticism among businesses and households. However, machine learning approaches offer vast scope to predict future prices precisely. A major problem with previous research on BTC price prediction is that it is primarily empirical, lacking sufficient analytical support to back up its claims. Therefore, this study aims to address the BTC price prediction problem in the context of both macroeconomic and microeconomic theories by applying new machine learning methods. Previous work, however, shows mixed evidence on the superiority of machine learning over statistical analysis and vice versa, so more research is needed. This paper applies comparative approaches, including ordinary least squares (OLS), ensemble learning, support vector regression (SVR), and multilayer perceptron (MLP), to investigate whether macroeconomic, microeconomic, technical, and blockchain indicators based on economic theories predict the BTC price. The findings indicate that some technical indicators are significant short-run BTC price predictors, confirming the validity of technical analysis, while macroeconomic and blockchain indicators are significant long-term predictors, implying that supply, demand, and cost-based pricing theories underlie BTC price prediction. Likewise, SVR is found to be superior to the other machine learning and traditional models. The innovation of this research is examining BTC price prediction through theoretical aspects. The study contributes to international finance as a reference for asset pricing and improved investment decision-making; it contributes to the economics of BTC price prediction by introducing its theoretical background; and, since it remains debated whether machine learning can beat traditional methods in BTC price prediction, it contributes to machine learning configuration, helping developers use it as a benchmark. Full article
(This article belongs to the Special Issue Signatures of Maturity in Cryptocurrency Market)
19 pages, 4168 KiB  
Article
A Hyper-Chaotically Encrypted Robust Digital Image Watermarking Method with Large Capacity Using Compress Sensing on a Hybrid Domain
by Zhen Yang, Qingwei Sun, Yunliang Qi, Shouliang Li and Fengyuan Ren
Entropy 2022, 24(10), 1486; https://doi.org/10.3390/e24101486 - 18 Oct 2022
Cited by 6 | Viewed by 2029
Abstract
Digital watermarking is a promising technique for both image copyright protection and secure transmission. However, many existing techniques fall short of expectations in achieving robustness and capacity simultaneously. In this paper, we propose a robust semi-blind image watermarking scheme with high capacity. Firstly, we perform a discrete wavelet transform (DWT) on the carrier image. Then, the watermark images are compressed via a compressive sampling technique to save storage space. Thirdly, a Combination of One- and Two-Dimensional Chaotic Maps based on the Tent and Logistic map (TL-COTDCM) is used to scramble the compressed watermark image with high security, dramatically reducing the false positive problem (FPP). Finally, singular value decomposition (SVD) is used to embed the scrambled watermark into the decomposed carrier image. With this scheme, eight 256×256 grayscale watermark images are perfectly embedded into a single 512×512 carrier image, a capacity roughly eight times that of existing watermarking techniques on average. The scheme was tested against several common attacks at high strength, and the experimental results show the superiority of our method in terms of the two most widely used evaluation indicators, the normalized correlation coefficient (NCC) and the peak signal-to-noise ratio (PSNR). Our method outperforms the state of the art in robustness, security, and capacity, exhibiting great potential for multimedia applications in the immediate future. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
30 pages, 491 KiB  
Review
Flows of Substances in Networks and Network Channels: Selected Results and Applications
by Zlatinka I. Dimitrova
Entropy 2022, 24(10), 1485; https://doi.org/10.3390/e24101485 - 18 Oct 2022
Cited by 3 | Viewed by 2070
Abstract
This review paper is devoted to a brief overview of results and models concerning flows in networks and channels of networks. First of all, we conduct a survey of the literature in several areas of research connected to these flows. Then, we mention certain basic mathematical models of flows in networks that are based on differential equations. We give special attention to several models for flows of substances in channels of networks. For stationary cases of these flows, we present probability distributions connected to the substance in the nodes of the channel for two basic models: the model of a channel with many arms modeled by differential equations and the model of a simple channel with flows of substances modeled by difference equations. The probability distributions obtained contain as specific cases any probability distribution of a discrete random variable that takes values of 0, 1, …. We also mention applications of the considered models, such as applications for modeling migration flows. Special attention is given to the connection of the theory of stationary flows in channels of networks and the theory of the growth of random networks. Full article
12 pages, 1503 KiB  
Article
Modelling Spirals of Silence and Echo Chambers by Learning from the Feedback of Others
by Sven Banisch, Felix Gaisbauer and Eckehard Olbrich
Entropy 2022, 24(10), 1484; https://doi.org/10.3390/e24101484 - 18 Oct 2022
Cited by 4 | Viewed by 2009
Abstract
What are the mechanisms by which groups with certain opinions gain public voice and force others holding a different view into silence? Furthermore, how does social media play into this? Drawing on neuroscientific insights into the processing of social feedback, we develop a theoretical model that allows us to address these questions. In repeated interactions, individuals learn whether their opinion meets public approval and refrain from expressing their standpoint if it is socially sanctioned. In a social network sorted around opinions, an agent forms a distorted impression of public opinion enforced by the communicative activity of the different camps. Even strong majorities can be forced into silence if a minority acts as a cohesive whole. On the other hand, the strong social organisation around opinions enabled by digital platforms favours collective regimes in which opposing voices are expressed and compete for primacy in public. This paper highlights the role that the basic mechanisms of social information processing play in massive computer-mediated interactions on opinions. Full article
(This article belongs to the Special Issue Statistical Physics of Opinion Formation and Social Phenomena)
18 pages, 295 KiB  
Article
Probabilistic Pairwise Model Comparisons Based on Bootstrap Estimators of the Kullback–Leibler Discrepancy
by Andres Dajles and Joseph Cavanaugh
Entropy 2022, 24(10), 1483; https://doi.org/10.3390/e24101483 - 18 Oct 2022
Cited by 1 | Viewed by 1194
Abstract
When choosing between two candidate models, classical hypothesis testing presents two main limitations: first, the models being tested have to be nested, and second, one of the candidate models must subsume the structure of the true data-generating model. Discrepancy measures have been used as an alternative method to select models without the need to rely upon the aforementioned assumptions. In this paper, we utilize a bootstrap approximation of the Kullback–Leibler discrepancy (BD) to estimate the probability that the fitted null model is closer to the underlying generating model than the fitted alternative model. We propose correcting for the bias of the BD estimator either by adding a bootstrap-based correction or by adding the number of parameters in the candidate model. We exemplify the effect of these corrections on the estimator of the discrepancy probability and explore their behavior in different model comparison settings. Full article
(This article belongs to the Special Issue Information and Divergence Measures)
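The bootstrap estimation of the discrepancy probability can be illustrated with a small sketch. This is an assumption-laden toy version, not the authors' estimator: the null model is a Gaussian with mean fixed at zero, the alternative estimates the mean, and the empirical negative log-likelihood stands in for the Kullback–Leibler discrepancy (they agree up to an additive constant):

```python
import numpy as np

def neg_loglik(x, mu, sigma):
    # Gaussian negative log-likelihood: the empirical analogue of the
    # Kullback-Leibler discrepancy up to an additive constant.
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (x - mu) ** 2 / (2 * sigma**2))

def prob_null_closer(x, n_boot=500, seed=1):
    """Bootstrap estimate of the probability that the fitted null model
    (mean fixed at 0) is closer to the generating model than the
    fitted alternative (mean estimated)."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        # Fit both candidates on the bootstrap resample ...
        sd0 = np.sqrt(np.mean(xb**2))        # null: N(0, sd0^2)
        mu1, sd1 = xb.mean(), xb.std()       # alternative: N(mu1, sd1^2)
        # ... and evaluate their discrepancies on the original sample.
        wins += neg_loglik(x, 0.0, sd0) <= neg_loglik(x, mu1, sd1)
    return wins / n_boot

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=200)   # data generated under the null
p = prob_null_closer(data)
```

The paper's bias corrections (bootstrap-based or parameter-count penalties) would be applied to the discrepancy estimates inside the loop before the comparison.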
15 pages, 2745 KiB  
Article
Simplicial Persistence of Financial Markets: Filtering, Generative Processes and Structural Risk
by Jeremy Turiel, Paolo Barucca and Tomaso Aste
Entropy 2022, 24(10), 1482; https://doi.org/10.3390/e24101482 - 18 Oct 2022
Viewed by 1527
Abstract
We introduce simplicial persistence, a measure of the time evolution of motifs in networks obtained from correlation filtering. We observe long memory in the evolution of structures, with two power-law decay regimes in the number of persistent simplicial complexes. Null models of the underlying time series are tested to investigate properties of the generative process and its evolutionary constraints. Networks are generated both with a topological embedding network filtering technique called TMFG and by thresholding, showing that the TMFG method identifies higher-order structures throughout the market sample where thresholding methods fail. The decay exponents of these long-memory processes are used to characterise financial markets based on their efficiency and liquidity. We find that more liquid markets tend to have slower persistence decay. This appears to contrast with the common understanding that efficient markets are more random. We argue that they are indeed less predictable in terms of the dynamics of each single variable but more predictable in terms of the collective evolution of the variables. This could imply higher fragility to systemic shocks. Full article
(This article belongs to the Special Issue Complex Network Analysis in Econometrics)
9 pages, 6364 KiB  
Article
Status Forecasting Based on the Baseline Information Using Logistic Regression
by Xin Zhao and Xiaokai Nie
Entropy 2022, 24(10), 1481; https://doi.org/10.3390/e24101481 - 17 Oct 2022
Cited by 1 | Viewed by 1605
Abstract
In the status forecasting problem, classification models such as logistic regression, with physiological, diagnostic, and treatment variables as inputs, are typical modelling choices. However, the parameter values and model performance differ among individuals with different baseline information. To cope with these difficulties, a subgroup analysis is conducted, in which ANOVA and rpart models are used to explore the influence of baseline information on the parameters and model performance. The results show that the logistic regression model achieves satisfactory performance, generally higher than 0.95 in AUC and around 0.9 in F1 and balanced accuracy. The subgroup analysis provides prior parameter values for monitoring variables including SpO2, milrinone, non-opioid analgesics, and dobutamine. The proposed method can be used to explore which variables are, and which are not, medically related to the baseline variables. Full article
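A logistic-regression status forecaster evaluated by AUC, as described above, can be sketched with plain numpy. This uses synthetic data with placeholder features, not the clinical monitoring variables from the study, and a simple gradient-descent fit:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (bias in column 0)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def auc(y, scores):
    # Rank-based AUC: probability that a positive outranks a negative.
    pos, neg = scores[y == 1], scores[y == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

# Synthetic "monitoring variables": status depends on two of them.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(300) > 0).astype(float)

w = fit_logistic(X, y)
score = auc(y, predict_proba(X, w))
```

A subgroup analysis would repeat the fit within strata of baseline variables and compare the resulting coefficient vectors `w` and AUC scores across subgroups.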
19 pages, 6625 KiB  
Article
Fault Diagnosis Method Based on AUPLMD and RTSMWPE for a Reciprocating Compressor Valve
by Meiping Song, Jindong Wang, Haiyang Zhao and Xulei Wang
Entropy 2022, 24(10), 1480; https://doi.org/10.3390/e24101480 - 17 Oct 2022
Cited by 1 | Viewed by 1465
Abstract
In order to effectively extract the key feature information hidden in the original vibration signal, this paper proposes a fault feature extraction method combining adaptive uniform phase local mean decomposition (AUPLMD) and refined time-shift multiscale weighted permutation entropy (RTSMWPE). The proposed method focuses on two aspects: the serious modal aliasing problem of local mean decomposition (LMD), and the dependence of permutation entropy on the length of the original time series. First, a sine wave with a uniform phase is added as a masking signal, its amplitude is selected adaptively, the optimal decomposition result is screened by orthogonality, and the signal is reconstructed based on the kurtosis value to remove noise. Second, in the RTSMWPE method, fault feature extraction is realized by considering the signal amplitude information and replacing the traditional coarse-grained multiscale approach with a time-shifted multiscale one. Finally, the proposed method is applied to experimental data from a reciprocating compressor valve; the analysis results demonstrate its effectiveness. Full article
17 pages, 4679 KiB  
Article
An Entropy-Based Combined Behavior Model for Crowd Evacuation
by Xiaowei Chen and Jian Wang
Entropy 2022, 24(10), 1479; https://doi.org/10.3390/e24101479 - 17 Oct 2022
Viewed by 1558
Abstract
Crowd evacuation has gained increasing attention due to its importance in the day-to-day management of public areas. During an emergency evacuation, a variety of factors need to be considered when designing a practical evacuation model. For example, relatives tend to move together or look for each other. These behaviors undoubtedly aggravate the degree of chaos in an evacuating crowd and make evacuations hard to model. In this paper, we propose an entropy-based combined behavior model to better analyze the influence of these behaviors on the evacuation process. Specifically, we utilize the Boltzmann entropy to quantitatively denote the degree of chaos in the crowd. The evacuation behavior of heterogeneous people is simulated through a series of behavior rules. Moreover, we devise a velocity adjustment method to ensure the evacuees follow a more orderly direction. Extensive simulation results demonstrate the effectiveness of the proposed evacuation model and provide useful insights into the design of practical evacuation strategies. Full article
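An entropy measure of crowd disorder of the kind the abstract describes can be sketched as follows. This is an illustrative assumption, not the paper's exact formula: it bins agents' movement directions and computes the entropy of that distribution, which is zero for a perfectly ordered crowd and maximal for uniformly random headings:

```python
import numpy as np

def crowd_entropy(headings, n_bins=8):
    """Entropy (in nats) of the distribution of movement directions,
    used as a Boltzmann-style measure of crowd chaos: 0 when everyone
    moves the same way, log(n_bins) when directions are uniform."""
    counts, _ = np.histogram(headings, bins=n_bins, range=(-np.pi, np.pi))
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
ordered = np.full(500, 0.1)                    # everyone heads to the exit
chaotic = rng.uniform(-np.pi, np.pi, 500)      # panic: random directions

e_ordered = crowd_entropy(ordered)
e_chaotic = crowd_entropy(chaotic)
```

A velocity adjustment rule would then be judged effective if it drives this entropy down over the course of the simulated evacuation.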
18 pages, 381 KiB  
Article
An Overview on Irreversible Port-Hamiltonian Systems
by Hector Ramirez and Yann Le Gorrec
Entropy 2022, 24(10), 1478; https://doi.org/10.3390/e24101478 - 17 Oct 2022
Cited by 4 | Viewed by 1752
Abstract
A comprehensive overview of the irreversible port-Hamiltonian system formulation for finite and infinite dimensional systems defined on 1D spatial domains is provided in a unified manner. The irreversible port-Hamiltonian formulation extends classical port-Hamiltonian formulations to cope with irreversible thermodynamic systems, both finite and infinite dimensional. This is achieved by explicitly including the coupling between irreversible mechanical and thermal phenomena, with the thermal domain acting as an energy-preserving and entropy-increasing operator. As in Hamiltonian systems, this operator is skew-symmetric, guaranteeing energy conservation. Unlike in Hamiltonian systems, the operator depends on co-state variables and is, hence, a nonlinear function of the gradient of the total energy. This is what allows encoding the second law as a structural property of irreversible port-Hamiltonian systems. The formalism encompasses coupled thermo-mechanical systems and, as a particular case, purely reversible or conservative systems. This appears clearly when the state space is split such that the entropy coordinate is separated from the other state variables. Several examples illustrate the formalism, both for finite and infinite dimensional systems, and a discussion of ongoing and future studies is provided. Full article
(This article belongs to the Special Issue Geometric Structure of Thermodynamics: Theory and Applications)
14 pages, 1854 KiB  
Article
Decoupled Early Time Series Classification Using Varied-Length Feature Augmentation and Gradient Projection Technique
by Huiling Chen, Ye Zhang, Aosheng Tian, Yi Hou, Chao Ma and Shilin Zhou
Entropy 2022, 24(10), 1477; https://doi.org/10.3390/e24101477 - 17 Oct 2022
Viewed by 1568
Abstract
Early time series classification (ETSC) is crucial for real-world time-sensitive applications. The task aims to classify time series data using the fewest timestamps possible at the desired accuracy. Early methods used fixed-length time series to train deep models and then quit the classification process according to specific exiting rules. However, these methods may not adapt to the length variation of flow data in ETSC. Recent advances have proposed end-to-end frameworks, which leverage recurrent neural networks to handle the varied-length problem and exiting subnets for early quitting. Unfortunately, the conflict between the classification and early-exiting objectives is not fully considered. To handle these problems, we decouple the ETSC task into a varied-length TSC task and an early-exiting task. First, to enhance the adaptive capacity of the classification subnets to data length variation, a feature augmentation module based on random-length truncation is proposed. Then, to handle the conflict between classification and early exiting, the gradients of these two tasks are projected into a unified direction. Experimental results on 12 public datasets demonstrate the promising performance of our proposed method. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
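The gradient-projection step can be illustrated with a small numpy sketch. The abstract does not give the exact projection rule, so this follows the widely used PCGrad-style formulation; the function name and example gradients are illustrative:

```python
import numpy as np

def project_conflicting(g_cls, g_exit):
    """PCGrad-style gradient surgery: if the classification and
    early-exiting gradients conflict (negative inner product), project
    each onto the normal plane of the other before summing, so the
    combined update degrades neither objective."""
    g1, g2 = g_cls.astype(float), g_exit.astype(float)
    if g1 @ g2 < 0:
        g1 = g_cls - (g_cls @ g_exit) / (g_exit @ g_exit) * g_exit
        g2 = g_exit - (g_exit @ g_cls) / (g_cls @ g_cls) * g_cls
    return g1 + g2

# Two conflicting task gradients (their inner product is negative).
g = project_conflicting(np.array([1.0, 1.0]), np.array([-1.0, 0.5]))
```

After projection, the unified direction `g` has a non-negative inner product with both original task gradients, so one step along it hurts neither the classification loss nor the early-exiting loss to first order.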
16 pages, 910 KiB  
Article
Modelling Worldviews as Stable Metabolisms
by Tomas Veloz and Pedro Maldonado
Entropy 2022, 24(10), 1476; https://doi.org/10.3390/e24101476 - 17 Oct 2022
Cited by 1 | Viewed by 1462
Abstract
The emergence and evolution of worldviews is a complex phenomenon that requires strong and rigorous scientific attention in our hyperconnected world. On the one hand, cognitive theories have proposed reasonable frameworks but have not reached general modelling frameworks in which predictions can be tested. On the other hand, machine-learning-based applications perform extremely well at predicting the outcomes of worldviews, but they rely on a set of optimized weights in a neural network that does not comply with a well-founded cognitive framework. In this article, we propose a formal approach to investigating the establishment of, and change in, worldviews by recalling that the realm of ideas, where opinions, perspectives and worldviews are shaped, resembles, in many ways, a metabolic system. We propose a general modelling of worldviews based on reaction networks, and a specific starting model with species representing belief attitudes and species representing belief change triggers. These two kinds of species combine and modify their structures through the reactions. We show that chemical organization theory combined with dynamical simulations can illustrate various interesting features of how worldviews emerge, are maintained, and change. In particular, worldviews correspond to chemical organizations, meaning closed and self-producing structures, which are generally maintained by feedback loops occurring among the beliefs and triggers in the organization. We also show how, by inducing an external input of belief change triggers, it is possible to change from one worldview to another in an irreversible way. We illustrate our approach with a simple example reflecting the formation of an opinion and a belief attitude about a theme, and then show a more complex scenario containing opinions and belief attitudes about two possible themes. Full article
(This article belongs to the Special Issue Complexity and Evolution)