Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

7 pages, 483 KiB  
Review
Advances in Atomtronics
by Ron A. Pepino
Entropy 2021, 23(5), 534; https://doi.org/10.3390/e23050534 - 27 Apr 2021
Cited by 5 | Viewed by 1935
Abstract
Atomtronics is a relatively new subfield of atomic physics that aims to realize the device behavior of electronic components in ultracold atom-optical systems. The fact that these systems are coherent makes them particularly interesting since, in addition to current, one can impart quantum states onto the current carriers themselves or perhaps perform quantum computational operations on them. After reviewing the fundamental ideas of this subfield, we report on the theoretical and experimental progress made towards developing externally driven and closed-loop devices. The functionality and potential applications of these atom analogs to electronic and spintronic systems are also discussed. Full article
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))

19 pages, 1770 KiB  
Article
Unifying Large- and Small-Scale Theories of Coordination
by J. A. Scott Kelso
Entropy 2021, 23(5), 537; https://doi.org/10.3390/e23050537 - 27 Apr 2021
Cited by 29 | Viewed by 6442
Abstract
Coordination is a ubiquitous feature of all living things. It occurs by virtue of informational coupling among component parts and processes and can be quite specific (as when cells in the brain resonate to signals in the environment) or nonspecific (as when simple diffusion creates a source–sink dynamic for gene networks). Existing theoretical models of coordination—from bacteria to brains to social groups—typically focus on systems with very large numbers of elements (N→∞) or systems with only a few elements coupled together (typically N = 2). Though sharing a common inspiration in Nature’s propensity to generate dynamic patterns, both approaches have proceeded largely independently of each other. Ideally, one would like a theory that applies to phenomena observed on all scales. Recent experimental research by Mengsen Zhang and colleagues on intermediate-sized ensembles (in between the few and the many) proves to be the key to uniting large- and small-scale theories of coordination. Disorder–order transitions, multistability, order–order phase transitions, and especially metastability are shown to figure prominently on multiple levels of description, suggestive of a basic Coordination Dynamics that operates on all scales. This unified coordination dynamics turns out to be a marriage of two well-known models of large- and small-scale coordination: the former based on statistical mechanics (Kuramoto) and the latter based on the concepts of Synergetics and nonlinear dynamics (extended Haken–Kelso–Bunz or HKB). We show that models of the many and the few, previously quite unconnected, are thereby unified in a single formulation. The research has led to novel topological methods to handle the higher-dimensional dynamics of coordination in complex systems and has implications not only for understanding coordination but also for the design of (biorhythm inspired) computers. Full article
(This article belongs to the Special Issue Information and Self-Organization II)

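The large-scale side of this marriage, the Kuramoto mean-field model, is compact enough to sketch numerically; the ensemble size, coupling strength, and integration scheme below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kuramoto(theta, omega, K):
    """Right-hand side of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    return omega + (K / len(theta)) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)

rng = np.random.default_rng(0)
N, K, dt, steps = 8, 1.5, 0.01, 5000   # intermediate-sized ensemble, illustrative values
omega = rng.normal(0.0, 0.5, N)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

for _ in range(steps):                 # simple forward-Euler integration
    theta += dt * kuramoto(theta, omega, K)

r = np.abs(np.mean(np.exp(1j * theta)))  # order parameter: 0 = disorder, 1 = full sync
print(f"order parameter r = {r:.3f}")
```
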
17 pages, 1122 KiB  
Article
α-Geodesical Skew Divergence
by Masanari Kimura and Hideitsu Hino
Entropy 2021, 23(5), 528; https://doi.org/10.3390/e23050528 - 25 Apr 2021
Cited by 4 | Viewed by 2463
Abstract
The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter λ, with the other distribution. Such a divergence approximates the KL divergence without requiring the target distribution to be absolutely continuous with respect to the source distribution. In this paper, an information geometric generalization of the skew divergence called the α-geodesical skew divergence is proposed, and its properties are studied. Full article

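A minimal numerical sketch of the plain skew divergence may help; the mixing convention used below (smoothing the second argument with the first) is a common choice and an assumption here, as is the toy example.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence for discrete distributions; 0*log(0/q) := 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, lam):
    """Skew divergence: KL of p against q smoothed with p (one common convention).
    lam -> 1 recovers KL(p||q); lam -> 0 gives 0. The smoothed argument
    (1-lam)*p + lam*q keeps mass wherever p does, so absolute continuity
    of q with respect to p is not required."""
    return kl(p, (1 - lam) * p + lam * q)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.2, 0.8])   # q has no mass on the first outcome: KL(p||q) diverges
print(skew_divergence(p, q, 0.99))  # the skew divergence stays finite
```
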
10 pages, 2520 KiB  
Article
Information-Efficient, Off-Center Sampling Results in Improved Precision in 3D Single-Particle Tracking Microscopy
by Chen Zhang and Kevin Welsher
Entropy 2021, 23(5), 498; https://doi.org/10.3390/e23050498 - 22 Apr 2021
Cited by 8 | Viewed by 2690
Abstract
In this work, we present a 3D single-particle tracking system that can apply tailored sampling patterns to selectively extract photons that yield the most information for particle localization. We demonstrate that off-center sampling at locations predicted by Fisher information utilizes photons most efficiently. When performing localization in a single dimension, optimized off-center sampling patterns gave doubled precision compared to uniform sampling. A ~20% increase in precision compared to uniform sampling can be achieved when a similar off-center pattern is used in 3D localization. Here, we systematically investigated the photon efficiency of different emission patterns in a diffraction-limited system and achieved higher precision than uniform sampling. The ability to maximize information from the limited number of photons demonstrated here is critical for particle tracking applications in biological samples, where photons may be limited. Full article
(This article belongs to the Special Issue Recent Advances in Single-Particle Tracking: Experiment and Analysis)

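A toy one-dimensional calculation, under an assumed Gaussian-beam Poisson-detection model rather than the paper's actual optical system, shows why off-center sampling is information-efficient:

```python
import numpy as np

# Toy 1D model (an illustrative assumption, not the paper's optics):
# photon counts are Poisson with mean mu(d) = A * exp(-d^2 / (2 w^2)),
# where d is the beam-center offset from the particle and w the beam waist.
# For a Poisson observable, the Fisher information about the particle
# position contributed by sampling at offset d is I(d) = mu'(d)^2 / mu(d).
A, w = 100.0, 0.5                       # illustrative brightness and beam waist
d = np.linspace(0.0, 2.0, 2001)         # sampling offsets to scan
mu = A * np.exp(-d**2 / (2 * w**2))
dmu = -A * d / w**2 * np.exp(-d**2 / (2 * w**2))
info = dmu**2 / mu

d_opt = d[np.argmax(info)]
print(f"information peaks at offset {d_opt:.3f} (analytically sqrt(2)*w = {np.sqrt(2)*w:.3f})")
print(f"information at beam center: {info[0]:.3f}")  # zero: an on-center sample carries
                                                     # no first-order position information
```
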
13 pages, 3015 KiB  
Article
Novel Features for Binary Time Series Based on Branch Length Similarity Entropy
by Sang-Hee Lee and Cheol-Min Park
Entropy 2021, 23(4), 480; https://doi.org/10.3390/e23040480 - 18 Apr 2021
Cited by 2 | Viewed by 2442
Abstract
Branch length similarity (BLS) entropy is defined in a network consisting of a single node and branches. In this study, we mapped the binary time-series signal to the circumference of the time circle so that the BLS entropy can be calculated for the binary time-series. We obtained the BLS entropy values for “1” signals on the time circle. This set of values constitutes the BLS entropy profile. We selected the local maximum (minimum) point, slope, and inflection point of the entropy profile as the characteristic features of the binary time-series and investigated their significance. The local maximum (minimum) point indicates the time at which the rate of change in the signal density becomes zero. The slope and inflection points correspond to the degree of change in the signal density and the time at which the signal density changes occur, respectively. Moreover, we show that the characteristic features can be widely used in binary time-series analysis by characterizing the movement trajectory of Caenorhabditis elegans. We also mention the problems that need to be explored mathematically in relation to the features and propose candidates for additional features based on the BLS entropy profile. Full article

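A rough sketch of the construction described above; the chord-length choice on the unit circle and the entropy normalization are assumptions based on common BLS conventions, not details taken from the paper.

```python
import numpy as np

def bls_entropy(lengths):
    """Branch length similarity entropy of a single-node network:
    p_i = L_i / sum(L), H = -sum p_i log p_i / log(n), normalized to [0, 1].
    This normalization is a common convention and an assumption here."""
    L = np.asarray(lengths, dtype=float)
    p = L / L.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

def bls_profile(binary_series):
    """Map '1' events onto the unit time circle and compute, for each event
    taken as the node, the BLS entropy of the chord lengths to all other events."""
    t = np.flatnonzero(binary_series)               # indices of the "1" signals
    angles = 2 * np.pi * t / len(binary_series)     # positions on the time circle
    profile = []
    for a0 in angles:
        chords = 2 * np.abs(np.sin((angles - a0) / 2))  # chord distances on the unit circle
        profile.append(bls_entropy(chords[chords > 0]))
    return np.array(profile)

x = (np.random.default_rng(1).random(200) < 0.3).astype(int)  # toy binary time series
print(bls_profile(x)[:5])
```
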
20 pages, 879 KiB  
Article
Entanglement and Non-Locality in Quantum Protocols with Identical Particles
by Fabio Benatti, Roberto Floreanini and Ugo Marzolino
Entropy 2021, 23(4), 479; https://doi.org/10.3390/e23040479 - 18 Apr 2021
Cited by 5 | Viewed by 2809
Abstract
We study the role of entanglement and non-locality in quantum protocols that make use of systems of identical particles. Unlike in the case of distinguishable particles, the notions of entanglement and non-locality for systems whose constituents cannot be distinguished and singly addressed are still debated. We clarify why the only approach that avoids incongruities and paradoxes is the one based on the second quantization formalism, whereby it is the entanglement of the modes that can be populated by the particles that really matters and not the particles themselves. Indeed, by means of a metrological protocol and a teleportation protocol, we show that inconsistencies arise in formulations that force entanglement and non-locality to be properties of the identical particles rather than of the modes they can occupy. The reason resides in the fact that orthogonal modes can always be addressed while identical particles cannot. Full article
(This article belongs to the Special Issue Quantum Information and Quantum Optics)
17 pages, 3153 KiB  
Article
Low-Frequency Seismic Noise Properties in the Japanese Islands
by Alexey Lyubushin
Entropy 2021, 23(4), 474; https://doi.org/10.3390/e23040474 - 16 Apr 2021
Cited by 9 | Viewed by 4279
Abstract
The records of seismic noise in Japan for the period of 1997–2020, which includes the Tohoku seismic catastrophe on 11 March 2011, are considered. The following properties of noise are analyzed: the wavelet-based Donoho–Johnston index, the singularity spectrum support width, and the entropy of the wavelet coefficients. The question of whether precursors of strong earthquakes can be formulated on their basis is investigated. Attention is paid, in the time interval after the Tohoku mega-earthquake, to the trends in the mean properties of low-frequency seismic noise, which reflect the constant simplification of the statistical structure of seismic vibrations. Estimates of two-dimensional probability densities of extreme values are presented, which highlight the places in which extreme values of seismic noise properties are most often realized. The estimates of the probability densities of extreme values coincide with each other and have a maximum in the region 30° N ≤ Lat ≤ 34° N, 136° E ≤ Lon ≤ 140° E. The main conclusion of the conducted studies is that the preparation of a strong earthquake is accompanied by a simplification of the structure of seismic noise. It is shown that bursts of coherence between the time series of the day length and the noise properties within an annual time window precede bursts of released seismic energy. The lag in the release of seismic energy relative to bursts of coherence is about 1.5 years, which can be used to declare a time interval of high seismic hazard after reaching the peak of coherence. Full article
(This article belongs to the Special Issue Complex Systems Time Series Analysis and Modeling for Geoscience)

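One standard recipe for the "entropy of the wavelet coefficients", sketched below with PyWavelets, may make this noise property concrete; the wavelet, decomposition level, and energy-based normalization are generic choices and not necessarily the paper's estimator.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(x, wavelet="db4", level=5):
    """Shannon entropy of the normalized energy distribution across wavelet
    decomposition levels -- one common estimator of the 'entropy of wavelet
    coefficients' (a generic recipe; the paper's exact estimator may differ)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
noise = rng.normal(size=4096)                           # broadband: energy spread out
tone = np.sin(2 * np.pi * 5 * np.arange(4096) / 4096)   # narrowband: energy concentrated
print(wavelet_entropy(noise), wavelet_entropy(tone))    # noise has the higher entropy
```
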
28 pages, 1106 KiB  
Article
On a Variational Definition for the Jensen-Shannon Symmetrization of Distances Based on the Information Radius
by Frank Nielsen
Entropy 2021, 23(4), 464; https://doi.org/10.3390/e23040464 - 14 Apr 2021
Cited by 17 | Viewed by 7944
Abstract
We generalize the Jensen-Shannon divergence and the Jensen-Shannon diversity index by considering a variational definition with respect to a generic mean, thereby extending the notion of Sibson’s information radius. The variational definition applies to an arbitrary distance and yields a new way to define a Jensen-Shannon symmetrization of distances. When the variational optimization is further constrained to belong to prescribed families of probability measures, we get relative Jensen-Shannon divergences and their equivalent Jensen-Shannon symmetrizations of distances that generalize the concept of information projections. Finally, we touch upon applications of these variational Jensen-Shannon divergences and diversity indices to clustering and quantization tasks of probability measures, including statistical mixtures. Full article

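For orientation, the classical identity being generalized (the paper replaces the arithmetic mean by a generic abstract mean) is the variational form of the Jensen-Shannon divergence:

```latex
\operatorname{JSD}(p, q)
  = \min_{c}\ \tfrac{1}{2}\operatorname{KL}(p \,\|\, c)
            + \tfrac{1}{2}\operatorname{KL}(q \,\|\, c),
\qquad
c^{\star} = \frac{p + q}{2},
```

where the minimizing distribution is the arithmetic mean of the two arguments; this minimum value is exactly Sibson's information radius for two distributions.
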
14 pages, 1344 KiB  
Article
Federated Quantum Machine Learning
by Samuel Yen-Chi Chen and Shinjae Yoo
Entropy 2021, 23(4), 460; https://doi.org/10.3390/e23040460 - 13 Apr 2021
Cited by 48 | Viewed by 8060
Abstract
Distributed training across several quantum computers could significantly improve the training time and, if we could share the learned model rather than the data, it could potentially improve data privacy, as the training would happen where the data are located. One potential scheme to achieve this property is federated learning (FL), which consists of several clients or local nodes learning on their own data and a central node aggregating the models collected from those local nodes. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to pure quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme demonstrated almost the same level of trained model accuracy while providing significantly faster distributed training. It demonstrates a promising future research direction for scaling and privacy aspects. Full article
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))

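The central-node aggregation can be sketched classically; the weighted averaging below is the standard federated-averaging rule, and the parameter names (including the QNN angles) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Central-node aggregation: average each parameter array across clients,
    weighted by the number of local training samples (standard FedAvg rule).
    client_weights: list of dicts mapping parameter name -> np.ndarray;
    for a hybrid model these would include the QNN circuit parameters."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Toy example: three clients share model parameters, never raw data.
clients = [
    {"qnn_theta": np.array([0.1, 0.2]), "head_w": np.array([1.0])},
    {"qnn_theta": np.array([0.3, 0.0]), "head_w": np.array([0.0])},
    {"qnn_theta": np.array([0.2, 0.4]), "head_w": np.array([2.0])},
]
sizes = [100, 50, 50]
print(federated_average(clients, sizes))
```
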
40 pages, 1007 KiB  
Article
Phase Transitions in Transfer Learning for High-Dimensional Perceptrons
by Oussama Dhifallah and Yue M. Lu
Entropy 2021, 23(4), 400; https://doi.org/10.3390/e23040400 - 27 Mar 2021
Cited by 5 | Viewed by 3441
Abstract
Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In this paper, we present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks. Despite the simplicity of our model, it reproduces several key phenomena observed in practice. Specifically, our asymptotic analysis reveals a phase transition from negative transfer to positive transfer as the similarity of the two tasks moves past a well-defined threshold. Full article

14 pages, 4567 KiB  
Article
Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media
by Ramya Akula and Ivan Garibay
Entropy 2021, 23(4), 394; https://doi.org/10.3390/e23040394 - 26 Mar 2021
Cited by 28 | Viewed by 15966
Abstract
With the online presence of more than half the world population, social media plays a very important role in the lives of individuals and businesses alike. Social media enables businesses to advertise their products, build brand value, and reach out to their customers. To leverage these social media platforms, it is important for businesses to process customer feedback in the form of posts and tweets. Sentiment analysis is the process of identifying the emotion, either positive, negative or neutral, associated with these social media texts. The presence of sarcasm in texts is the main hindrance in the performance of sentiment analysis. Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something that is very unpleasant, with an intention to insult or ridicule. Inherent ambiguity in sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text. We show the effectiveness of our approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media. Models trained using our proposed approach are easily interpretable and enable identification of the sarcastic cues in the input text that contribute to the final classification score. We visualize the learned attention weights on a few sample input texts to showcase the effectiveness and interpretability of our model. Full article

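At the core of such a model is scaled dot-product attention; this single-head NumPy sketch (illustrative shapes and random weights, not the paper's trained model) shows the weight matrix one would visualize to locate sarcastic cue-words.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model) token embeddings. Returns the attended values and
    the (seq_len, seq_len) attention matrix -- the quantity one visualizes to
    find cue-words. A multi-head version runs several (Wq, Wk, Wv) triples
    in parallel and concatenates the outputs."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8          # illustrative sizes
X = rng.normal(size=(seq_len, d_model))      # stand-in for word embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(attn.round(2))                          # each row sums to 1: attention over tokens
```
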
17 pages, 1936 KiB  
Article
Causality and Information Transfer Between the Solar Wind and the Magnetosphere–Ionosphere System
by Pouya Manshour, Georgios Balasis, Giuseppe Consolini, Constantinos Papadimitriou and Milan Paluš
Entropy 2021, 23(4), 390; https://doi.org/10.3390/e23040390 - 25 Mar 2021
Cited by 20 | Viewed by 3113
Abstract
An information-theoretic approach for detecting causality and information transfer is used to identify interactions of solar activity and interplanetary medium conditions with the Earth’s magnetosphere–ionosphere systems. A causal information transfer from the solar wind parameters to geomagnetic indices is detected. The vertical component of the interplanetary magnetic field (Bz) influences the auroral electrojet (AE) index with an information transfer delay of 10 min and the geomagnetic disturbances at mid-latitudes measured by the symmetric field in the H component (SYM-H) index with a delay of about 30 min. Using a properly conditioned causality measure, no causal link between AE and SYM-H, or between magnetospheric substorms and magnetic storms can be detected. The observed causal relations can be described as linear time-delayed information transfer. Full article

14 pages, 326 KiB  
Article
On the Classical Capacity of General Quantum Gaussian Measurement
by Alexander Holevo
Entropy 2021, 23(3), 377; https://doi.org/10.3390/e23030377 - 21 Mar 2021
Cited by 7 | Viewed by 2396
Abstract
In this paper, we consider the classical capacity problem for Gaussian measurement channels. We establish Gaussianity of the average state of the optimal ensemble in the general case and discuss the Hypothesis of Gaussian Maximizers concerning the structure of the ensemble. Then, we consider the case of one mode in detail, including the dual problem of accessible information of a Gaussian ensemble. Our findings are relevant to practical situations in quantum communications where the receiver is Gaussian (say, a general-dyne detection) and concatenation of the Gaussian channel and the receiver can be considered as one Gaussian measurement channel. Our efforts in this and preceding papers are then aimed at establishing full Gaussianity of the optimal ensemble (usually taken as an assumption) in such schemes. Full article
(This article belongs to the Special Issue Quantum Communication, Quantum Radar, and Quantum Cipher)

11 pages, 1340 KiB  
Article
Spectral Ranking of Causal Influence in Complex Systems
by Errol Zalmijn, Tom Heskes and Tom Claassen
Entropy 2021, 23(3), 369; https://doi.org/10.3390/e23030369 - 20 Mar 2021
Cited by 1 | Viewed by 3769
Abstract
Similar to natural complex systems, such as the Earth’s climate or a living cell, semiconductor lithography systems are characterized by nonlinear dynamics across more than a dozen orders of magnitude in space and time. Thousands of sensors measure relevant process variables at appropriate sampling rates, to provide time series as primary sources for system diagnostics. However, the high dimensionality, non-linearity, and non-stationarity of the data make it challenging to diagnose rare or new system issues efficiently yet accurately using model-based approaches alone. To reliably narrow down the causal search space, we validate a ranking algorithm that applies transfer entropy for bivariate interaction analysis of a system’s multivariate time series to obtain a weighted directed graph, and graph eigenvector centrality to identify the system’s most important sources of original information or causal influence. The results suggest that this approach robustly identifies the true drivers or causes of a complex system’s deviant behavior, even when its reconstructed information transfer network includes redundant edges. Full article

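The pipeline can be sketched end to end; the histogram transfer-entropy estimator, the toy hub system, and the choice to run eigenvector centrality on the edge-reversed graph are simplifying assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import networkx as nx

def transfer_entropy(x, y, bins=4):
    """Crude histogram estimator of lag-1 transfer entropy TE(x -> y):
    TE = sum p(y1, y0, x0) * log[ p(y1 | y0, x0) / p(y1 | y0) ] >= 0.
    For illustration only; the paper's estimator is more careful."""
    edges = lambda v: np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
    xs, ys = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    p, _ = np.histogramdd(np.stack([ys[1:], ys[:-1], xs[:-1]], axis=1),
                          bins=(bins,) * 3, range=[(-0.5, bins - 0.5)] * 3)
    p /= p.sum()
    p_ab, p_bc, p_b = p.sum(axis=2), p.sum(axis=0), p.sum(axis=(0, 2))
    m = p > 0
    num = (p * p_b[None, :, None])[m]
    den = (p_bc[None, :, :] * p_ab[:, :, None])[m]
    return float(np.sum(p[m] * np.log(num / den)))

# Toy system: a hub s drives u and v; any u-v links found are redundant.
rng = np.random.default_rng(0)
T = 20_000
s = rng.normal(size=T)
u = 0.9 * np.roll(s, 1) + 0.5 * rng.normal(size=T)
v = 0.9 * np.roll(s, 1) + 0.5 * rng.normal(size=T)
series = {"s": s, "u": u, "v": v}

G = nx.DiGraph()
for a in series:
    for b in series:
        if a != b:
            G.add_edge(a, b, weight=transfer_entropy(series[a], series[b]))

# Eigenvector centrality on the edge-reversed graph scores nodes that *send*
# information to important targets (one reasonable design choice for ranking sources).
scores = nx.eigenvector_centrality(G.reverse(), weight="weight", max_iter=5000)
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # s should rank first
```
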
18 pages, 3275 KiB  
Article
Stirling Refrigerating Machine Modeling Using Schmidt and Finite Physical Dimensions Thermodynamic Models: A Comparison with Experiments
by Cătălina Dobre, Lavinia Grosu, Alexandru Dobrovicescu, Georgiana Chişiu and Mihaela Constantin
Entropy 2021, 23(3), 368; https://doi.org/10.3390/e23030368 - 19 Mar 2021
Cited by 6 | Viewed by 2877
Abstract
The purpose of the study is to show that two simple models that take into account only the irreversibility due to the temperature difference in the heat exchangers and to imperfect regeneration are able to indicate refrigerating machine behavior. In the present paper, the finite physical dimensions thermodynamics (FPDT) method and 0-D modeling using the Schmidt model with imperfect regeneration were applied in the study of a β-type Stirling refrigeration machine. The 0-D modeling is improved by including the irreversibility caused by imperfect regeneration and by the finite temperature difference between the gas and the heat exchanger walls. A flowchart of the Stirling refrigerator exergy balance is presented to show the internal and external irreversibilities. It is found that the irreversibility at the regenerator level is more important than that at the heat exchanger level. The energies exchanged by the working gas are expressed in terms of the practical parameters needed by the engineer throughout a project. The results of the two thermodynamic models are compared with the experimental results, which leads to validation of the proposed FPDT model for the functional and constructive parameters of the studied refrigerating machine. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications II)

29 pages, 1389 KiB  
Article
Mechanism Integrated Information
by Leonardo S. Barbosa, William Marshall, Larissa Albantakis and Giulio Tononi
Entropy 2021, 23(3), 362; https://doi.org/10.3390/e23030362 - 18 Mar 2021
Cited by 23 | Viewed by 5391
Abstract
The Integrated Information Theory (IIT) of consciousness starts from essential phenomenological properties, which are then translated into postulates that any physical system must satisfy in order to specify the physical substrate of consciousness. We recently introduced an information measure (Barbosa et al., 2020) that captures three postulates of IIT—existence, intrinsicality and information—and is unique. Here we show that the new measure also satisfies the remaining postulates of IIT—integration and exclusion—and create the framework that identifies maximally irreducible mechanisms. These mechanisms can then form maximally irreducible systems, which in turn will specify the physical substrate of conscious experience. Full article
(This article belongs to the Special Issue Integrated Information Theory and Consciousness)

14 pages, 325 KiB  
Article
Critical Comparison of MaxCal and Other Stochastic Modeling Approaches in Analysis of Gene Networks
by Taylor Firman, Jonathan Huihui, Austin R. Clark and Kingshuk Ghosh
Entropy 2021, 23(3), 357; https://doi.org/10.3390/e23030357 - 17 Mar 2021
Cited by 1 | Viewed by 2245
Abstract
Learning the underlying details of a gene network with feedback is critical in designing new synthetic circuits. Yet, quantitative characterization of these circuits remains limited. This is due to the fact that experiments can only measure partial information from which the details of the circuit must be inferred. One potentially useful avenue is to harness hidden information from single-cell stochastic gene expression time trajectories measured for long periods of time—recorded at frequent intervals—over multiple cells. This raises the feasibility vs. accuracy dilemma while deciding between different models of mining these stochastic trajectories. We demonstrate that inference based on the Maximum Caliber (MaxCal) principle is the method of choice by critically evaluating its computational efficiency and accuracy against two other typical modeling approaches: (i) a detailed model (DM) with explicit consideration of multiple molecules including protein-promoter interaction, and (ii) a coarse-grain model (CGM) using Hill type functions to model feedback. MaxCal provides a reasonably accurate model while being significantly more computationally efficient than DM and CGM. Furthermore, MaxCal requires minimal assumptions since it is a top-down approach and allows systematic model improvement by including constraints of higher order, in contrast to traditional bottom-up approaches that require more parameters or ad hoc assumptions. Thus, based on efficiency, accuracy, and ability to build minimal models, we propose MaxCal as a superior alternative to traditional approaches (DM, CGM) when inferring underlying details of gene circuits with feedback from limited data. Full article
(This article belongs to the Special Issue Information Theory and Biology: Seeking General Principles)
16 pages, 5950 KiB  
Review
Beating Standard Quantum Limit with Weak Measurement
by Geng Chen, Peng Yin, Wen-Hao Zhang, Gong-Chu Li, Chuan-Feng Li and Guang-Can Guo
Entropy 2021, 23(3), 354; https://doi.org/10.3390/e23030354 - 16 Mar 2021
Cited by 11 | Viewed by 3026
Abstract
Weak measurements have been under intensive investigation in both experiment and theory. Numerous experiments have indicated that the amplified meter shift is produced by the post-selection, yielding an improved precision compared to conventional methods. However, this amplification effect comes at the cost of a reduced rate of acquiring data, which leads to increasing uncertainty in determining the level of the meter shift. From this point of view, a number of theoretical works have suggested that weak measurements cannot improve the precision, or may even damage the metrology information, due to the post-selection. In this review, we give a comprehensive analysis of weak measurements to justify their positive effect on improving measurement precision. As a further step, we introduce two modified weak measurement protocols to boost the precision beyond the standard quantum limit. Compared to previous works beating the standard quantum limit, these protocols are free of using entangled or squeezed states. The achieved precision outperforms that of the conventional method by two orders of magnitude and attains a practical Heisenberg scaling up to n = 10^6 photons. Full article

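For reference, the two precision scalings at stake can be written as (standard quantum-metrology notation, not taken from the paper's derivation):

```latex
\Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{n}}
\quad\text{(shot noise, standard quantum limit)},
\qquad
\Delta\phi_{\mathrm{HL}} \sim \frac{1}{n}
\quad\text{(Heisenberg scaling)},
```

where n is the number of photons; the gap between the two scalings grows as the square root of n.
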
29 pages, 535 KiB  
Article
Crowded Trades, Market Clustering, and Price Instability
by Marc van Kralingen, Diego Garlaschelli, Karolina Scholtus and Iman van Lelyveld
Entropy 2021, 23(3), 336; https://doi.org/10.3390/e23030336 - 12 Mar 2021
Cited by 3 | Viewed by 3820
Abstract
Crowded trades by similarly trading peers influence the dynamics of asset prices, possibly creating systemic risk. We propose a market clustering measure using granular trading data. For each stock, the clustering measure captures the degree of trading overlap among any two investors in that stock, based on a comparison with the expected crowding in a null model where trades are maximally random while still respecting the empirical heterogeneity of both stocks and investors. We investigate the effect of crowded trades on stock price stability and present evidence that market clustering has a causal effect on the properties of the tails of the stock return distribution, particularly the positive tail, even after controlling for commonly considered risk drivers. Reduced investor pool diversity could thus negatively affect stock price stability. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Economics, Finance, and Management)

8 pages, 1014 KiB  
Article
Quantum Speed Limit and Divisibility of the Dynamical Map
by Jose Teittinen and Sabrina Maniscalco
Entropy 2021, 23(3), 331; https://doi.org/10.3390/e23030331 - 11 Mar 2021
Cited by 9 | Viewed by 2366
Abstract
The quantum speed limit (QSL) is the theoretical lower limit of the time for a quantum system to evolve from a given state to another one. Interestingly, it has been shown that non-Markovianity can be used to speed up the dynamics and to lower the QSL time, although this behaviour is not universal. In this paper, we carry the investigation of the connection between the QSL and non-Markovianity further by looking at the effects of P- and CP-divisibility of the dynamical map on the quantum speed limit. We show that the speed-up can also be observed under P- and CP-divisible dynamics, and that the speed-up is not necessarily tied to the transition from P-divisible to non-P-divisible dynamics. Full article
(This article belongs to the Special Issue Quantum Information Concepts in Open Quantum Systems)

19 pages, 362 KiB  
Review
Information Theory for Agents in Artificial Intelligence, Psychology, and Economics
by Michael S. Harré
Entropy 2021, 23(3), 310; https://doi.org/10.3390/e23030310 - 06 Mar 2021
Cited by 14 | Viewed by 5444
Abstract
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, specifically focusing on formal models of decision theory. In doing so, we look at a particular approach that each field has adopted and how information theory has informed the development of the ideas of each field. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making, and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called ‘small worlds’ but cannot work in ‘large worlds’. This point, in various different guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live. Full article

32 pages, 4858 KiB  
Article
Why Do Big Data and Machine Learning Entail the Fractional Dynamics?
by Haoyu Niu, YangQuan Chen and Bruce J. West
Entropy 2021, 23(3), 297; https://doi.org/10.3390/e23030297 - 28 Feb 2021
Cited by 19 | Viewed by 5153
Abstract
Fractional-order calculus is about the differentiation and integration of non-integer orders. Fractional calculus (FC) is based on fractional-order thinking (FOT) and has been shown to help us understand complex systems better, improve the processing of complex signals, enhance the control of complex systems, increase the performance of optimization, and even extend the potential for creativity. In this article, the authors discuss fractional dynamics, FOT, and rich fractional stochastic models. First, the use of fractional dynamics in big data analytics for quantifying big data variability stemming from the generation of complex systems is justified. Second, we show why fractional dynamics is needed in machine learning and optimal randomness when asking: “is there a more optimal way to optimize?”. Third, an optimal randomness case study for a stochastic configuration network (SCN) machine-learning method with heavy-tailed distributions is discussed. Finally, views on big data and (physics-informed) machine learning with fractional dynamics for future research are presented with concluding remarks. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)

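For a concrete taste of non-integer-order differentiation, here is a Grünwald–Letnikov sketch; this is a standard discretization, not code from the paper.

```python
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha on a uniform
    grid with step h:  D^alpha f(x) ~ h^(-alpha) * sum_k (-1)^k C(alpha, k) f(x - k h).
    scipy.special.binom handles the non-integer binomial coefficients."""
    n = len(f)
    k = np.arange(n)
    w = (-1.0) ** k * binom(alpha, k)        # GL weights
    return np.array([h ** (-alpha) * np.dot(w[: i + 1], f[i::-1]) for i in range(n)])

x = np.linspace(0, 2, 201)
h = x[1] - x[0]
f = x                                         # known result: D^0.5 of x is 2*sqrt(x/pi)
approx = gl_fractional_derivative(f, 0.5, h)
exact = 2 * np.sqrt(x / np.pi)
print(np.max(np.abs(approx[5:] - exact[5:])))  # max abs error; first-order accurate in h
```
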
19 pages, 5707 KiB  
Article
Non-Extensive Statistical Analysis of Acoustic Emissions: The Variability of Entropic Index q during Loading of Brittle Materials Until Fracture
by Andronikos Loukidis, Dimos Triantis and Ilias Stavrakas
Entropy 2021, 23(3), 276; https://doi.org/10.3390/e23030276 - 25 Feb 2021
Cited by 4 | Viewed by 1659
Abstract
Non-extensive statistical mechanics (NESM), introduced by Tsallis based on the principle of non-additive entropy, is a generalisation of the Boltzmann–Gibbs statistics. NESM has been shown to provide the necessary theoretical and analytical implementation for studying complex systems such as the fracture mechanisms and crack evolution processes that occur in mechanically loaded specimens of brittle materials. In the current work, acoustic emission (AE) data recorded when marble and cement mortar specimens were subjected to three distinct loading protocols until fracture, are discussed in the context of NESM. The NESM analysis showed that the cumulative distribution functions of the AE interevent times (i.e., the time interval between successive AE hits) follow a q-exponential function. For each examined specimen, the corresponding Tsallis entropic q-indices and the parameters βq and τq were calculated. The entropic index q shows a systematic behaviour strongly related to the various stages of the implemented loading protocols for all the examined specimens. Results seem to support the idea of using the entropic index q as a potential pre-failure indicator for the impending catastrophic fracture of the mechanically loaded specimens. Full article
(This article belongs to the Special Issue Complex Systems Time Series Analysis and Modeling for Geoscience)

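The q-exponential that the interevent-time distributions follow is easy to write down; the survival-function parametrization and the numbers below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: e_q(x) = [1 + (1-q) x]_+^(1/(1-q)),
    with e_1(x) = exp(x) recovered in the limit q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

# Survival function of AE interevent times in NESM-style analyses
# (the exact parametrization in the paper may differ):
#   P(tau > t) = e_q(-beta_q * t)
t = np.linspace(0, 10, 6)
print(q_exponential(-0.8 * t, q=1.4))   # heavy (power-law) tail for q > 1
print(q_exponential(-0.8 * t, q=1.0))   # ordinary exponential, for comparison
```
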
33 pages, 477 KiB  
Article
Trade-offs between Error Exponents and Excess-Rate Exponents of Typical Slepian–Wolf Codes
by Ran Tamir (Averbuch) and Neri Merhav
Entropy 2021, 23(3), 265; https://doi.org/10.3390/e23030265 - 24 Feb 2021
Cited by 5 | Viewed by 1717
Abstract
Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, which is a certain variant of the ordinary random binning code ensemble. In this code ensemble, the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can be attained also by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially when the blocklength tends to infinity. Full article
(This article belongs to the Special Issue Multiuser Information Theory III)

30 pages, 628 KiB  
Article
Lapsing Quickly into Fatalism: Bell on Backward Causation
by Travis Norsen and Huw Price
Entropy 2021, 23(2), 251; https://doi.org/10.3390/e23020251 - 22 Feb 2021
Cited by 1 | Viewed by 3575
Abstract
This is a dialogue between Huw Price and Travis Norsen, loosely inspired by a letter that Price received from J. S. Bell in 1988. The main topic of discussion is Bell’s views about retrocausal approaches to quantum theory and their relevance to contemporary issues. Full article
(This article belongs to the Special Issue Quantum Theory and Causation)

14 pages, 1815 KiB  
Article
Signature of Generalized Gibbs Ensemble Deviation from Equilibrium: Negative Absorption Induced by a Local Quench
by Lorenzo Rossi, Fabrizio Dolcini, Fabio Cavaliere, Niccolò Traverso Ziani, Maura Sassetti and Fausto Rossi
Entropy 2021, 23(2), 220; https://doi.org/10.3390/e23020220 - 11 Feb 2021
Cited by 7 | Viewed by 1715
Abstract
When a parameter quench is performed in an isolated quantum system with a complete set of constants of motion, its out-of-equilibrium dynamics is considered to be well captured by the Generalized Gibbs Ensemble (GGE), characterized by a set {λα} of coefficients related to the constants of motion. We determine the most elementary GGE deviation from the equilibrium distribution that leads to detectable effects. By quenching a suitable local attractive potential in a one-dimensional electron system, the resulting GGE differs from equilibrium by only a single λα, corresponding to the emergence of an only partially occupied bound state lying below a fully occupied continuum of states. The effect is shown to induce optical gain, i.e., a negative peak in the absorption spectrum, indicating the stimulated emission of radiation, enabling one to identify GGE signatures in fermionic systems through optical measurements. We discuss the implementation in realistic setups. Full article
(This article belongs to the Section Non-equilibrium Phenomena)

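In standard notation (not reproduced from the paper), the GGE referred to here is the maximum-entropy state fixed by the conserved charges I_α, with the λα acting as Lagrange multipliers:

```latex
\rho_{\mathrm{GGE}}
  = \frac{1}{Z}\,\exp\!\Big(-\sum_{\alpha}\lambda_{\alpha} I_{\alpha}\Big),
\qquad
Z = \operatorname{Tr}\exp\!\Big(-\sum_{\alpha}\lambda_{\alpha} I_{\alpha}\Big),
```

where each λα is fixed by the initial expectation value of the corresponding charge I_α.
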
13 pages, 363 KiB  
Article
Selection of Embedding Dimension and Delay Time in Phase Space Reconstruction via Symbolic Dynamics
by Mariano Matilla-García, Isidro Morales, Jose Miguel Rodríguez and Manuel Ruiz Marín
Entropy 2021, 23(2), 221; https://doi.org/10.3390/e23020221 - 11 Feb 2021
Cited by 25 | Viewed by 3710
Abstract
The modeling and prediction of chaotic time series require proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate time delay τ and embedding dimension p for phase space reconstruction. The value of τ can be estimated from the Mutual Information, but this method is rather cumbersome computationally. Additionally, some researchers have recommended that τ should be chosen to be dependent on the embedding dimension p by means of an appropriate value for the time delay τw = (p − 1)τ, which is the optimal time delay for independence of the time series. The C-C method, based on the Correlation Integral, is simpler than Mutual Information and has been proposed to select τw and τ optimally. In this paper, we suggest a simple method for estimating τ and τw based on symbolic analysis and symbolic entropy. As in the C-C method, τ is estimated as the first local optimal time delay and τw as the time delay for independence of the time series. The method is applied to several chaotic time series that are the base of comparison for several techniques. The numerical simulations for these systems verify that the proposed symbolic-based method is useful for practitioners and, according to the studied models, has a better performance than the C-C method for the choice of the time delay and embedding dimension. In addition, the method is applied to EEG data in order to study and compare some dynamic characteristics of brain activity under epileptic episodes. Full article
(This article belongs to the Special Issue Information theory and Symbolic Analysis: Theory and Applications)

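In the spirit of the method, here is a toy delay selection from a symbolic entropy curve; the permutation-pattern symbolization and the noisy-oscillation test signal are assumptions, since the paper's symbolization differs in detail.

```python
import math
import numpy as np
from collections import Counter

def permutation_entropy(x, m=4, tau=1):
    """Normalized symbolic (permutation) entropy of order m at delay tau:
    Shannon entropy of the ordinal patterns of the delay vectors
    (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}), normalized by log(m!)."""
    n = len(x) - (m - 1) * tau
    patterns = Counter(
        tuple(np.argsort(x[i : i + (m - 1) * tau + 1 : tau])) for i in range(n)
    )
    p = np.array(list(patterns.values()), float) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

# Toy series: a noisy oscillation with a period of 25 samples.
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * np.arange(4000) / 25) + 0.2 * rng.normal(size=4000)

taus = list(range(1, 21))
H = [permutation_entropy(x, m=4, tau=t) for t in taus]

# Select tau as the first local optimum of the entropy curve, the same
# selection idea as in the abstract (fall back to the global optimum
# if the curve happens to be monotone).
local = [t for t, h0, h1, h2 in zip(taus[1:], H, H[1:], H[2:])
         if (h1 - h0) * (h2 - h1) < 0]
tau_sel = local[0] if local else taus[int(np.argmax(H))]
print("selected time delay:", tau_sel)
```
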
35 pages, 1675 KiB  
Review
The Entropy Universe
by Maria Ribeiro, Teresa Henriques, Luísa Castro, André Souto, Luís Antunes, Cristina Costa-Santos and Andreia Teixeira
Entropy 2021, 23(2), 222; https://doi.org/10.3390/e23020222 - 11 Feb 2021
Cited by 46 | Viewed by 10847
Abstract
About 160 years ago, the concept of entropy was introduced in thermodynamics by Rudolf Clausius. Since then, it has been continually extended, interpreted, and applied by researchers in many scientific fields, such as general physics, information theory, chaos theory, data mining, and mathematical linguistics. This paper presents The Entropy Universe, which aims to review the many variants of entropies applied to time-series. The purpose is to answer research questions such as: How did each entropy emerge? What is the mathematical definition of each variant of entropy? How are entropies related to each other? What are the most applied scientific fields for each entropy? We describe in-depth the relationship between the most applied entropies in time-series for different scientific fields, establishing bases for researchers to properly choose the variant of entropy most suitable for their data. The number of citations over the past sixteen years of each paper proposing a new entropy was also assessed. The Shannon/differential, the Tsallis, the sample, the permutation, and the approximate entropies were the most cited ones. Based on the ten research areas with the most significant number of records obtained in the Web of Science and Scopus, the areas in which the entropies are most applied are computer science, physics, mathematics, and engineering. The universe of entropies is growing each day, due either to the introduction of new variants or to novel applications. Knowing each entropy’s strengths and limitations is essential to ensure the proper improvement of this research field. Full article
(This article belongs to the Special Issue Review Papers for Entropy)

33 pages, 432 KiB  
Article
The Principle of Covariance and the Hamiltonian Formulation of General Relativity
by Massimo Tessarotto and Claudio Cremaschini
Entropy 2021, 23(2), 215; https://doi.org/10.3390/e23020215 - 10 Feb 2021
Cited by 12 | Viewed by 2743
Abstract
The implications of the general covariance principle for the establishment of a Hamiltonian variational formulation of classical General Relativity are addressed. The analysis is performed in the framework of the Einstein-Hilbert variational theory. Preliminarily, customary Lagrangian variational principles are reviewed, pointing out the existence of a novel variational formulation in which the class of variations remains unconstrained. As a second step, the conditions of validity of the non-manifestly covariant ADM variational theory are questioned. The main result concerns the proof of its intrinsic non-Hamiltonian character and the failure of this approach in providing a symplectic structure of space-time. In contrast, it is demonstrated that a solution reconciling the physical requirements of covariance and manifest covariance of variational theory with the existence of a classical Hamiltonian structure for the gravitational field can be reached in the framework of synchronous variational principles. Both path-integral and volume-integral realizations of the Hamilton variational principle are explicitly determined and the corresponding physical interpretations are pointed out. Full article
(This article belongs to the Special Issue Quantum Regularization of Singular Black Hole Solutions)
17 pages, 1594 KiB  
Article
The Role of Entropy in Construct Specification Equations (CSE) to Improve the Validity of Memory Tests
by Jeanette Melin, Stefan Cano and Leslie Pendrill
Entropy 2021, 23(2), 212; https://doi.org/10.3390/e23020212 - 09 Feb 2021
Cited by 16 | Viewed by 2500
Abstract
Commonly used rating scales and tests have been found to lack reliability and validity, for example in neurodegenerative disease studies, owing to their not making recourse to the inherent ordinality of human responses, nor acknowledging the separability of person ability and item difficulty parameters according to the well-known Rasch model. Here, we adopt an information theory approach, particularly extending deployment of the classic Brillouin entropy expression when explaining the difficulty of recalling non-verbal sequences in memory tests (i.e., Corsi Block Test and Digit Span Test): a more ordered task, of less entropy, will generally be easier to perform. Construct specification equations (CSEs) as a part of a methodological development, with entropy-based variables dominating, are found experimentally to explain (R^2 = 0.98) and predict the construct of task difficulty for short-term memory tests using data from the NeuroMET (n = 88) and Gothenburg MCI (n = 257) studies. We propose entropy-based equivalence criteria, whereby different tasks (in the form of items) from different tests can be combined, enabling new memory tests to be formed by choosing a bespoke selection of items, leading to more efficient testing, improved reliability (reduced uncertainties), and validity. This provides opportunities for more practical and accurate measurement in clinical practice, research, and trials. Full article
(This article belongs to the Special Issue Entropy in Brain Networks)

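The classic Brillouin expression invoked above, in minimal form; the natural-log base, the per-symbol normalization, and the example sequences are conventions assumed here, not taken from the paper.

```python
import numpy as np
from collections import Counter
from scipy.special import gammaln

def brillouin_entropy(sequence):
    """Classic Brillouin entropy of a finite sequence:
    H = (1/N) * ln( N! / (n_1! n_2! ... n_k!) ),
    computed with log-gamma for numerical stability. A more ordered sequence
    (fewer distinct symbols, more repeats) has lower H and, per the abstract,
    should be an easier recall task."""
    counts = np.array(list(Counter(sequence).values()), dtype=float)
    N = counts.sum()
    return float((gammaln(N + 1) - gammaln(counts + 1).sum()) / N)

print(brillouin_entropy("11111111"))   # maximally ordered: H = 0
print(brillouin_entropy("12345678"))   # all symbols distinct: highest H for this length
print(brillouin_entropy("12121212"))   # intermediate order
```
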
16 pages, 6760 KiB  
Article
The Effect of a Hidden Source on the Estimation of Connectivity Networks from Multivariate Time Series
by Christos Koutlis and Dimitris Kugiumtzis
Entropy 2021, 23(2), 208; https://doi.org/10.3390/e23020208 - 08 Feb 2021
Cited by 2 | Viewed by 1590
Abstract
Many methods of Granger causality, or broadly termed connectivity, have been developed to assess the causal relationships between system variables based only on the information extracted from the time series. The power of these methods to capture the true underlying connectivity structure has been assessed using simulated dynamical systems where the ground truth is known. Here, we consider the presence of an unobserved variable that acts as a hidden source for the observed high-dimensional dynamical system and study the effect of the hidden source on the estimation of the connectivity structure. In particular, the focus is on estimating the direct causality effects in high-dimensional time series (not including the hidden source) of relatively short length. We examine the performance of a linear and a nonlinear connectivity measure using dimension reduction and compare them to a linear measure designed for latent variables. For the simulations, four systems are considered: the coupled Hénon maps system, the coupled Mackey–Glass system, the neural mass model, and the vector autoregressive (VAR) process, each comprising 25 subsystems (variables for VAR) in a close chain coupling structure, with another subsystem (variable for VAR) driving all the others and acting as the hidden source. The results show that the direct causality measures estimate the existing connectivity correctly, in general terms, in the absence of the source or when its driving is zero or weak, yet fail to detect the actual relationships when the driving is strong, with the nonlinear measure of dimension reduction performing best. An example from finance, including and excluding the USA index in the global market indices, highlights the different performance of the connectivity measures in the presence of a hidden source. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
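A minimal sketch of the phenomenon studied here, under assumed toy dynamics rather than the paper's four benchmark systems: an autocorrelated hidden source Z drives all observed variables, so a naive lagged regression finds a spurious X2 -> X1 link that fades once Z is conditioned on. All names and coefficients below are illustrative.

```python
# A toy sketch (assumed dynamics, not the paper's benchmark systems): an
# autocorrelated hidden source Z drives all observed series, inducing a
# spurious lagged X2 -> X1 dependence that fades when Z is conditioned on.
import numpy as np

rng = np.random.default_rng(0)
T, c = 2000, 0.8
eps = rng.standard_normal(T)
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.9 * z[t - 1] + eps[t]    # hidden source with memory
x = rng.standard_normal((T, 3))       # observed subsystems (noise floor)
x[1:] += c * z[:-1, None]             # Z drives every observed variable

def lagged_beta(target, predictors):
    """OLS coefficients of target[t] on predictors[t-1], a crude causality proxy."""
    y = target[1:]
    X = np.column_stack([p[:-1] for p in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print(lagged_beta(x[:, 0], [x[:, 1]]))     # sizeable spurious X2 -> X1 coefficient
print(lagged_beta(x[:, 0], [x[:, 1], z]))  # X2 coefficient shrinks once Z is observed
```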
Show Figures

Figure 1

52 pages, 769 KiB  
Article
Error Exponents and α-Mutual Information
by Sergio Verdú
Entropy 2021, 23(2), 199; https://doi.org/10.3390/e23020199 - 05 Feb 2021
Cited by 6 | Viewed by 3413
Abstract
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) Through Gallager’s E0 functions (with and without cost constraints); (2) large deviations form, in terms [...] Read more.
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) Through Gallager’s E0 functions (with and without cost constraints); (2) large deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, there have remained gaps in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint. Full article
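For readers who want a concrete handle on the central quantity, the sketch below evaluates the α-mutual information in its Sibson closed form for a discrete memoryless channel, an expression that tends to the Shannon mutual information as α approaches 1. This is a generic textbook form under our assumptions, not code from the paper.

```python
# A generic sketch (standard definitions under our assumptions, not code
# from the paper): Sibson's alpha-mutual information of a discrete channel,
# which tends to the Shannon mutual information as alpha -> 1.
import numpy as np

def alpha_mutual_information(p_x, W, alpha):
    """I_alpha = (alpha/(alpha-1)) * log2 sum_y [ sum_x p(x) W(y|x)^alpha ]^(1/alpha)."""
    inner = (p_x[:, None] * W ** alpha).sum(axis=0) ** (1.0 / alpha)
    return alpha / (alpha - 1.0) * np.log2(inner.sum())

# Binary symmetric channel, crossover 0.1, equiprobable input (W[x, y] = P(y|x)).
W = np.array([[0.9, 0.1], [0.1, 0.9]])
p = np.array([0.5, 0.5])
for a in (0.5, 0.99, 2.0):
    print(a, alpha_mutual_information(p, W, a))  # approaches ~0.531 bits as alpha -> 1
```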
Show Figures

Figure 1

25 pages, 1135 KiB  
Article
Spectral Properties of Effective Dynamics from Conditional Expectations
by Feliks Nüske, Péter Koltai, Lorenzo Boninsegna and Cecilia Clementi
Entropy 2021, 23(2), 134; https://doi.org/10.3390/e23020134 - 21 Jan 2021
Cited by 5 | Viewed by 2638
Abstract
The reduction of high-dimensional systems to effective models on a smaller set of variables is an essential task in many areas of science. For stochastic dynamics governed by diffusion processes, a general procedure to find effective equations is the conditioning approach. In this [...] Read more.
The reduction of high-dimensional systems to effective models on a smaller set of variables is an essential task in many areas of science. For stochastic dynamics governed by diffusion processes, a general procedure to find effective equations is the conditioning approach. In this paper, we are interested in the spectrum of the generator of the resulting effective dynamics, and how it compares to the spectrum of the full generator. We prove a new relative error bound in terms of the eigenfunction approximation error for reversible systems. We also present numerical examples indicating that, if Kramers–Moyal (KM) type approximations are used to compute the spectrum of the reduced generator, it seems largely insensitive to the time window used for the KM estimators. We analyze the implications of these observations for systems driven by underdamped Langevin dynamics, and show how meaningful effective dynamics can be defined in this setting. Full article
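A minimal sketch of the Kramers–Moyal (KM) estimators discussed here, under assumed toy dynamics: drift and diffusion are estimated from bin-averaged conditional moments of increments of a simulated Ornstein–Uhlenbeck path over a finite time window tau. The bin edges and lag are illustrative choices, not the paper's settings.

```python
# A minimal sketch (assumed toy dynamics): Kramers-Moyal estimates of drift
# and diffusion from conditional moments of increments over a window tau.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 200_000
x = np.zeros(n)
for i in range(n - 1):   # Euler-Maruyama OU path: dX = -X dt + sqrt(2) dW
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * dt) * rng.standard_normal()

def km_coefficients(traj, dt, lag, edges=np.linspace(-1.5, 1.5, 13)):
    """Bin-averaged first/second Kramers-Moyal moments over tau = lag*dt."""
    tau = lag * dt
    dx = traj[lag:] - traj[:-lag]
    idx = np.digitize(traj[:-lag], edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    drift = np.array([dx[idx == k].mean() / tau for k in range(1, len(edges))])
    diff = np.array([(dx[idx == k] ** 2).mean() / (2 * tau) for k in range(1, len(edges))])
    return centers, drift, diff

centers, drift, diff = km_coefficients(x, dt, lag=10)
print(np.c_[centers, drift, diff])   # drift should track -x, diffusion ~ 1
```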
Show Figures

Figure 1

12 pages, 4032 KiB  
Article
On-Road Detection of Driver Fatigue and Drowsiness during Medium-Distance Journeys
by Luca Salvati, Matteo d’Amore, Anita Fiorentino, Arcangelo Pellegrino, Pasquale Sena and Francesco Villecco
Entropy 2021, 23(2), 135; https://doi.org/10.3390/e23020135 - 21 Jan 2021
Cited by 33 | Viewed by 3505
Abstract
Background: The detection of driver fatigue as a cause of sleepiness is a key technology capable of preventing fatal accidents. This research uses a fatigue-related sleepiness detection algorithm based on the analysis of the pulse rate variability generated by the heartbeat and [...] Read more.
Background: The detection of driver fatigue as a cause of sleepiness is a key technology capable of preventing fatal accidents. This research uses a fatigue-related sleepiness detection algorithm based on the analysis of the pulse rate variability generated by the heartbeat and validates the proposed method by comparing it with an objective indicator of sleepiness (PERCLOS). Methods: Changes in alertness affect the autonomic nervous system (ANS) and therefore heart rate variability (HRV), which is modulated in the form of a wave and monitored in real time to detect long-term changes in the driver's condition. Results: The performance of the algorithm was evaluated through an experiment carried out in a road vehicle. In this experiment, data were recorded from three participants during different driving sessions, and their conditions of fatigue and sleepiness were documented on both a subjective and an objective basis. Validation of the results against PERCLOS showed a 63% adherence to the experimental findings. Conclusions: The present study confirms the possibility of continuously monitoring the driver's status through the detection of the activation/deactivation states of the ANS based on HRV. The proposed method can help prevent accidents caused by drowsiness while driving. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines II)
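As a rough illustration of an HRV pipeline (a simplified stand-in, not the authors' algorithm), the sketch below computes the classical LF/HF spectral ratio of an evenly resampled RR-interval series, a standard proxy for shifts in autonomic balance. The synthetic RR series and band edges are assumptions.

```python
# A simplified stand-in (not the authors' algorithm): the LF/HF spectral
# ratio of the RR-interval series, a classical proxy for shifts in
# autonomic (ANS) balance that accompany changing alertness.
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_s, fs=4.0):
    """LF/HF power ratio of RR intervals (seconds), resampled evenly at fs Hz."""
    t = np.cumsum(rr_s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tachogram = np.interp(grid, t, rr_s)
    f, pxx = welch(tachogram, fs=fs, nperseg=min(256, len(tachogram)))
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()   # low-frequency band
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()   # high-frequency band
    return lf / hf

# Synthetic series: 0.8 s beats with a slow LF-band (0.1 Hz) modulation.
beat_times = np.arange(600) * 0.8
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.1 * beat_times)
print(lf_hf_ratio(rr))   # large ratio: power concentrated in the LF band
```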
Show Figures

Figure 1

25 pages, 958 KiB  
Review
Dynamics of Ion Channels via Non-Hermitian Quantum Mechanics
by Tobias Gulden and Alex Kamenev
Entropy 2021, 23(1), 125; https://doi.org/10.3390/e23010125 - 19 Jan 2021
Cited by 2 | Viewed by 2856
Abstract
We study dynamics and thermodynamics of ion transport in narrow, water-filled channels, considered as effective 1D Coulomb systems. The long range nature of the inter-ion interactions comes about due to the dielectric constants mismatch between the water and the surrounding medium, confining the [...] Read more.
We study the dynamics and thermodynamics of ion transport in narrow, water-filled channels, considered as effective 1D Coulomb systems. The long-range nature of the inter-ion interactions comes about due to the mismatch of dielectric constants between the water and the surrounding medium, which confines the electric field to stay mostly within the water-filled channel. The statistical mechanics of such Coulomb systems is dominated by entropic effects, which may be accurately accounted for by mapping onto an effective quantum mechanics. In the presence of multivalent ions, the corresponding quantum mechanics appears to be non-Hermitian. In this review we discuss a framework for semiclassical calculations for the effective non-Hermitian Hamiltonians. Non-Hermiticity elevates the WKB action integrals from the real line to closed cycles on complex Riemann surfaces, where direct calculations are not attainable. We circumvent this issue by applying tools from algebraic topology, such as the Picard–Fuchs equation. We discuss how its solutions relate to the thermodynamics and correlation functions of multivalent solutions within narrow, water-filled channels. Full article
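Schematically, and in our notation rather than the authors', the semiclassical objects at stake are period integrals over closed cycles of a complex curve, with the Picard–Fuchs equation supplying a linear ODE in the parameters that those periods satisfy:

```latex
% Schematic only (our notation, generic WKB setting, not the authors' formulas):
% the semiclassical data are period integrals over closed cycles \gamma of the
% complexified momentum curve,
I_{\gamma}(\lambda) = \oint_{\gamma} p(z;\lambda)\, dz ,
% and the Picard--Fuchs method provides a linear ODE in the parameter \lambda
% satisfied by all the periods I_{\gamma}, so they can be evaluated without
% performing the contour integrals directly.
```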
Show Figures

Figure 1

29 pages, 693 KiB  
Article
Beyond Causal Explanation: Einstein’s Principle Not Reichenbach’s
by Michael Silberstein, William Mark Stuckey and Timothy McDevitt
Entropy 2021, 23(1), 114; https://doi.org/10.3390/e23010114 - 16 Jan 2021
Cited by 3 | Viewed by 4631
Abstract
Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacetime. We argue that retrocausal accounts of quantum mechanics are [...] Read more.
Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacetime. We argue that retrocausal accounts of quantum mechanics are problematic precisely because they do not fully transcend the assumption that causal or constructive explanation must always be fundamental. Unlike retrocausal accounts, our principle explanation is a complete rejection of Reichenbach’s Principle. Furthermore, we will argue that the basis for our principle account of quantum mechanics is the physical principle sought by quantum information theorists for their reconstructions of quantum mechanics. Finally, we explain why our account is both fully realist and psi-epistemic. Full article
(This article belongs to the Special Issue Quantum Theory and Causation)
Show Figures

Graphical abstract

18 pages, 36615 KiB  
Article
Coupling between Blood Pressure and Subarachnoid Space Width Oscillations during Slow Breathing
by Agnieszka Gruszecka, Magdalena K. Nuckowska, Monika Waskow, Jacek Kot, Pawel J. Winklewski, Wojciech Guminski, Andrzej F. Frydrychowski, Jerzy Wtorek, Adam Bujnowski, Piotr Lass, Tomislav Stankovski and Marcin Gruszecki
Entropy 2021, 23(1), 113; https://doi.org/10.3390/e23010113 - 15 Jan 2021
Cited by 6 | Viewed by 2905
Abstract
The precise mechanisms connecting the cardiovascular system and the cerebrospinal fluid (CSF) are not well understood in detail. This paper investigates the couplings between the cardiac and respiratory components, as extracted from blood pressure (BP) signals and oscillations of the subarachnoid space width [...] Read more.
The mechanisms connecting the cardiovascular system and the cerebrospinal fluid (CSF) are not understood in detail. This paper investigates the couplings between the cardiac and respiratory components, as extracted from blood pressure (BP) signals and oscillations of the subarachnoid space width (SAS), collected during slow ventilation and ventilation against inspiratory resistance. The experiment was performed on a group of 20 healthy volunteers (12 females and 8 males; BMI = 22.1 ± 3.2 kg/m2; age 25.3 ± 7.9 years). We analysed the recorded signals with a wavelet transform. For the first time, a method based on dynamical Bayesian inference was used to detect the effective phase connectivity and the underlying coupling functions between the SAS and BP signals. There are several new findings. Slow breathing with or without resistance increases the strength of the coupling between the respiratory and cardiac components of both measured signals. We also observed increases in the strength of the coupling between the respiratory component of the BP and the cardiac component of the SAS, and vice versa. Slow breathing synchronises the SAS oscillations between the brain hemispheres. It also diminishes the similarity of the coupling between all analysed pairs of oscillators, while inspiratory resistance partially reverses this phenomenon. BP–SAS and SAS–BP interactions may reflect changes in the overall biomechanical characteristics of the brain. Full article
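A minimal sketch of one ingredient of this analysis, with stated simplifications: the paper extracts phases with a wavelet transform and infers coupling functions by dynamical Bayesian inference, whereas the toy below uses a Hilbert-transform phase and a mean phase coherence index as a crude proxy for coupling strength between two respiratory components.

```python
# A toy proxy (Hilbert phase and mean phase coherence stand in for the
# paper's wavelet phases and dynamical Bayesian inference): coupling
# strength between respiratory components of two simulated signals.
import numpy as np
from scipy.signal import hilbert

fs = 100
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
resp_bp = np.sin(2 * np.pi * 0.1 * t) + 0.2 * rng.standard_normal(t.size)        # BP respiratory component
resp_sas = np.sin(2 * np.pi * 0.1 * t - 0.4) + 0.2 * rng.standard_normal(t.size)  # SAS respiratory component

def phase_coherence(a, b):
    """Mean phase coherence |<exp(i(phi_a - phi_b))>|: 0 = no locking, 1 = full locking."""
    pa, pb = np.angle(hilbert(a)), np.angle(hilbert(b))
    return np.abs(np.exp(1j * (pa - pb)).mean())

print(phase_coherence(resp_bp, resp_sas))   # close to 1: strongly coupled components
```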
Show Figures

Figure 1

42 pages, 1487 KiB  
Review
Applications of Distributed-Order Fractional Operators: A Review
by Wei Ding, Sansit Patnaik, Sai Sidhardh and Fabio Semperlotti
Entropy 2021, 23(1), 110; https://doi.org/10.3390/e23010110 - 15 Jan 2021
Cited by 50 | Viewed by 4734
Abstract
Distributed-order fractional calculus (DOFC) is a rapidly emerging branch of the broader area of fractional calculus that has important and far-reaching applications for the modeling of complex systems. DOFC generalizes the intrinsic multiscale nature of constant and variable-order fractional operators opening significant opportunities [...] Read more.
Distributed-order fractional calculus (DOFC) is a rapidly emerging branch of the broader area of fractional calculus that has important and far-reaching applications for the modeling of complex systems. DOFC generalizes the intrinsic multiscale nature of constant- and variable-order fractional operators, opening significant opportunities to model systems whose behavior stems from the complex interplay and superposition of nonlocal and memory effects occurring over a multitude of scales. In recent years, a significant number of studies focusing on mathematical aspects and real-world applications of DOFC have been produced. However, a systematic review of the available literature and of the state of the art of DOFC as it pertains, specifically, to real-world applications is still lacking. This review article is intended to provide the reader with a road map to understand the early development of DOFC and its progressive evolution and application to the modeling of complex real-world problems. The review starts by offering a brief introduction to the mathematics of DOFC, including analytical and numerical methods, and it continues by providing an extensive overview of the applications of DOFC to fields like viscoelasticity, transport processes, and control theory, which have seen most of the research activity to date. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)
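For orientation, the operator at the heart of DOFC can be written (our summary of the standard definition, not a quotation from the review) as a constant-order Caputo derivative weighted by an order distribution φ(α):

```latex
% Distributed-order Caputo operator: constant-order derivatives weighted by
% an order distribution \phi(\alpha) (standard definition, our summary).
\mathbb{D}^{\phi}_{t} f(t) = \int_{0}^{1} \phi(\alpha)\, {}^{C}\!D^{\alpha}_{t} f(t)\, d\alpha ,
\qquad
{}^{C}\!D^{\alpha}_{t} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau ,
\quad 0 < \alpha < 1 .
```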
Show Figures

Figure 1

26 pages, 430 KiB  
Article
Distance-Based Estimation Methods for Models for Discrete and Mixed-Scale Data
by Elisavet M. Sofikitou, Ray Liu, Huipei Wang and Marianthi Markatou
Entropy 2021, 23(1), 107; https://doi.org/10.3390/e23010107 - 14 Jan 2021
Cited by 1 | Viewed by 2492
Abstract
Pearson residuals aid the task of identifying model misspecification because they compare the model estimated from the data with the model assumed under the null hypothesis. We present different formulations of the Pearson residual system that account for the measurement scale of the data [...] Read more.
Pearson residuals aid the task of identifying model misspecification because they compare the model estimated from the data with the model assumed under the null hypothesis. We present different formulations of the Pearson residual system that account for the measurement scale of the data and study their properties. We further concentrate on the case of mixed-scale data, that is, data measured on both categorical and interval scales. We study the asymptotic properties and the robustness of minimum disparity estimators obtained in the case of mixed-scale data and exemplify the performance of the methods via simulation. Full article
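In the disparity literature, the Pearson residual of a cell x is delta(x) = d(x)/m(x) - 1, with d the empirical relative frequency and m the hypothesized model probability. A minimal sketch with illustrative counts, not the paper's data:

```python
# An illustrative sketch (toy counts, not the paper's data): the Pearson
# residual delta(x) = d(x)/m(x) - 1 compares empirical frequencies d with
# hypothesized model probabilities m; large values flag misspecification.
import numpy as np

def pearson_residuals(observed_counts, model_probs):
    d = observed_counts / observed_counts.sum()     # empirical relative frequencies
    return d / model_probs - 1.0

counts = np.array([48, 30, 14, 8])                  # skewed data
print(pearson_residuals(counts, np.full(4, 0.25)))  # tested against a uniform model
```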
Show Figures

Graphical abstract

18 pages, 2691 KiB  
Article
Deep Task-Based Quantization
by Nir Shlezinger and Yonina C. Eldar
Entropy 2021, 23(1), 104; https://doi.org/10.3390/e23010104 - 13 Jan 2021
Cited by 28 | Viewed by 4781
Abstract
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such [...] Read more.
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work, we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals, and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer is capable of approaching the optimal performance limits dictated by indirect rate-distortion theory, achievable using vector quantizers and requiring complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, it is demonstrated that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task. Full article
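A minimal sketch of the task-oriented idea, under an assumed toy architecture rather than the paper's deep network: an analog linear mapping compresses the measurements to the task-relevant statistic before coarse scalar ADCs, so the few available bits are spent on what the task needs. Here a pseudoinverse stands in for the learned mapping.

```python
# An assumed toy architecture (not the paper's network): analog
# pre-processing A, scalar uniform ADCs, then a toy estimation task.
import numpy as np

rng = np.random.default_rng(0)

def scalar_adc(x, n_bits, dyn_range=3.0):
    """Uniform scalar quantizer with 2**n_bits levels per sample."""
    step = 2 * dyn_range / 2 ** n_bits
    return np.clip(np.round(x / step) * step, -dyn_range, dyn_range)

# Toy task: recover s (2 parameters) from y = H s + noise via 3-bit ADCs.
H = rng.standard_normal((8, 2))
s = rng.standard_normal(2)
y = H @ s + 0.1 * rng.standard_normal(8)

A = np.linalg.pinv(H)              # stand-in for the learned analog mapping
q = scalar_adc(A @ y, n_bits=3)    # quantize the task-relevant statistic,
                                   # not the raw measurements
print(s, q)                        # q approximates s despite coarse ADCs
```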
Show Figures

Figure 1

28 pages, 3278 KiB  
Review
High-Entropy Alloys for Advanced Nuclear Applications
by Ed J. Pickering, Alexander W. Carruthers, Paul J. Barron, Simon C. Middleburgh, David E. J. Armstrong and Amy S. Gandy
Entropy 2021, 23(1), 98; https://doi.org/10.3390/e23010098 - 11 Jan 2021
Cited by 135 | Viewed by 11801
Abstract
The expanded compositional freedom afforded by high-entropy alloys (HEAs) represents a unique opportunity for the design of alloys for advanced nuclear applications, in particular for applications where current engineering alloys fall short. This review assesses the work done to date in the field [...] Read more.
The expanded compositional freedom afforded by high-entropy alloys (HEAs) represents a unique opportunity for the design of alloys for advanced nuclear applications, in particular for applications where current engineering alloys fall short. This review assesses the work done to date in the field of HEAs for nuclear applications, provides critical insight into the conclusions drawn, and highlights possibilities and challenges for future study. It is found that our understanding of the irradiation responses of HEAs remains in its infancy, and much work is needed in order for our knowledge of any single HEA system to match our understanding of conventional alloys such as austenitic steels. A number of studies have suggested that HEAs possess ‘special’ irradiation damage resistance, although some of the proposed mechanisms, such as those based on sluggish diffusion and lattice distortion, remain somewhat unconvincing (certainly in terms of being universally applicable to all HEAs). Nevertheless, there may be some mechanisms and effects that are uniquely different in HEAs when compared to more conventional alloys, such as the effect that their poor thermal conductivities have on the displacement cascade. Furthermore, the opportunity to tune the compositions of HEAs over a large range to optimise particular irradiation responses could be very powerful, even if the design process remains challenging. Full article
(This article belongs to the Special Issue Future Directions of High Entropy Alloys)
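The "high-entropy" label itself comes from the ideal configurational entropy of mixing, ΔS_mix = -R Σ c_i ln c_i, which is maximized for equiatomic compositions; a short numerical check (standard formula, illustrative compositions):

```python
# Ideal configurational entropy of mixing behind the "high-entropy" label:
# Delta S_mix = -R * sum(c_i ln c_i), maximized for equiatomic compositions.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def mixing_entropy(concentrations):
    c = np.asarray(concentrations, dtype=float)
    c = c / c.sum()                   # normalize to atomic fractions
    return -R * np.sum(c * np.log(c))

print(mixing_entropy([1, 1, 1, 1, 1]) / R)  # equiatomic quinary: ln 5 ~ 1.61 R
print(mixing_entropy([0.9, 0.1]) / R)       # dilute binary: ~0.33 R
```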
Show Figures

Figure 1

12 pages, 687 KiB  
Article
Signal Fluctuations and the Information Transmission Rates in Binary Communication Channels
by Agnieszka Pregowska
Entropy 2021, 23(1), 92; https://doi.org/10.3390/e23010092 - 10 Jan 2021
Cited by 11 | Viewed by 2273
Abstract
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied relations between the Information Transmission Rates [...] Read more.
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between the Information Transmission Rate (ITR) of spike trains and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As the measure of spike-train fluctuation, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations depends strongly on the parameter s, defined as a sum of transition probabilities from the no-spike state to the spike state. The estimate of the Information Transmission Rate was found through expressions depending on the values of the signal fluctuations and the parameter s. It turns out that for s < 1 the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for 1 < s the quotient ITR/σ is separated from 0. Additionally, it is shown that the quotient of the ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula are replaced by their polynomial approximations. My results suggest that in a noisier environment (1 < s), to obtain appropriate reliability and efficiency of transmission, an IS with a higher tendency of transition from the no-spike to the spike state should be applied. Such selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
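A minimal sketch of the assumed two-state formulation: the Shannon entropy rate of a spike/no-spike Markov chain serves as the ITR estimate, and σ is the standard deviation of the stationary spike indicator, so the quotient ITR/σ studied here can be evaluated numerically. The transition probabilities below are illustrative.

```python
# Assumed two-state formulation: entropy rate of a spike/no-spike Markov
# chain as the ITR, plus the stationary standard deviation sigma, so the
# quotient ITR/sigma can be evaluated numerically.
import numpy as np

def itr_and_sigma(p01, p10):
    """Entropy rate (bits/step) and stationary std of a 2-state Markov chain."""
    pi1 = p01 / (p01 + p10)              # stationary probability of 'spike'
    pi0 = 1 - pi1

    def h(p):                            # binary entropy in bits
        return 0.0 if p in (0, 1) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    itr = pi0 * h(p01) + pi1 * h(p10)    # sum_i pi_i * H(row_i)
    sigma = np.sqrt(pi0 * pi1)
    return itr, sigma

itr, sigma = itr_and_sigma(p01=0.2, p10=0.5)
print(itr, sigma, itr / sigma)
```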
Show Figures

Graphical abstract

13 pages, 3000 KiB  
Article
Information Thermodynamics and Reducibility of Large Gene Networks
by Swarnavo Sarkar, Joseph B. Hubbard, Michael Halter and Anne L. Plant
Entropy 2021, 23(1), 63; https://doi.org/10.3390/e23010063 - 01 Jan 2021
Cited by 2 | Viewed by 2972
Abstract
Gene regulatory networks (GRNs) control biological processes like pluripotency, differentiation, and apoptosis. Omics methods can identify a large number of putative network components (on the order of hundreds or thousands) but it is possible that in many cases a small subset of genes [...] Read more.
Gene regulatory networks (GRNs) control biological processes like pluripotency, differentiation, and apoptosis. Omics methods can identify a large number of putative network components (on the order of hundreds or thousands) but it is possible that in many cases a small subset of genes control the state of GRNs. Here, we explore how the topology of the interactions between network components may indicate whether the effective state of a GRN can be represented by a small subset of genes. We use methods from information theory to model the regulatory interactions in GRNs as cascading and superposing information channels. We propose an information loss function that enables identification of the conditions by which a small set of genes can represent the state of all the other genes in the network. This information-theoretic analysis extends to a measure of free energy change due to communication within the network, which provides a new perspective on the reducibility of GRNs. Both the information loss and relative free energy depend on the density of interactions and edge communication error in a network. Therefore, this work indicates that a loss in mutual information between genes in a GRN is directly coupled to a thermodynamic cost, i.e., a reduction of relative free energy, of the system. Full article
(This article belongs to the Special Issue Thermodynamics of Life: Cells, Organisms and Evolution)
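A minimal sketch of the information-loss intuition (a generic illustration, not the paper's GRN model): regulation modeled as a chain of binary symmetric channels shows how mutual information with an upstream gene decays with network depth and edge communication error.

```python
# Generic illustration (not the paper's model): mutual information decay
# through cascaded binary symmetric channels (BSCs) of crossover eps.
import numpy as np

def bsc_cascade_mi(eps, depth):
    """I(X; Y_depth) for a Bernoulli(1/2) input through `depth` BSCs."""
    # effective crossover of n cascaded BSCs: (1 - (1 - 2*eps)**n) / 2
    e_eff = 0.5 * (1 - (1 - 2 * eps) ** depth)
    h = lambda p: -p * np.log2(p) - (1 - p) * np.log2(1 - p) if 0 < p < 1 else 0.0
    return 1 - h(e_eff)

for d in (1, 2, 4, 8):
    print(d, bsc_cascade_mi(0.1, d))   # information shrinks with depth
```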
Show Figures

Figure 1

15 pages, 1188 KiB  
Review
Application of Biological Domain Knowledge Based Feature Selection on Gene Expression Data
by Malik Yousef, Abhishek Kumar and Burcu Bakir-Gungor
Entropy 2021, 23(1), 2; https://doi.org/10.3390/e23010002 - 22 Dec 2020
Cited by 39 | Viewed by 5065
Abstract
In the last two decades, there have been massive advancements in high throughput technologies, which resulted in the exponential growth of public repositories of gene expression datasets for various phenotypes. It is possible to unravel biomarkers by comparing the gene expression levels under [...] Read more.
In the last two decades, there have been massive advancements in high-throughput technologies, which resulted in the exponential growth of public repositories of gene expression datasets for various phenotypes. It is possible to unravel biomarkers by comparing the gene expression levels under different conditions, such as disease vs. control, treated vs. not treated, drug A vs. drug B, etc. This corresponds to a well-studied problem in the machine learning domain, i.e., the feature selection problem. In biological data analysis, most of the computational feature selection methodologies were taken from other fields, without considering the nature of the biological data. Thus, integrative approaches that utilize biological knowledge while performing feature selection are necessary for this kind of data. The main idea behind the integrative gene selection process is to generate a ranked list of genes considering both the statistical metrics that are applied to the gene expression data, and the biological background information which is provided as external datasets. One of the main goals of this review is to explore the existing methods that integrate different types of information in order to improve the identification of the biomolecular signatures of diseases and the discovery of new potential targets for treatment. These integrative approaches are expected to aid the prediction, diagnosis, and treatment of diseases, as well as to shed light on disease-state dynamics and the mechanisms of disease onset and progression. The integration of various types of biological information will necessitate the development of novel techniques for integration and data analysis. Another aim of this review is to encourage the bioinformatics community to develop new approaches for searching and determining significant groups/clusters of features based on one or more biological grouping functions. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
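As a toy rendering of the integrative gene selection process described above (entirely hypothetical scoring, for illustration only): combine a normalized statistical score from the expression data with an external biological-knowledge score, and rank genes by the blend.

```python
# Hypothetical integrative ranking (illustration only, not a method from
# the review): blend a statistical score with a biological-knowledge score.
import numpy as np

def integrative_rank(stat_score, bio_score, w=0.5):
    """Rank genes by a weighted combination of two min-max normalized scores."""
    s = (stat_score - stat_score.min()) / np.ptp(stat_score)
    b = (bio_score - bio_score.min()) / np.ptp(bio_score)
    combined = w * s + (1 - w) * b
    return np.argsort(combined)[::-1]          # best genes first

stat = np.array([2.3, 0.4, 1.8, 3.1])          # e.g., |t|-statistics per gene
bio = np.array([0.9, 0.2, 0.7, 0.1])           # e.g., pathway-membership weight
print(integrative_rank(stat, bio))
```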
Show Figures

Figure 1

34 pages, 852 KiB  
Article
Generalised Geometric Brownian Motion: Theory and Applications to Option Pricing
by Viktor Stojkoski, Trifce Sandev, Lasko Basnarkov, Ljupco Kocarev and Ralf Metzler
Entropy 2020, 22(12), 1432; https://doi.org/10.3390/e22121432 - 18 Dec 2020
Cited by 37 | Viewed by 7344
Abstract
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggest that a simple GBM trajectory is not an adequate representation for asset dynamics, due to irregularities found when [...] Read more.
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggest that a simple GBM trajectory is not an adequate representation for asset dynamics, due to irregularities found when comparing its properties with empirical distributions. As a solution, we investigate a generalisation of GBM where the introduction of a memory kernel critically determines the behaviour of the stochastic process. We find the general expressions for the moments, log-moments, and the expectation of the periodic log returns, and then obtain the corresponding probability density functions using the subordination approach. Particularly, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We utilise the resulting generalised GBM (gGBM) in order to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness. Full article
(This article belongs to the Special Issue New Trends in Random Walks)
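For a concrete baseline, the sketch below Monte Carlo prices a European call under classical GBM, the model the paper generalizes; the gGBM variants replace the Brownian clock with a memory kernel via subordination, which is beyond this small example. Parameters are illustrative.

```python
# Baseline only (classical GBM, not the paper's gGBM kernels): Monte Carlo
# price of a European call, E[e^{-rT} max(S_T - K, 0)].
import numpy as np

def mc_call_price(s0, k, r, sigma, T, n_paths=200_000, seed=0):
    """S_T = s0 * exp((r - sigma^2/2) T + sigma W_T) under the risk-neutral measure."""
    rng = np.random.default_rng(seed)
    w_t = np.sqrt(T) * rng.standard_normal(n_paths)
    s_t = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * w_t)
    return np.exp(-r * T) * np.maximum(s_t - k, 0.0).mean()

print(mc_call_price(s0=100, k=105, r=0.01, sigma=0.2, T=1.0))  # ~ Black-Scholes value
```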
Show Figures

Figure 1

14 pages, 1523 KiB  
Article
Examining the Causal Structures of Deep Neural Networks Using Information Theory
by Scythia Marrow, Eric J. Michaud and Erik Hoel
Entropy 2020, 22(12), 1429; https://doi.org/10.3390/e22121429 - 18 Dec 2020
Cited by 3 | Viewed by 4643
Abstract
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the [...] Read more.
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the “causal plane”, which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable. Full article
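A toy version of the layer measure introduced here, under our own simplifications: effective information is estimated as the empirical mutual information between a maximum-entropy (uniform) input and the layer's output, shown for a tiny binary AND unit. The estimator and the unit are illustrative, not the paper's DNN experiments.

```python
# A toy estimator (our simplification of the paper's measure): EI as the
# empirical mutual information between uniform binary inputs and outputs.
import numpy as np
from collections import Counter

def effective_information(layer, n_in, n_samples=100_000, seed=0):
    """Empirical MI between uniform binary inputs and the layer's outputs."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_samples, n_in))
    y = np.array([layer(row) for row in x])
    joint = Counter(zip(map(tuple, x), y))
    px, py = Counter(map(tuple, x)), Counter(y)
    mi = 0.0
    for (xv, yv), c in joint.items():
        p_xy = c / n_samples
        mi += p_xy * np.log2(p_xy * n_samples ** 2 / (px[xv] * py[yv]))
    return mi

# A 2-input threshold unit that fires iff both inputs are on (AND gate).
print(effective_information(lambda r: int(r.sum() == 2), n_in=2))  # ~0.81 bits
```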
Show Figures

Figure 1

26 pages, 11199 KiB  
Article
A Comprehensive Framework for Uncovering Non-Linearity and Chaos in Financial Markets: Empirical Evidence for Four Major Stock Market Indices
by Lucia Inglada-Perez
Entropy 2020, 22(12), 1435; https://doi.org/10.3390/e22121435 - 18 Dec 2020
Cited by 15 | Viewed by 3230
Abstract
The presence of chaos in the financial markets has been the subject of a great number of studies, but the results have been contradictory and inconclusive. This research tests for the existence of nonlinear patterns and chaotic nature in four major stock market [...] Read more.
The presence of chaos in financial markets has been the subject of a great number of studies, but the results have been contradictory and inconclusive. This research tests for the existence of nonlinear patterns and chaotic behavior in four major stock market indices: namely, the Dow Jones Industrial Average, Ibex 35, Nasdaq-100 and Nikkei 225. To this end, a comprehensive framework has been adopted encompassing a wide range of techniques and the most suitable methods for the analysis of noisy time series. Using daily closing values from January 1992 to July 2013, this study employs twelve techniques and tools, of which five are specific to detecting chaos. The findings show no clear evidence of chaos, suggesting that the behavior of financial markets is nonlinear and stochastic. Full article
(This article belongs to the Special Issue Complexity in Economic and Social Systems)
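One family of tools used in such analyses can be sketched concisely: the Grassberger–Procaccia correlation sum on a time-delay embedding, which underlies the correlation-dimension estimates that help separate low-dimensional chaos from stochasticity. The data and parameters below are illustrative, not the paper's market series.

```python
# Illustrative tool (not the paper's data): Grassberger-Procaccia
# correlation sum C(r) on a delay embedding; its scaling in r is the basis
# of correlation-dimension estimates used to probe for chaos.
import numpy as np

def correlation_sum(series, m, tau, r):
    """Fraction of embedded point pairs closer than r (m-dim embedding, lag tau)."""
    n = len(series) - (m - 1) * tau
    emb = np.column_stack([series[i * tau: i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return (d[iu] < r).mean()

# Chaotic logistic-map data vs. Gaussian noise: C(r) scales differently.
x = np.empty(1500); x[0] = 0.3
for i in range(1499):
    x[i + 1] = 4 * x[i] * (1 - x[i])
noise = np.random.default_rng(4).standard_normal(1500)
for r in (0.05, 0.1):
    print(r, correlation_sum(x, 2, 1, r), correlation_sum(noise, 2, 1, r))
```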
Show Figures

Figure 1

20 pages, 357 KiB  
Article
Foundations of the Quaternion Quantum Mechanics
by Marek Danielewski and Lucjan Sapa
Entropy 2020, 22(12), 1424; https://doi.org/10.3390/e22121424 - 17 Dec 2020
Cited by 8 | Viewed by 4773
Abstract
We show that quaternion quantum mechanics has well-founded mathematical roots and can be derived from the model of the elastic continuum by French mathematician Augustin Cauchy, i.e., it can be regarded as representing the physical reality of elastic continuum. Starting from the Cauchy [...] Read more.
We show that quaternion quantum mechanics has well-founded mathematical roots and can be derived from the model of the elastic continuum by the French mathematician Augustin Cauchy, i.e., it can be regarded as representing the physical reality of the elastic continuum. Starting from the Cauchy theory (classical balance equations for isotropic Cauchy-elastic material) and using the Hamilton quaternion algebra, we present a rigorous derivation of the quaternion form of the non-relativistic and relativistic wave equations. The family of wave equations and the Poisson equation are a straightforward consequence of the quaternion representation of the Cauchy model of the elastic continuum. This is the most general kind of quantum mechanics possessing the same kind of calculus of assertions as conventional quantum mechanics. The problem of where the imaginary 'i' in the Schrödinger equation should emerge is solved. This interpretation is a serious attempt to describe the ontology of quantum mechanics and demonstrates that, besides Bohmian mechanics, complete ontological interpretations of quantum theory exist. The model can be generalized and falsified; to allow this, we specify problems that would expose its falsity. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
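For readers unfamiliar with the algebra, a short check of Hamilton's quaternion product, the structure underlying the paper's quaternionic wave equations (i² = j² = k² = ijk = −1):

```python
# Hamilton's quaternion product (standard algebra, our small check).
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

i, j = np.array([0, 1, 0, 0]), np.array([0, 0, 1, 0])
print(qmul(i, i))   # -> (-1, 0, 0, 0), i.e. i^2 = -1
print(qmul(i, j))   # -> (0, 0, 0, 1),  i.e. ij = k
```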
14 pages, 2346 KiB  
Article
Artificial Intelligence for Modeling Real Estate Price Using Call Detail Records and Hybrid Machine Learning Approach
by Gergo Pinter, Amir Mosavi and Imre Felde
Entropy 2020, 22(12), 1421; https://doi.org/10.3390/e22121421 - 16 Dec 2020
Cited by 25 | Viewed by 4764
Abstract
Advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to the significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel [...] Read more.
Advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to the significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDRs) provide excellent opportunities for in-depth investigation of mobility characteristics. This study explores the potential of CDRs for predicting the real estate price with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers' entropy, worker gyration, dwellers' work distance, and workers' home distance, are used as input variables. The prediction model is developed using the machine learning method of the multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), sustainability index (SI), and Willmott's index (WI). The proposed model showed promising results, revealing that the workers' entropy and the dwellers' work distances directly influence the real estate price. However, the dweller gyration, dweller entropy, workers' gyration, and the workers' home distance had minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices. Full article
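A minimal sketch of the hybrid MLP-PSO scheme (assumed toy data and hyperparameters, not the paper's CDR features): particle swarm optimization searches the weight vector of a small MLP regressor directly, with the training MSE as the fitness.

```python
# Assumed toy setup (not the paper's features or hyperparameters): PSO
# optimizing the weight vector of a small tanh MLP regressor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))            # toy stand-ins for mobility features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy "price" target

def mlp_mse(w, n_h=8):
    """MSE of a 3-n_h-1 tanh MLP whose weights are unpacked from vector w."""
    W1 = w[:3 * n_h].reshape(3, n_h); b1 = w[3 * n_h:4 * n_h]
    W2 = w[4 * n_h:5 * n_h];          b2 = w[5 * n_h]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return ((pred - y) ** 2).mean()

dim, n_particles = 5 * 8 + 1, 30
pos = rng.standard_normal((n_particles, dim)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mlp_mse(p) for p in pos])
for it in range(200):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mlp_mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
print(pbest_f.min())   # training MSE reached by the swarm
```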
Show Figures

Figure 1

25 pages, 7942 KiB  
Article
Statistical Features in High-Frequency Bands of Interictal iEEG Work Efficiently in Identifying the Seizure Onset Zone in Patients with Focal Epilepsy
by Most. Sheuli Akter, Md. Rabiul Islam, Toshihisa Tanaka, Yasushi Iimura, Takumi Mitsuhashi, Hidenori Sugano, Duo Wang and Md. Khademul Islam Molla
Entropy 2020, 22(12), 1415; https://doi.org/10.3390/e22121415 - 15 Dec 2020
Cited by 13 | Viewed by 3820
Abstract
The design of a computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study aims to introduce the statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify the [...] Read more.
The design of a computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study aims to introduce statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify possible SOZ channels. It is known that the activity of HFCs in interictal iEEGs, including the ripple and fast-ripple bands, is associated with epileptic seizures. This paper proposes decomposing multi-channel interictal iEEG signals into a number of subbands. For every 20 s segment, twelve features are computed from each subband. A mutual information (MI)-based method with grid search was applied to select the most prominent bands and features. A gradient-boosting decision tree-based algorithm called LightGBM was used to score each segment of the channels, and these scores were averaged to achieve a final score for each channel. The possible SOZ channels were localized based on the channels with the highest scores. Experiments with eleven epilepsy patients were conducted to assess the efficiency of the proposed design in comparison with state-of-the-art methods. Full article
(This article belongs to the Section Signal and Data Analysis)
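A minimal sketch of the assumed pipeline, heavily simplified and run on synthetic data: mutual-information screening of subband features followed by LightGBM scoring of segments, with per-channel averaging to rank candidate SOZ channels. Assumes scikit-learn and lightgbm are installed; all shapes and labels are illustrative.

```python
# Assumed, simplified pipeline (synthetic data, not the patients' iEEG):
# MI-based feature screening, LightGBM segment scoring, channel averaging.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 12))   # 12 subband features per 20 s segment
y = rng.integers(0, 2, 400)          # segment labels (SOZ vs. non-SOZ channel)
channel = rng.integers(0, 10, 400)   # which channel each segment came from

mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-6:]           # keep the most informative features

clf = LGBMClassifier(n_estimators=100).fit(X[:, keep], y)
seg_scores = clf.predict_proba(X[:, keep])[:, 1]
chan_scores = [seg_scores[channel == c].mean() for c in range(10)]
print(np.argsort(chan_scores)[::-1]) # channels ranked as candidate SOZ
```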
Show Figures

Figure 1
