Entropy, Volume 22, Issue 10 (October 2020) – 125 articles

Cover Story (view full-size image): Thermodynamics was discovered by studying the movements of heat. The physics and mathematics of the 20th century have widened its interpretation, however, placing less dependence on conservation laws and highlighting the concept of macrostate and fluctuation accessibility in the large-deviation sense as the core concepts of the theory. A third key concept is the decomposition of irreversibility into contributions arising within a system and those arising from its environment. Remarkably, a decomposition exists that separates the intrinsic thermodynamics of the system from the extrinsic thermodynamics that embed the system into its environment. These elements are combined for population processes with multiple timescales, to give a theory that is about the emergence of macro-worlds, without special reference to conservation or time-reversibility. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 9166 KiB  
Article
Thermally Driven Flow of Water in Partially Heated Tall Vertical Concentric Annulus
by Jawed Mustafa, Saeed Alqaed and Mohammad Altamush Siddiqui
Entropy 2020, 22(10), 1189; https://doi.org/10.3390/e22101189 - 21 Oct 2020
Cited by 17 | Viewed by 2898
Abstract
Computational fluid dynamics (CFD) has become effective and crucial to several applications in science and engineering. The dynamic behavior of buoyancy-induced flow of water in a partially heated, tall, open-ended vertical annulus is analyzed using a CFD technique. Natural convective heat transfer in a vertical annulus has broad engineering applications: the annulus is the most common structure used in heat transmission systems, from basic heat transfer devices to the most sophisticated nuclear reactors, and annular test sections of such large aspect ratio are of practical importance in the design of equipment associated with reactor systems. Depending on the geometrical structure and heating conditions, however, the annulus exhibits different flow behavior, and it may be either closed or open-ended. In this study, we carry out a CFD analysis to examine the thermodynamic properties and the detailed thermally induced flow behavior of water in tall open-ended vertical concentric annuli. The purpose of this study is to evaluate the impact of partial heating on mechanical properties and design parameters such as the Nusselt number, mass flow rate, and pressure defect. The numerical solution was obtained for Rayleigh numbers ranging from 4.4 × 10³ to 6.6 × 10⁴ at a Prandtl number of 6.43. Modelling results showing the transient behavior of the different parameters are presented, and the numerical results are validated both qualitatively and quantitatively. Unsteady-state profiles and thermal variables along the annulus are also discussed. Full article
(This article belongs to the Special Issue Reliability of Modern Electro-Mechanical Systems)

15 pages, 1417 KiB  
Article
Revised Stability Scales of the Postural Stability Index for Human Daily Activities
by Yu Ping Chang, Bernard C. Jiang and Nurul Retno Nurwulan
Entropy 2020, 22(10), 1188; https://doi.org/10.3390/e22101188 - 21 Oct 2020
Cited by 2 | Viewed by 1999
Abstract
Evaluation of human postural stability is important for preventing falls, and recent studies have developed postural stability evaluations aimed at fall prevention. The postural stability index (PSI) was proposed as a measure for evaluating the stability of human postures during daily activities. The objective of this study was to use the PSI to develop stability scales for human daily activities. The current study used two open datasets collected from mobile devices. In addition, we conducted three experiments to evaluate the effect of age, velocity, step counts, and devices on PSI values. The collected datasets were preprocessed using ensemble empirical mode decomposition (EEMD), and the complexity index of each intrinsic mode function (IMF) was then calculated using multiscale entropy (MSE). From the evaluation, it can be concluded that the PSI can be applied to daily monitoring of postural stability for both young and older adults, and that the PSI is not affected by age. The revised stability scales developed in this study give better suggestions to users than the original scales. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications II)
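The abstract above computes multiscale entropy (MSE) over EEMD-decomposed signals. A minimal sketch of the MSE computation itself, i.e. sample entropy of coarse-grained series; the parameter choices here (m = 2, tolerance 0.15 × std, 3 scales) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A counts matches of length m+1, within tolerance r."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (avoids double counting)
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3), m=2):
    """Coarse-grain the signal by non-overlapping averaging at each scale,
    then compute the sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    r = 0.15 * np.std(x)  # tolerance fixed from the original signal
    result = []
    for s in scales:
        cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        result.append(sample_entropy(cg, m=m, r=r))
    return result
```

Summing (or averaging) the per-scale values yields the kind of complexity index the study derives per IMF.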

18 pages, 3292 KiB  
Article
Entropy Rules: Molecular Dynamics Simulations of Model Oligomers for Thermoresponsive Polymers
by Alexander Kantardjiev and Petko M. Ivanov
Entropy 2020, 22(10), 1187; https://doi.org/10.3390/e22101187 - 21 Oct 2020
Cited by 8 | Viewed by 2834
Abstract
We attempted to attain atomic-scale insight into the mechanism of the heat-induced phase transition of two thermoresponsive polymers containing amide groups, poly(N-isopropylacrylamide) (PNIPAM) and poly(2-isopropyl-2-oxazoline) (PIPOZ), and we succeeded in reproducing the existence of a lower critical solution temperature (LCST). The simulation data are in accord with experimental findings. We found that entropy makes an important contribution to the thermodynamics of the phase separation transition. Moreover, after further decomposing the entropy change into contributions from the solutes and from the solvent, it turned out that the entropy of the solvent makes the decisive contribution to the lowering of the free energy of the system as the temperature increases above the LCST. Our conclusion is that the thermoresponsive behavior is driven by the entropy of the solvent. The water molecules structured around the functional groups of the polymer that are exposed to the solvent in the extended conformation lower the enthalpy of the system, but at a certain temperature the extended conformation of the polymer collapses as a result of the dominating entropy gain from "released" water molecules. We also stress the importance of using more than one reference molecule in the simulation box when setting up the simulation. Full article
(This article belongs to the Section Statistical Physics)

28 pages, 5650 KiB  
Article
Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences
by Ranjana Koshy and Ausif Mahmood
Entropy 2020, 22(10), 1186; https://doi.org/10.3390/e22101186 - 21 Oct 2020
Cited by 9 | Viewed by 4030
Abstract
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions in which nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image, which enhances the edges and surface texture and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, face liveness detection can be accomplished in real time. We achieve promising face liveness detection accuracies of 96.03% and 96.21% with the SCNN, and 94.77% and 95.53% with Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses diffusion of the images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Even though the use of a CNN followed by an LSTM is not new, combining it with diffusion (which has proven to be the best approach for single-image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and a 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and a 5.28% HTER. Full article

24 pages, 1378 KiB  
Article
Quantum Measurements with, and Yet without an Observer
by Dmitri Sokolovski
Entropy 2020, 22(10), 1185; https://doi.org/10.3390/e22101185 - 21 Oct 2020
Cited by 6 | Viewed by 2482
Abstract
It is argued that Feynman’s rules for evaluating probabilities, combined with von Neumann’s principle of psycho-physical parallelism, help avoid inconsistencies often associated with quantum theory. The former allow one to assign probabilities to entire sequences of hypothetical Observers’ experiences without mentioning the problem of wave function collapse. The latter limits the Observer’s (e.g., Wigner’s friend’s) participation in a measurement to the changes produced in material objects, thus leaving his/her consciousness out of the picture. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)

18 pages, 575 KiB  
Article
Two-Qubit Entanglement Generation through Non-Hermitian Hamiltonians Induced by Repeated Measurements on an Ancilla
by Roberto Grimaudo, Antonino Messina, Alessandro Sergi, Nikolay V. Vitanov and Sergey N. Filippov
Entropy 2020, 22(10), 1184; https://doi.org/10.3390/e22101184 - 20 Oct 2020
Cited by 18 | Viewed by 3359
Abstract
In contrast to classical systems, actual implementation of non-Hermitian Hamiltonian dynamics for quantum systems is a challenge because the processes of energy gain and dissipation are based on the underlying Hermitian system–environment dynamics, which are trace preserving. Recently, a scheme for engineering non-Hermitian Hamiltonians as a result of repetitive measurements on an ancillary qubit has been proposed. The induced conditional dynamics of the main system is described by the effective non-Hermitian Hamiltonian arising from the procedure. In this paper, we demonstrate the effectiveness of such a protocol by applying it to physically relevant multi-spin models, showing that the effective non-Hermitian Hamiltonian drives the system to a maximally entangled stationary state. In addition, we report a new recipe to construct a physical scenario where the quantum dynamics of a physical system represented by a given non-Hermitian Hamiltonian model may be simulated. The physical implications and the broad scope potential applications of such a scheme are highlighted. Full article
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)

26 pages, 783 KiB  
Article
Behavioural Effects and Market Dynamics in Field and Laboratory Experimental Asset Markets
by Sandra Andraszewicz, Ke Wu and Didier Sornette
Entropy 2020, 22(10), 1183; https://doi.org/10.3390/e22101183 - 20 Oct 2020
Viewed by 2639
Abstract
A vast literature investigating the behavioural underpinnings of financial bubbles and crashes relies on laboratory experiments. However, it is not yet clear how findings generated in a highly artificial environment relate to human behaviour in the wild, and there is concern that the laboratory setting may create a confounding variable that impacts the experimental results. To explore the similarities and differences between human behaviour in the laboratory and in a realistic natural setting with the same type of participants, we translate a field study conducted by Sornette et al. (Econ. E-J. 2020, 14, 1–53), with trading rounds each lasting six full days, into a laboratory experiment lasting two hours. The laboratory experiment replicates the key findings from the field study, but we observe substantial differences in the market dynamics between the two settings. The replication of the results in the two distinct settings indicates that relaxing some of the laboratory control does not corrupt the main findings, while at the same time it offers several advantages, such as the possibility of increasing the number of participants interacting with each other at the same time and the number of traded securities. These findings offer important insights for future experiments investigating human behaviour in complex systems. Full article

17 pages, 1811 KiB  
Article
Information–Theoretic Radar Waveform Design under the SINR Constraint
by Yu Xiao, Zhenghong Deng and Tao Wu
Entropy 2020, 22(10), 1182; https://doi.org/10.3390/e22101182 - 20 Oct 2020
Cited by 6 | Viewed by 1946
Abstract
This study investigates the information-theoretic waveform design problem for improving radar performance in signal-dependent clutter environments. The goal is to study waveform energy allocation strategies and provide guidance for radar waveform design through the trade-off relationship between the information theory criterion and the signal-to-interference-plus-noise ratio (SINR) criterion. To this end, a model of the constraint relationship among the mutual information (MI), the Kullback–Leibler divergence (KLD), and the SINR is established in the frequency domain. The effects of the SINR value range on maximizing the MI and KLD under the energy constraint are derived. Under the constraints of energy and the SINR, an optimal radar waveform method based on maximizing the MI is proposed for radar estimation, and another method based on maximizing the KLD is proposed for radar detection. The maximum MI value range is bounded by the SINR, and the maximum KLD value range lies between 0 and the Jensen–Shannon divergence (J-divergence) value. Simulation results show that, under the SINR constraint, the MI-based optimal waveform makes full use of the transmitted energy for target information extraction, placing the signal energy in the frequency bins where the target spectrum exceeds the clutter spectrum. The KLD-based optimal waveform makes full use of the transmitted energy for target detection, placing the signal energy in the frequency bin with the maximum target spectrum. Full article
(This article belongs to the Special Issue Information Theory, Forecasting, and Hypothesis Testing)

26 pages, 456 KiB  
Article
The Smoluchowski Ensemble—Statistical Mechanics of Aggregation
by Themis Matsoukas
Entropy 2020, 22(10), 1181; https://doi.org/10.3390/e22101181 - 20 Oct 2020
Cited by 5 | Viewed by 3509
Abstract
We present a rigorous thermodynamic treatment of irreversible binary aggregation. We construct the Smoluchowski ensemble as the set of discrete finite distributions that are reached in a fixed number of merging events and define a probability measure on this ensemble, such that the mean distribution in the mean-field approximation is governed by the Smoluchowski equation. In the scaling limit, this ensemble gives rise to a set of relationships identical to those of familiar statistical thermodynamics. The central element of the thermodynamic treatment is the selection functional, a functional of feasible distributions that connects the probability of a distribution to the details of the aggregation model. We obtain scaling expressions for general kernels and closed-form results for the special cases of the constant, sum, and product kernels. We study the stability of the most probable distribution, provide criteria for the sol–gel transition, and obtain the distribution in the post-gel region by simple thermodynamic arguments. Full article
(This article belongs to the Special Issue Generalized Statistical Thermodynamics)
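The ensemble described above is built from trajectories of merging events. A minimal Monte Carlo sketch of the underlying constant-kernel aggregation (Marcus–Lushnikov) process, in which any pair of clusters is equally likely to merge; the parameter values are illustrative, not taken from the paper:

```python
import random

def smoluchowski_mc(n_monomers=1000, n_events=900, seed=1):
    """Constant-kernel aggregation: start from n_monomers unit clusters
    and, at each event, pick two clusters uniformly at random and merge
    them. After n_events merges, n_monomers - n_events clusters remain."""
    random.seed(seed)
    clusters = [1] * n_monomers
    for _ in range(n_events):
        i, j = random.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        # remove the two chosen clusters, append the merged one
        for k in sorted((i, j), reverse=True):
            clusters.pop(k)
        clusters.append(merged)
    return clusters
```

Each run yields one member of the ensemble of distributions reached after a fixed number of merging events; total mass is conserved throughout.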

20 pages, 334 KiB  
Article
Unifying Aspects of Generalized Calculus
by Marek Czachor
Entropy 2020, 22(10), 1180; https://doi.org/10.3390/e22101180 - 19 Oct 2020
Cited by 13 | Viewed by 3058
Abstract
Non-Newtonian calculus naturally unifies various ideas that have occurred over the years in the field of generalized thermostatistics, or in the borderland between classical and quantum information theory. The formalism, while very general, is as simple as the calculus we know from undergraduate mathematics courses. Its theoretical potential is huge, and yet it remains unknown or unappreciated. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy)

23 pages, 3754 KiB  
Article
Space Emerges from What We Know—Spatial Categorisations Induced by Information Constraints
by Nicola Catenacci Volpi and Daniel Polani
Entropy 2020, 22(10), 1179; https://doi.org/10.3390/e22101179 - 19 Oct 2020
Cited by 2 | Viewed by 2678
Abstract
Goal-seeking carried out by agents with a level of competency requires an “understanding” of the structure of their world. While abstract formal descriptions of a world’s structure in terms of geometric axioms can be formulated in principle, it is not likely that this is the representation actually employed by biological organisms, or the one that should be used by biologically plausible models. Instead, we operate on the assumption that biological organisms are constrained in their information processing capacities, an assumption which has in the past led to a number of insightful hypotheses and models for biologically plausible behaviour generation. Here we use this approach to study various types of spatial categorisations that emerge through such informational constraints imposed on embodied agents. We will see that geometrically rich spatial representations emerge when agents employ a trade-off between minimising the Shannon information used to describe locations within the environment and reducing the location error generated by the resulting approximate spatial description. In addition, agents do not always need to construct these representations from the ground up; they can obtain them by refining less precise spatial descriptions constructed previously. Importantly, we find that these can be optimal at both steps of refinement, as guaranteed by the successive refinement principle from information theory. Finally, clusters induced by these spatial representations via the information bottleneck method are able to reflect the environment’s topology without relying on an explicit geometric description of the environment’s structure. Our findings suggest that the fundamental geometric notions possessed by natural agents do not need to be part of their a priori knowledge but could emerge as a byproduct of the pressure to process information parsimoniously. Full article

4 pages, 178 KiB  
Editorial
Entropy Based Fatigue, Fracture, Failure Prediction and Structural Health Monitoring
by Cemal Basaran
Entropy 2020, 22(10), 1178; https://doi.org/10.3390/e22101178 - 19 Oct 2020
Cited by 7 | Viewed by 2263
Abstract
This special issue is dedicated to entropy-based fatigue, fracture, failure prediction and structural health monitoring[...] Full article
33 pages, 3460 KiB  
Article
Innovativeness of Industrial Processing Enterprises and Conjunctural Movement
by Aleksander Jakimowicz and Daniel Rzeczkowski
Entropy 2020, 22(10), 1177; https://doi.org/10.3390/e22101177 - 19 Oct 2020
Cited by 5 | Viewed by 2141
Abstract
Isolating the components that determine the innovative activity of enterprises is a complex issue, as it depends on both microeconomic and macroeconomic factors. The purpose of this article is to present the results of research on the impact of the mutual interactions between ownership and company size on the achievement of innovative-activity objectives by Polish industrial processing enterprises under changing cyclical conditions. The importance of innovation barriers was also assessed. Empirical data came from three periods covering different phases of the business cycle: prosperity (2004–2006), the global financial crisis (2008–2010), and recovery (2012–2014). The research used a cybernetic approach based on feedback loops representing interactions between variables. In addition, two statistical methods were used: Pearson’s χ2 independence test and correspondence analysis. The following discoveries were made during the research: (1) considering the combined impact of ownership and company size on innovation activities makes it possible to study phenomena that may be overlooked if the impact of these factors is considered separately; (2) public enterprises achieve significantly worse innovation results than companies from other ownership sectors; (3) the Red Queen effect, which assumes that the best innovative enterprises exert selection pressure on all other companies, applies to industrial processing companies, and in particular to public enterprises; (4) the industrial processing section is more sensitive to secular trends than to cyclical fluctuations; (5) the occurrence of the Polish Green Island effect, whereby companies achieve good innovation results irrespective of the phase of the business cycle, is confirmed; and (6) statistical evidence is provided that the global financial crisis may be associated with the turn of the Fifth and Sixth Kondratieff waves. Most likely, the role of the communication channel between the world economy and the Polish manufacturing section is fulfilled by foreign ownership, whose share of this section’s share capital is estimated at 50%. Full article
(This article belongs to the Special Issue Complexity in Economic and Social Systems)

16 pages, 9038 KiB  
Article
Transfer Entropy Analysis of Interactions between Bats Using Position and Echolocation Data
by Irena Shaffer and Nicole Abaid
Entropy 2020, 22(10), 1176; https://doi.org/10.3390/e22101176 - 19 Oct 2020
Cited by 4 | Viewed by 2040
Abstract
Many animal species, including many species of bats, exhibit collective behavior where groups of individuals coordinate their motion. Bats are unique among these animals in that they use the active sensing mechanism of echolocation as their primary means of navigation. Due to their use of echolocation in large groups, bats run the risk of signal interference from sonar jamming. However, several species of bats have developed strategies to prevent interference, which may lead to different behavior when flying with conspecifics than when flying alone. This study seeks to explore the role of this acoustic sensing on the behavior of bat pairs flying together. Field data from a maternity colony of gray bats (Myotis grisescens) were collected using an array of cameras and microphones. These data were analyzed using the information theoretic measure of transfer entropy in order to quantify the interaction between pairs of bats and to determine the effect echolocation calls have on this interaction. This study expands on previous work that only computed information theoretic measures on the 3D position of bats without echolocation calls or that looked at the echolocation calls without using information theoretic analyses. Results show that there is evidence of information transfer between bats flying in pairs when time series for the speed of the bats and their turning behavior are used in the analysis. Unidirectional information transfer was found in some subsets of the data which could be evidence of a leader–follower interaction. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
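The interaction measure used above, transfer entropy, can be sketched with a plug-in estimator on discretized time series: TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) · log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. The history length of one step and the quantile binning below are simplifying assumptions, not the paper's estimator settings:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """Plug-in transfer entropy (bits) from source to target, with
    one-step histories and quantile discretization into `bins` symbols."""
    def discretize(series):
        edges = np.quantile(series, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(series, edges)

    s, t = discretize(source), discretize(target)
    n = len(t) - 1
    triples = Counter(zip(t[1:], t[:-1], s[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_tt = Counter(zip(t[1:], t[:-1]))          # (y_{t+1}, y_t)
    pairs_ts = Counter(zip(t[:-1], s[:-1]))         # (y_t, x_t)
    singles = Counter(t[:-1])                       # y_t
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        # p(y1|y0,x0)/p(y1|y0) = c * N(y0) / (N(y1,y0) * N(y0,x0))
        te += (c / n) * np.log2(c * singles[t0]
                                / (pairs_tt[(t1, t0)] * pairs_ts[(t0, s0)]))
    return te
```

An asymmetry such as TE(X→Y) ≫ TE(Y→X) is the kind of unidirectional information transfer the study interprets as evidence of a leader–follower interaction.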

10 pages, 882 KiB  
Article
Digital Quantum Simulation of Nonadiabatic Geometric Gates via Shortcuts to Adiabaticity
by Yapeng Wang, Yongcheng Ding, Jianan Wang and Xi Chen
Entropy 2020, 22(10), 1175; https://doi.org/10.3390/e22101175 - 19 Oct 2020
Cited by 3 | Viewed by 2595
Abstract
Geometric phases are used to construct quantum gates because they naturally resist local noise, acting as the modularized units of geometric quantum computing. Meanwhile, fast nonadiabatic geometric gates are required to reduce the information loss induced by decoherence. Here, we propose a digital simulation of nonadiabatic geometric quantum gates in terms of shortcuts to adiabaticity (STA). More specifically, we combine invariant-based inverse engineering with optimal control theory to design fast Abelian geometric gates that are robust against systematic error, in the context of two-level qubit systems. We exemplify X and T gates, whose fidelities and robustness are evaluated by simulations in ideal quantum circuits. Our results can also be extended to constructing two-qubit gates, for example a controlled-PHASE gate, which shares the equivalent effective Hamiltonian with rotation around the Z-axis of a single qubit. These STA-inspired nonadiabatic geometric gates can realize quantum error correction physically, leading to fault-tolerant quantum computing in the Noisy Intermediate-Scale Quantum (NISQ) era. Full article
(This article belongs to the Special Issue Shortcuts to Adiabaticity)

49 pages, 11201 KiB  
Review
Salient Object Detection Techniques in Computer Vision—A Survey
by Ashish Kumar Gupta, Ayan Seal, Mukesh Prasad and Pritee Khanna
Entropy 2020, 22(10), 1174; https://doi.org/10.3390/e22101174 - 19 Oct 2020
Cited by 56 | Viewed by 12735
Abstract
Detection and localization of the regions of an image that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability to automatically identify and segment such salient image regions has immediate consequences for applications in computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly categorized by their feature engineering mechanism as conventional or deep learning-based. In this survey, most of the influential advances in image-based SOD, from both the conventional and the deep learning-based categories, are reviewed in detail. Relevant saliency modeling trends, with key issues, core techniques, and the scope for future research work, are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets, and the different metrics used to assess the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end. Full article
(This article belongs to the Special Issue Information Theory-Based Deep Learning Tools for Computer Vision)
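One of the standard metrics covered by such surveys is the mean absolute error (MAE) between a predicted saliency map and the binary ground-truth mask. A minimal sketch; the maps and values below are made up for illustration:

```python
def saliency_mae(pred, gt):
    """Mean absolute error between a predicted saliency map (values in [0, 1])
    and the binary ground-truth mask, a common SOD evaluation metric."""
    flat_p = [p for row in pred for p in row]
    flat_g = [g for row in gt for g in row]
    return sum(abs(p - g) for p, g in zip(flat_p, flat_g)) / len(flat_p)

# Toy 2x2 example: lower MAE means the prediction matches the mask better.
pred = [[0.9, 0.1], [0.2, 0.8]]
gt   = [[1,   0  ], [0,   1  ]]
print(saliency_mae(pred, gt))  # ≈ 0.15
```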
15 pages, 4487 KiB  
Article
Application of Positional Entropy to Fast Shannon Entropy Estimation for Samples of Digital Signals
by Marcin Cholewa and Bartłomiej Płaczek
Entropy 2020, 22(10), 1173; https://doi.org/10.3390/e22101173 - 19 Oct 2020
Cited by 4 | Viewed by 2172
Abstract
This paper introduces a new method of estimating Shannon entropy. The proposed method can be successfully used for large data samples and enables fast computations to rank the data samples according to their Shannon entropy. Original definitions of positional entropy and integer entropy are discussed in detail to explain the theoretical concepts that underpin the proposed approach. Relations between positional entropy, integer entropy and Shannon entropy were demonstrated through computational experiments. The usefulness of the introduced method was experimentally verified for various data samples of different types and sizes. The experimental results clearly show that the proposed approach can be successfully used for fast entropy estimation. The analysis was also focused on the quality of the entropy estimation. Several possible implementations of the proposed method were discussed. The presented algorithms were compared with the existing solutions. It was demonstrated that the algorithms presented in this paper estimate the Shannon entropy faster and more accurately than the state-of-the-art algorithms. Full article
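The paper's positional- and integer-entropy constructions are not reproduced here, but the quantity being estimated and the ranking use case can be sketched with the plain plug-in estimator; the sample names and values below are illustrative:

```python
from collections import Counter
from math import log2

def shannon_entropy(sample):
    """Plug-in (maximum-likelihood) estimate of Shannon entropy in bits."""
    counts = Counter(sample)
    n = len(sample)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Rank data samples by their estimated entropy, the paper's target use case.
samples = {
    "constant": [7] * 8,         # zero entropy
    "binary":   [0, 1] * 4,      # 1 bit
    "uniform":  list(range(8)),  # 3 bits over 8 equiprobable symbols
}
ranked = sorted(samples, key=lambda k: shannon_entropy(samples[k]))
print(ranked)  # → ['constant', 'binary', 'uniform']
```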
1 page, 163 KiB  
Correction
Correction: Biś, A., et al. Hausdorff Dimension and Topological Entropies of a Solenoid. Entropy 2020, 22, 506
by Andrzej Biś and Agnieszka Namiecińska
Entropy 2020, 22(10), 1172; https://doi.org/10.3390/e22101172 - 19 Oct 2020
Viewed by 1265
Abstract
The authors wish to make the following corrections to this paper [...] Full article
12 pages, 3082 KiB  
Article
Shape Effect of Nanosize Particles on Magnetohydrodynamic Nanofluid Flow and Heat Transfer over a Stretching Sheet with Entropy Generation
by Umair Rashid, Dumitru Baleanu, Azhar Iqbal and Muhammd Abbas
Entropy 2020, 22(10), 1171; https://doi.org/10.3390/e22101171 - 18 Oct 2020
Cited by 22 | Viewed by 2583
Abstract
Magnetohydrodynamic nanofluid technologies are emerging in several areas including pharmacology, medicine and lubrication (smart tribology). The present study discusses the heat transfer and entropy generation of magnetohydrodynamic (MHD) Ag-water nanofluid flow over a stretching sheet with the effect of nanoparticle shape. Three different geometries of nanoparticles—sphere, blade and lamina—are considered. The problem is modeled in the form of momentum, energy and entropy equations. The homotopy analysis method (HAM) is used to find the analytical solution of the momentum, energy and entropy equations. The variations of the velocity profile, temperature profile, Nusselt number and entropy generation under the influence of physical parameters are discussed in graphical form. The results show that lamina-shaped nanoparticles perform best in terms of temperature distribution, heat transfer and enhancement of entropy generation. Full article
(This article belongs to the Special Issue Recent Advances of Entropy in Nanofluid Engineering)
25 pages, 1715 KiB  
Article
Segmentation of High Dimensional Time-Series Data Using Mixture of Sparse Principal Component Regression Model with Information Complexity
by Yaojin Sun and Hamparsum Bozdogan
Entropy 2020, 22(10), 1170; https://doi.org/10.3390/e22101170 - 17 Oct 2020
Cited by 5 | Viewed by 2434
Abstract
This paper presents a novel hybrid modeling method for the segmentation of high dimensional time-series data using the mixture of sparse principal components regression (MIX-SPCR) model with the information complexity (ICOMP) criterion as the fitness function. Our approach encompasses dimension reduction in high dimensional time-series data and, at the same time, determines the number of component clusters (i.e., the number of segments across the time-series data) and selects the best subset of predictors. A large-scale Monte Carlo simulation is performed to show the capability of the MIX-SPCR model to identify the correct structure of the time-series data successfully. The MIX-SPCR model is also applied to high dimensional Standard & Poor’s 500 (S&P 500) index data to uncover the hidden structure of the time series and identify the structural change points. The approach presented in this paper determines both the relationships among the predictor variables and how the various predictor variables contribute to the explanatory power of the response variable through cluster-wise sparsity settings. Full article
14 pages, 1008 KiB  
Article
Spreading of Competing Information in a Network
by Fabio Bagarello, Francesco Gargano and Francesco Oliveri
Entropy 2020, 22(10), 1169; https://doi.org/10.3390/e22101169 - 17 Oct 2020
Cited by 11 | Viewed by 1816
Abstract
We propose a simple approach to investigate the spreading of news in a network. In more detail, we consider two different versions of a single type of information, one of which is close to the essence of the information (we call it good news), and another of which has been somehow modified by some biased agent of the system (fake news, in our language). Good and fake news move around the agents of the network, which receive the original information and return their own version of it to the other agents. Our main interest is to deduce the dynamics of such spreading, and to analyze if and under which conditions good news wins against fake news. The methodology is based on the use of ladder fermionic operators, which are quite efficient in modeling dispersion effects and interactions between the agents of the system. Full article
(This article belongs to the Special Issue Quantum Models of Cognition and Decision-Making)
15 pages, 2546 KiB  
Article
Knowledge Graph Completion for the Chinese Text of Cultural Relics Based on Bidirectional Encoder Representations from Transformers with Entity-Type Information
by Min Zhang, Guohua Geng, Sheng Zeng and Huaping Jia
Entropy 2020, 22(10), 1168; https://doi.org/10.3390/e22101168 - 16 Oct 2020
Cited by 8 | Viewed by 2913
Abstract
Knowledge graph completion can make knowledge graphs more complete, which is a meaningful research topic. However, the existing methods do not make full use of entity semantic information. Another challenge is that a deep model requires large-scale manually labelled data, which greatly increases manual labour. In order to alleviate the scarcity of labelled data in the field of cultural relics and capture the rich semantic information of entities, this paper proposes a model based on Bidirectional Encoder Representations from Transformers (BERT) with entity-type information for the knowledge graph completion of Chinese texts on cultural relics. In this work, the knowledge graph completion task is treated as a classification task, while the entities, relations and entity-type information are integrated as a textual sequence, and the Chinese characters are used as the token unit, in which the input representation is constructed by summing the token, segment and position embeddings. A small number of labelled data are used to pre-train the model, and then a large number of unlabelled data are used to fine-tune the pre-trained model. The experimental results show that the BERT-KGC model with entity-type information can enrich the semantic information of the entities, reduce the ambiguity of the entities and relations to some degree, and achieve more effective performance than the baselines in triple classification, link prediction and relation prediction tasks using 35% of the labelled data of cultural relics. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
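The input representation described above (the sum of token, segment and position embeddings over a character sequence) can be sketched as follows; the vocabulary size, embedding dimension and token ids are toy values, not BERT's real configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, seg_types, max_len, dim = 100, 2, 16, 8   # toy sizes for illustration
tok_emb = rng.normal(size=(vocab, dim))
seg_emb = rng.normal(size=(seg_types, dim))
pos_emb = rng.normal(size=(max_len, dim))

def input_representation(token_ids, segment_ids):
    """Per-position input vector = token + segment + position embedding."""
    pos = np.arange(len(token_ids))
    return tok_emb[token_ids] + seg_emb[segment_ids] + pos_emb[pos]

# A head-entity / relation-tail sequence flattened to 5 toy token ids,
# with segment ids separating the two parts.
x = input_representation([3, 14, 15, 9, 2], [0, 0, 1, 1, 1])
print(x.shape)  # → (5, 8)
```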
17 pages, 10297 KiB  
Article
Kinematics and Dynamics of Turbulent Bands at Low Reynolds Numbers in Channel Flow
by Xiangkai Xiao and Baofang Song
Entropy 2020, 22(10), 1167; https://doi.org/10.3390/e22101167 - 16 Oct 2020
Cited by 6 | Viewed by 2151
Abstract
Channel flow turbulence exhibits interesting spatiotemporal complexities at transitional Reynolds numbers. In this paper, we investigated some aspects of the kinematics and dynamics of fully localized turbulent bands in large flow domains. We discussed the recent advancement in the understanding of wave generation at the downstream end of fully localized bands. Based on this discussion, we proposed a possible mechanism for the selection of the tilt direction. We measured the propagation speed of the downstream end and the advection speed of the low-speed streaks in the bulk of turbulent bands at various Reynolds numbers. Instead of measuring the tilt angle by treating an entire band as a tilted object as in prior studies, we proposed that, from the point of view of the formation and growth of turbulent bands, the tilt angle should be determined by the relative speed between the downstream end and the streaks in the bulk. We obtained good agreement between our calculation of the tilt angle and the results reported in the literature at relatively low Reynolds numbers. Full article
(This article belongs to the Special Issue Intermittency in Transitional Shear Flows)
15 pages, 841 KiB  
Article
Robustness of Internet of Battlefield Things (IoBT): A Directed Network Perspective
by Yuan Feng, Menglin Li, Chengyi Zeng and Hongfu Liu
Entropy 2020, 22(10), 1166; https://doi.org/10.3390/e22101166 - 16 Oct 2020
Cited by 12 | Viewed by 3660
Abstract
Through the combination of various intelligent devices and the Internet to form a large-scale network, the Internet of Things (IoT) realizes real-time information exchange and communication between devices. IoT technology is expected to play an essential role in improving the combat effectiveness and situation awareness ability of armies. The interconnection between combat equipment and other battlefield resources is referred to as the Internet of Battlefield Things (IoBT). Battlefield real-time data sharing and the cooperative decision-making among commanders are highly dependent on the connectivity between different combat units in the network. However, due to the wireless characteristics of communication, a large number of communication links are directly exposed in the complex battlefield environment, and various cyber or physical attacks threaten network connectivity. Therefore, the ability to maintain network connectivity under adversary attacks is a critical property for the IoBT. In this work, we propose a directed network model and a connectivity measurement for the IoBT network. Then, we develop an attack strategy optimization model to simulate the optimal attack behavior of the enemy. By comparing with the disintegration effect of some benchmark strategies, we verify the optimality of the model solution and find that the robustness of the IoBT network decreases rapidly as the number of unidirectional communication links in the network increases. The results show that the adversary will change its attack mode according to the parameter settings of the attack resources and the network communication link density. In order to enhance network robustness, the defense strategy must be adjusted in time to deal with this change. Finally, we validated the model and the theoretical analysis proposed in this paper through experiments on a real military network. Full article
(This article belongs to the Section Statistical Physics)
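As a rough illustration of why unidirectional links hurt robustness, one can take the size of the largest strongly connected component as a simple connectivity measurement (the paper defines its own measure; the network below is hypothetical):

```python
def reachable(start, adj):
    """All nodes reachable from `start` along directed edges."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def largest_scc_size(nodes, edges):
    """Largest strongly connected component via pairwise mutual reachability
    (O(n^2), which is fine for small illustrative networks)."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        if u in adj and v in adj:   # skip edges touching removed nodes
            adj[u].append(v)
    reach = {u: reachable(u, adj) for u in nodes}
    return max(len({v for v in reach[u] if u in reach[v]}) for u in nodes)

# Hypothetical IoBT-style network: a four-node bidirectional command core
# plus two unidirectional sensor uplinks (all names are illustrative).
nodes = {"C1", "C2", "C3", "C4", "S1", "S2"}
core = [("C1", "C2"), ("C2", "C1"), ("C2", "C3"), ("C3", "C2"),
        ("C3", "C4"), ("C4", "C3"), ("C4", "C1"), ("C1", "C4")]
uplinks = [("S1", "C1"), ("S2", "C3")]   # one-way: never part of any SCC
edges = core + uplinks

print(largest_scc_size(nodes, edges))           # → 4  (the core)
print(largest_scc_size(nodes - {"C1"}, edges))  # → 3  (attack removes C1)
```

The one-way uplink nodes never join the strongly connected core, so a single targeted node removal already shrinks the mutually connected part of the network.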
29 pages, 3215 KiB  
Article
Thermo-Economic Evaluation of Aqua-Ammonia Solar Absorption Air Conditioning System Integrated with Various Collector Types
by Adil Al-Falahi, Falah Alobaid and Bernd Epple
Entropy 2020, 22(10), 1165; https://doi.org/10.3390/e22101165 - 16 Oct 2020
Cited by 5 | Viewed by 2869
Abstract
The main objective of this paper is to simulate solar absorption cooling systems that use an ammonia mixture as the working fluid to produce cooling. In this study, we have considered different configurations based on the ammonia–water (NH3–H2O) cooling cycle depending on the solar thermal technology: evacuated tube collectors (ETC) and parabolic trough collectors (PTC). To compare the configurations, we have performed energy, exergy, and economic analyses. The effect of the heat source temperature on critical parameters such as the coefficient of performance (COP) and the exergetic efficiency has been investigated for each configuration. Furthermore, the required optimum area and the associated cost for each collector type have been determined. The methodology is applied in a specific case study for a sports arena with a 700~800 kW total cooling load. Results reveal that the (PTC/NH3–H2O) configuration gives lower design aspects and a minimum hourly cost rate (USD 11.3/h), compared with USD 12.16/h for the (ETC/NH3–H2O) configuration, whereas the (ETC/NH3–H2O) configuration gives a lower thermo-economic product cost (USD 0.14/GJ). The cycle coefficient of performance (COP) is 0.5. Full article
(This article belongs to the Special Issue Advances in Solar Thermal Technologies)
20 pages, 696 KiB  
Article
Active Learning for Node Classification: An Evaluation
by Kaushalya Madhawa and Tsuyoshi Murata
Entropy 2020, 22(10), 1164; https://doi.org/10.3390/e22101164 - 16 Oct 2020
Cited by 19 | Viewed by 3490
Abstract
Current breakthroughs in the field of machine learning are fueled by the deployment of deep neural network models. Deep neural network models are notorious for their dependence on large amounts of labeled data for training. Active learning is being used as a solution to train classification models with fewer labeled instances by selecting only the most informative instances for labeling. This is especially important when labeled data are scarce or the labeling process is expensive. In this paper, we study the application of active learning on attributed graphs. In this setting, the data instances are represented as nodes of an attributed graph. Graph neural networks achieve the current state-of-the-art classification performance on attributed graphs. The performance of graph neural networks relies on the careful tuning of their hyperparameters, usually performed using a validation set, an additional set of labeled instances. In label-scarce problems, it is realistic to use all labeled instances for training the model. In this setting, we perform a fair comparison of the existing active learning algorithms proposed for graph neural networks as well as those for other data types such as images and text. With empirical results, we demonstrate that state-of-the-art active learning algorithms designed for other data types do not perform well on graph-structured data. We study the problem within the framework of the exploration-vs.-exploitation trade-off and propose a new count-based exploration term. With empirical evidence on multiple benchmark graphs, we highlight the importance of complementing uncertainty-based active learning models with an exploration term. Full article
(This article belongs to the Special Issue Computation in Complex Networks)
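The paper's count-based exploration term is not specified in the abstract; a hedged sketch of the general idea, combining predictive entropy with an illustrative 1/sqrt(1 + count) bonus (the bonus form and beta are assumptions, not the paper's exact term), might look like:

```python
import math

def entropy(p):
    """Predictive entropy of a class-probability vector (in nats)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def select_node(probs, visit_counts, beta=0.5):
    """Pick the node maximizing uncertainty plus a count-based exploration
    bonus, so that equally uncertain but less-explored regions win."""
    scores = {
        node: entropy(p) + beta / math.sqrt(1 + visit_counts.get(node, 0))
        for node, p in probs.items()
    }
    return max(scores, key=scores.get)

# Two nodes with identical, maximally uncertain predictions: the bonus
# steers selection toward the less-explored node.
probs = {"a": [0.5, 0.5], "b": [0.5, 0.5]}
print(select_node(probs, {"a": 9, "b": 0}))  # → b
```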
12 pages, 243 KiB  
Perspective
Emergence of Organisms
by Andrea Roli and Stuart A. Kauffman
Entropy 2020, 22(10), 1163; https://doi.org/10.3390/e22101163 - 16 Oct 2020
Cited by 20 | Viewed by 5634
Abstract
Since the early cybernetics studies by Wiener, Pask, and Ashby, the properties of living systems have been the subject of deep investigation. The goals of this endeavour are both understanding and building: abstract models and general principles are sought for describing organisms, their dynamics and their ability to produce adaptive behavior. This research has achieved prominent results in fields such as artificial intelligence and artificial life. For example, today we have robots capable of exploring hostile environments with a high level of self-sufficiency, planning capabilities and the ability to learn. Nevertheless, the discrepancy between the emergence and evolution of life and that of artificial systems is still huge. In this paper, we identify the fundamental elements that characterize the evolution of the biosphere and open-ended evolution, and we illustrate their implications for the evolution of artificial systems. Subsequently, we discuss the most relevant issues and questions that this viewpoint poses both for biological and artificial systems. Full article
(This article belongs to the Special Issue New Advances in Biocomplexity)
27 pages, 3098 KiB  
Article
A Labeling Method for Financial Time Series Prediction Based on Trends
by Dingming Wu, Xiaolong Wang, Jingyong Su, Buzhou Tang and Shaocong Wu
Entropy 2020, 22(10), 1162; https://doi.org/10.3390/e22101162 - 15 Oct 2020
Cited by 38 | Viewed by 8494
Abstract
Time series prediction has been widely applied in the finance industry, in applications such as stock market price and commodity price forecasting. Machine learning methods have been widely used in financial time series prediction in recent years. How to label financial time series data to determine the prediction accuracy of machine learning models, and subsequently the final investment returns, is a hot topic. Existing labeling methods for financial time series mainly label data by comparing the current data with those of a short time period in the future. However, financial time series data are typically non-linear with obvious short-term randomness. Therefore, these labeling methods do not capture the continuous trend features of financial time series data, leading to a difference between their labeling results and real market trends. In this paper, a new labeling method called “continuous trend labeling” is proposed to address this problem. In the feature preprocessing stage, this paper proposes a new method that can avoid the problem of look-ahead bias in traditional data standardization or normalization processes. Then, a detailed logical explanation is given, the definition of continuous trend labeling is proposed, and an automatic labeling algorithm is given to extract the continuous trend features of financial time series data. Experiments on the Shanghai Composite Index, the Shenzhen Component Index and some stocks of China showed that our labeling method outperforms state-of-the-art labeling methods in terms of classification accuracy and other classification evaluation metrics. The results of the paper also show that deep learning models such as LSTM and GRU are more suitable for dealing with the prediction of financial time series data. Full article
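The automatic labeling algorithm itself is given in the paper; as a hedged approximation of trend-based labeling, one can label each point by the direction of the first future move whose relative magnitude reaches a threshold omega (the rule, the threshold and the price series below are simplifications made up for illustration):

```python
def trend_labels(prices, omega=0.03):
    """Hedged sketch of trend-based labeling (not the paper's exact algorithm):
    label each point +1/-1 by the direction of the first future relative move
    whose magnitude reaches omega; tail points inherit the previous label."""
    n = len(prices)
    labels = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            change = (prices[j] - prices[i]) / prices[i]
            if abs(change) >= omega:
                labels[i] = 1 if change > 0 else -1
                break
    for i in range(1, n):  # forward-fill points with no qualifying future move
        if labels[i] == 0:
            labels[i] = labels[i - 1]
    return labels

print(trend_labels([100, 101, 103, 102, 99, 97, 98, 101]))
# → [1, -1, -1, -1, -1, 1, 1, 1]
```

Unlike fixed-horizon labeling, the label flips only when a move of relative size omega eventually materializes, which is the kind of continuity the paper's trend labels aim for.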
15 pages, 8173 KiB  
Article
Deep-Learning-Based Power Generation Forecasting of Thermal Energy Conversion
by Yu-Sin Lu and Kai-Yuan Lai
Entropy 2020, 22(10), 1161; https://doi.org/10.3390/e22101161 - 15 Oct 2020
Cited by 5 | Viewed by 2283
Abstract
The organic Rankine cycle (ORC) is a heat-to-power solution that converts low-grade thermal energy into electricity with relatively low cost and adequate efficiency. The working of the ORC relies on the liquid–vapor phase changes of a certain organic fluid under different temperatures and pressures. The ORC is a well-established technology utilized in industry to recover industrial waste heat as electricity. However, the frequently varying temperature, pressure, and flow can make it difficult to maintain steady power generation from the ORC. It is important to develop an effective prediction methodology for power generation in a stable grid system. This study proposes a methodology based on a deep learning neural network to predict the power generation from the ORC 12 h in advance. The deep learning neural network is derived from the long short-term memory network (LSTM), a type of recurrent neural network (RNN). A case study was conducted through the analysis of ORC data from a steel company. Different time series methodologies, including ARIMA and MLP, were compared with LSTM in this study; LSTM decreased the error rate by 24%. The proposed methodology can be used to effectively optimize the system warning threshold configuration for the early detection of abnormalities in power generators, and offers a novel approach for early diagnosis in conventional industries. Full article
(This article belongs to the Special Issue Statistical Machine Learning for Multimodal Data Analysis)
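Independently of the network architecture, forecasting 12 h ahead requires framing the series as supervised (window, target) pairs; a minimal sketch, with an illustrative 24-step lookback that the paper does not specify:

```python
def make_forecast_windows(series, lookback=24, horizon=12):
    """Build (input, target) pairs for training a sequence model to predict
    the reading `horizon` steps ahead from the previous `lookback` readings.
    The lookback length is an illustrative choice; the 12-step horizon
    mirrors the paper's 12 h ahead prediction."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t : t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return X, y

series = list(range(100))        # stand-in for hourly ORC power readings
X, y = make_forecast_windows(series)
print(len(X), X[0][-1], y[0])    # → 65 23 35
```

These pairs can then be fed to any sequence model (LSTM, MLP) or aligned with ARIMA forecasts for comparison.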
16 pages, 4935 KiB  
Article
Perturbed and Unperturbed: Analyzing the Conservatively Perturbed Equilibrium (Linear Case)
by Yiming Xi, Xinquan Liu, Denis Constales and Gregory S. Yablonsky
Entropy 2020, 22(10), 1160; https://doi.org/10.3390/e22101160 - 15 Oct 2020
Cited by 5 | Viewed by 2255
Abstract
The “conservatively perturbed equilibrium” (CPE) technique for a complex chemical system is computationally analyzed in a batch reactor considering different linear mechanisms with three and four species. Contrary to traditional chemical relaxation procedures, in CPE experiments only some initial concentrations are modified; other conditions, including the total amounts of chemical elements and the temperature, are kept unchanged. Generally, for “unperturbed” species with initial concentrations equal to their corresponding equilibrium concentrations, unavoidable extreme values are observed during relaxation to the equilibrium. If the unperturbed species is involved in one step only, this extremum is a momentary equilibrium of that step; if the unperturbed species is involved in more reactions, the extremum is not a momentary equilibrium. The acyclic mechanism with four species may exhibit two extrema and an inflection point, which corresponds to an extremum of the rate of change of the species. These facts provide essential information about the detailed mechanism of the complex reaction. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
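A hedged numerical sketch of a CPE experiment for the linear chain A ⇌ B ⇌ C: the rate constants and concentrations below are illustrative, chosen so that B starts at its equilibrium value while A and C are perturbed with the total amount conserved; B then passes through an extremum before relaxing back:

```python
def simulate(k1=2.0, k2=1.0, k3=1.0, k4=1.0,
             a=1.5, b=2.0, c=1.5, dt=1e-3, steps=20000):
    """Euler integration of A <=> B <=> C in a batch reactor.
    With these illustrative constants the equilibrium is A=1, B=2, C=2
    (total 5); the initial state perturbs A and C conservatively
    (A+C unchanged) while B is left at its equilibrium value."""
    traj = []
    for _ in range(steps):
        traj.append(b)
        da = k2 * b - k1 * a          # net rate into A
        dc = k3 * b - k4 * c          # net rate into C
        a, c = a + dt * da, c + dt * dc
        b = 5.0 - a - c               # element conservation: A + B + C = 5
    return traj

traj = simulate()
print(max(traj) > 2.0 + 1e-3)  # B overshoots its equilibrium value 2.0 → True
```

Because B here takes part in both steps, its extremum is not a momentary equilibrium of either step, in line with the distinction drawn in the abstract.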