Entropy, Volume 23, Issue 4 (April 2021) – 112 articles

Cover Story: T cells recognize pathogen-derived structures (antigens) through randomly recombined surface T-cell receptors (TCRs). T-cell development comprises extensive proliferation and sequential steps of quality control based on the affinity of TCRs for self-antigens, interrogated by successive interactions with antigen-presenting cells, leading to substantial cell death and, thus, generating a pool of T cells with a broad, but not self-reactive, TCR repertoire. Here, we review experimental and mathematical strategies to infer the dynamic properties of T-cell development in the thymus across multiple scales: cell cycle, population dynamics and their regulations, and how physiological T-cell development emerges from cellular interactions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
14 pages, 2106 KiB  
Article
Subgraphs of Interest Social Networks for Diffusion Dynamics Prediction
by Valentina Y. Guleva, Polina O. Andreeva and Danila A. Vaganov
Entropy 2021, 23(4), 492; https://doi.org/10.3390/e23040492 - 20 Apr 2021
Viewed by 2010
Abstract
Finding the building blocks of real-world networks contributes to the understanding of their formation process and of related dynamical processes, which is relevant to prediction and control tasks. We explore different types of social networks, which demonstrate high structural variability, and aim to extract minimal building blocks that reproduce the structural and dynamical properties of the supergraph, so that diffusion over the whole graph can be predicted from one of its small subgraphs. For this purpose, we formulate topological and functional criteria and explore sampling techniques. Using the method that best satisfies both criteria, we examine the building blocks of interest networks. The best sampling method extracts subgraphs with an optimal size of 30 nodes, which reproduce the path lengths, clustering, and degree particularities of the initial graph. The extracted subgraphs differ across the considered interest networks and provide interesting material for exploring global dynamics at the mesoscale. Full article
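
As a rough illustration of the sampling-and-comparison workflow described above, the sketch below grows a small connected subgraph and checks it against two of the named topological criteria (clustering and path length). The snowball-style sampler, the Barabási–Albert stand-in graph, and the 30-node budget are illustrative assumptions, not the paper's exact method.

```python
import networkx as nx
import random

def snowball_sample(G, size=30, seed=0):
    """Grow a connected subgraph by breadth-first expansion from a random seed node."""
    rng = random.Random(seed)
    start = rng.choice(list(G.nodes))
    visited, frontier = {start}, [start]
    while frontier and len(visited) < size:
        node = frontier.pop(0)
        for nbr in G.neighbors(node):
            if nbr not in visited and len(visited) < size:
                visited.add(nbr)
                frontier.append(nbr)
    return G.subgraph(visited).copy()

G = nx.barabasi_albert_graph(1000, 3, seed=42)   # stand-in for an interest network
S = snowball_sample(G, size=30)

# Topological criteria: compare subgraph statistics with the supergraph's.
for name, H in [("supergraph", G), ("subgraph", S)]:
    print(name,
          "clustering=%.3f" % nx.average_clustering(H),
          "avg path=%.2f" % nx.average_shortest_path_length(H))
```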

17 pages, 1401 KiB  
Article
A New Two-Stage Algorithm for Solving Optimization Problems
by Sajjad Amiri Doumari, Hadi Givi, Mohammad Dehghani, Zeinab Montazeri, Victor Leiva and Josep M. Guerrero
Entropy 2021, 23(4), 491; https://doi.org/10.3390/e23040491 - 20 Apr 2021
Cited by 31 | Viewed by 2947
Abstract
Optimization seeks inputs to an objective function that yield a maximum or minimum. Optimization methods are divided into exact methods and approximate algorithms. Several optimization algorithms imitate natural phenomena, laws of physics, and the behavior of living organisms. Optimization by such algorithms underlies machine learning, from logistic regression to training neural networks for artificial intelligence. In this paper, a new algorithm called two-stage optimization (TSO) is proposed. The TSO algorithm updates population members in two stages at each iteration: a group of good population members is selected, and two members of this group are drawn at random to update the position of each population member, based on the first selected good member in the first stage and on the second selected good member in the second stage. We describe the stages of the TSO algorithm and model them mathematically. The performance of the TSO algorithm is evaluated on twenty-three standard objective functions. To benchmark the optimization results of the TSO algorithm, eight competing algorithms are considered, including genetic, gravitational search, grey wolf, marine predators, particle swarm, teaching-learning-based, tunicate swarm, and whale approaches. The numerical results show that the new algorithm is superior and more competitive in solving optimization problems when compared with the other algorithms. Full article
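
The exact update equations are in the paper; the numpy sketch below only illustrates the two-stage idea of moving each member toward two randomly chosen members of the current "good" group. The uniform step-size rule and group size are assumptions for illustration.

```python
import numpy as np

def tso_step(pop, fitness, n_good=5, rng=np.random.default_rng(0)):
    """One illustrative two-stage update: move each member toward two
    randomly chosen members of the current 'good' group (minimization)."""
    order = np.argsort(fitness)              # lower fitness is better
    good = pop[order[:n_good]]
    new_pop = pop.copy()
    for i in range(len(pop)):
        g1, g2 = good[rng.integers(n_good)], good[rng.integers(n_good)]
        # Stage 1: step toward the first good member.
        stage1 = new_pop[i] + rng.random() * (g1 - new_pop[i])
        # Stage 2: step from the stage-1 position toward the second good member.
        new_pop[i] = stage1 + rng.random() * (g2 - stage1)
    return new_pop

pop = np.random.default_rng(1).uniform(-5, 5, size=(20, 2))
sphere = lambda x: (x ** 2).sum(axis=1)      # simple test objective
for _ in range(50):
    pop = tso_step(pop, sphere(pop))
print("best value:", sphere(pop).min())
```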

34 pages, 1413 KiB  
Article
Understanding the Variability in Graph Data Sets through Statistical Modeling on the Stiefel Manifold
by Clément Mantoux, Baptiste Couvy-Duchesne, Federica Cacciamani, Stéphane Epelbaum, Stanley Durrleman and Stéphanie Allassonnière
Entropy 2021, 23(4), 490; https://doi.org/10.3390/e23040490 - 20 Apr 2021
Cited by 2 | Viewed by 2754
Abstract
Network analysis provides a rich framework for modeling complex phenomena, such as human brain connectivity. It has proven effective for understanding their natural properties and designing predictive models. In this paper, we study the variability within groups of networks, i.e., the structure of connection similarities and differences across a set of networks. We propose a statistical framework to model these variations based on manifold-valued latent factors. Each network adjacency matrix is decomposed as a weighted sum of rank-one matrix patterns, and each pattern is described as a random perturbation of a dictionary element. As a hierarchical statistical model, it enables the analysis of heterogeneous populations of adjacency matrices using mixtures. Our framework can also be used to infer the weights of missing edges. We estimate the parameters of the model using an Expectation-Maximization-based algorithm. In experiments on synthetic data, we show that the algorithm accurately estimates the latent structure in both low and high dimensions. We apply our model to a large data set of functional brain connectivity matrices from the UK Biobank. Our results suggest that the proposed model accurately describes the complex variability in the data set with a small number of degrees of freedom. Full article
(This article belongs to the Special Issue Approximate Bayesian Inference)
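
In symbols, the rank-one decomposition described above can be sketched as follows; the notation is assumed for illustration, and the paper should be consulted for the precise model and priors.

```latex
% Each adjacency matrix decomposed as a weighted sum of rank-one patterns,
% with the orthonormal pattern matrix constrained to the Stiefel manifold.
A^{(i)} \;\approx\; \sum_{k=1}^{p} \lambda_k^{(i)}\, x_k^{(i)} \bigl(x_k^{(i)}\bigr)^{\top},
\qquad
X^{(i)} = \bigl(x_1^{(i)},\dots,x_p^{(i)}\bigr) \in \mathcal{V}_p(\mathbb{R}^n)
        = \bigl\{\, X \in \mathbb{R}^{n \times p} : X^{\top} X = I_p \,\bigr\}.
```

Each pattern x_k^{(i)} is then modeled as a random perturbation of a shared dictionary element, giving the hierarchical structure the abstract describes.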

20 pages, 6542 KiB  
Article
Numerical Simulation Study on Flow Laws and Heat Transfer of Gas Hydrate in the Spiral Flow Pipeline with Long Twisted Band
by Yongchao Rao, Lijun Li, Shuli Wang, Shuhua Zhao and Shidong Zhou
Entropy 2021, 23(4), 489; https://doi.org/10.3390/e23040489 - 20 Apr 2021
Cited by 3 | Viewed by 1734
Abstract
Natural gas hydrate plugging problems in multiphase pipelines are becoming increasingly serious, and hydrate plugging has gradually become an important problem for ensuring safe pipeline operation. The deposition and heat transfer characteristics of natural gas hydrate particles in a spiral-flow pipeline were studied. The discrete phase model (DPM) was used to simulate the motion of solid particles and thereby the complex spiral-flow characteristics of hydrate in a pipeline fitted with a long twisted band. The velocity distribution, pressure drop distribution, heat transfer characteristics, and particle settling characteristics in the pipeline were investigated. The numerical results showed that, compared with straight flow without a long twisted band, two distinct vortices form in the flow field with a long twisted band, with maximum velocities at the vortex centers. Along the pipeline, the two vortices move from near the twisted band toward the pipe wall, which effectively carries away the hydrate particles deposited on the wall. At the same Reynolds number, a larger twist rate gave weaker spiral strength, smaller tangential velocity, and a smaller pressure drop; pressure loss can therefore be limited while retaining the effect of the spiral flow. In straight flow, the Nusselt number profile is parabolic with a downward opening: it is largest at the pipe center, decreases toward the pipe wall, and its decay gradient is large near the wall. For spiral flow, the Nusselt number curve shows a trough at the pipe center and a peak at half the pipe diameter, and the Nusselt number becomes larger as the twist rate decreases. The spiral flow can therefore make the temperature distribution more even and prevent the large temperature differences that lead to mass formation of hydrate particles at the pipeline wall. Spiral flow also has a good carrying effect: under the same conditions, it carried hydrate particles about 3-4 times farther than straight flow. Full article

15 pages, 946 KiB  
Article
Study of Dependence of Kinetic Freezeout Temperature on the Production Cross-Section of Particles in Various Centrality Intervals in Au–Au and Pb–Pb Collisions at High Energies
by Muhammad Waqas and Guang-Xiong Peng
Entropy 2021, 23(4), 488; https://doi.org/10.3390/e23040488 - 20 Apr 2021
Cited by 6 | Viewed by 1911
Abstract
Transverse momentum spectra of π⁺, p, Λ, Ξ⁻ (or Ξ̄⁺), Ω⁻ (or Ω̄⁺) and deuterons (d) in different centrality intervals in nucleus–nucleus collisions at high center-of-mass energies are analyzed with the blast-wave model with Boltzmann–Gibbs statistics. We extracted the kinetic freezeout temperature, transverse flow velocity, and kinetic freezeout volume from the transverse momentum spectra of the particles. It is observed that the non-strange and strange (multi-strange) particles freeze out separately due to their different reaction cross-sections. The freezeout volume and transverse flow velocity are mass dependent, decreasing with the rest mass of the particles. The present work reveals a scenario of double kinetic freezeout in nucleus–nucleus collisions. Furthermore, the kinetic freezeout temperature and freezeout volume are larger in central collisions than in peripheral collisions, while the transverse flow velocity remains almost unchanged from central to peripheral collisions. Full article
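
For context, the Boltzmann–Gibbs blast-wave spectrum that such analyses typically fit has the standard form below (the Schnedermann–Sollfrank–Heinz parametrization; the paper's exact implementation may differ).

```latex
% Boltzmann-Gibbs blast-wave spectrum. T_0: kinetic freezeout temperature,
% beta(r): transverse flow velocity profile, m_T = sqrt(p_T^2 + m_0^2);
% I_0 and K_1 are modified Bessel functions.
\frac{dN}{m_T\, dm_T} \;\propto\; \int_0^R r\, dr\; m_T\,
I_0\!\left(\frac{p_T \sinh\rho(r)}{T_0}\right)
K_1\!\left(\frac{m_T \cosh\rho(r)}{T_0}\right),
\qquad \rho(r) = \tanh^{-1}\beta(r).
```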

18 pages, 9820 KiB  
Article
Research on Dynamic Evolution Model and Method of Communication Network Based on Real War Game
by Tongliang Lu, Kai Chen, Yan Zhang and Qiling Deng
Entropy 2021, 23(4), 487; https://doi.org/10.3390/e23040487 - 20 Apr 2021
Cited by 6 | Viewed by 2106
Abstract
Based on data from real combat games, a combat system-of-systems usually comprises a large number of armed equipment platforms (or systems) and a reasonable communication network that connects mutually independent weapons and equipment platforms to accomplish tasks such as information collection, sharing, and collaborative processing. However, the generation algorithms for combat networks in existing research are too simple to be realistic. To overcome this problem, this paper proposes a communication network generation algorithm that models the communication network with a joint distribution strategy combining a power-law distribution and a Poisson distribution. Simulation is used to study the operation of the network under continuous attacks on communication nodes. Comprehensive experimental results on the dynamic evolution of the combat network in battle scenarios verify the rationality and effectiveness of the communication network construction. Full article
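
A minimal sketch of the joint degree-distribution idea: draw node degrees from a mixture of a power law and a Poisson distribution, then wire them with a configuration model. The mixture weight and the distribution parameters are illustrative assumptions, not the paper's values.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, mix = 200, 0.5          # node count and power-law/Poisson mixture weight (assumed)

# Power-law degrees via the Zipf distribution, Poisson degrees around a mean of 4.
powerlaw = rng.zipf(2.5, size=n)
poisson = rng.poisson(4, size=n)
degrees = np.where(rng.random(n) < mix, powerlaw, poisson)
degrees = np.clip(degrees, 1, n - 1)
if degrees.sum() % 2:      # the configuration model needs an even degree sum
    degrees[0] += 1

G = nx.configuration_model(degrees.tolist())
G = nx.Graph(G)            # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```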

18 pages, 9188 KiB  
Article
Porcelain Insulator Crack Location and Surface States Pattern Recognition Based on Hyperspectral Technology
by Yiming Zhao, Jing Yan, Yanxin Wang, Qianzhen Jing and Tingliang Liu
Entropy 2021, 23(4), 486; https://doi.org/10.3390/e23040486 - 20 Apr 2021
Cited by 7 | Viewed by 2771
Abstract
Porcelain insulators are important components for ensuring that the insulation requirements of power equipment are met. Owing to their structure, porcelain insulators are prone to mechanical damage and cracks, which reduce their insulation performance; with long-term operation, crack expansion eventually leads to breakdown and safety hazards. Detecting insulator cracks is therefore of great significance for the safe and reliable operation of a power grid. However, most traditional methods of insulator crack detection involve offline detection or contact measurement, which is not conducive to online monitoring of the equipment. Hyperspectral imaging is a noncontact detection technology that captures three-dimensional (3D) spatial-spectral information, so the data provide more information, and the measurement method is safer than electrical detection methods. Therefore, a model for positioning and state classification of porcelain insulators based on hyperspectral technology is proposed. In this model, image data are used for edge extraction to locate cracks, and spectral information is used to classify the surface states of porcelain insulators with EfficientNet. Crack extraction was thus realized, and the recognition accuracy for cracked and normal states was 96.9%. The analysis of the results demonstrates that porcelain insulator crack detection based on hyperspectral technology is an effective non-contact online monitoring approach with broad application prospects in the era of the Internet of Things and the rapid development of electric power. Full article
(This article belongs to the Special Issue Reliability of Modern Electro-Mechanical Systems)

23 pages, 602 KiB  
Article
Knowledge Discovery for Higher Education Student Retention Based on Data Mining: Machine Learning Algorithms and Case Study in Chile
by Carlos A. Palacios, José A. Reyes-Suárez, Lorena A. Bearzotti, Víctor Leiva and Carolina Marchant
Entropy 2021, 23(4), 485; https://doi.org/10.3390/e23040485 - 20 Apr 2021
Cited by 50 | Viewed by 6949
Abstract
Data mining is employed to extract useful information and to detect patterns from often large data sets, and it is closely related to knowledge discovery in databases and data science. In this investigation, we formulate models based on machine learning algorithms to extract relevant information predicting student retention at various levels, using higher education data and specifying the relevant variables involved in the modeling. We then utilize this information to support the process of knowledge discovery. We predict student retention at each of three levels, during the first, second, and third years of study, obtaining models with an accuracy that exceeds 80% in all scenarios. These models allow us to adequately predict the level at which dropout occurs. The machine learning algorithms used in this work include decision trees, k-nearest neighbors, logistic regression, naive Bayes, random forests, and support vector machines, of which the random forest technique performs best. We find that the secondary educational score and the community poverty index are important predictive variables, which had not previously been reported in educational studies of this type. The dropout assessment at various levels reported here is valid for higher education institutions around the world with conditions similar to the Chilean case, where dropout rates affect the efficiency of such institutions. The ability to predict dropout from students' data enables these institutions to take preventative measures. In the case study, balancing the majority and minority classes improves the performance of the algorithms. Full article
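
A hedged sketch of the kind of pipeline the study describes: a random forest with class balancing on tabular student data. The file name, the column names, and the use of class weights (rather than resampling) are placeholders, not the paper's exact setup.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical columns; the study reports the secondary educational score and a
# community poverty index among the important predictors.
df = pd.read_csv("students.csv")           # placeholder file
X = df[["secondary_score", "poverty_index", "age", "family_income"]]
y = df["retained_year1"]                   # 1 = retained, 0 = dropout

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" is one simple way to balance majority/minority classes.
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print("accuracy: %.3f" % accuracy_score(y_te, clf.predict(X_te)))
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```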

33 pages, 12047 KiB  
Article
A Volatility Estimator of Stock Market Indices Based on the Intrinsic Entropy Model
by Claudiu Vințe, Marcel Ausloos and Titus Felix Furtună
Entropy 2021, 23(4), 484; https://doi.org/10.3390/e23040484 - 19 Apr 2021
Cited by 6 | Viewed by 5031
Abstract
Grasping the historical volatility of stock market indices and accurately estimating it are two of the major focuses of those involved in the financial securities industry and derivative instruments pricing. This paper presents the results of employing the intrinsic entropy model as an alternative for estimating the volatility of stock market indices. Diverging from the widely used volatility models that take into account only elements related to traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model also takes into account the volumes traded during the considered time frame. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as an entropic probability, or market credence, assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range than those provided by the other advanced volatility estimators. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)

22 pages, 985 KiB  
Article
Resultant Information Descriptors, Equilibrium States and Ensemble Entropy
by Roman F. Nalewajski
Entropy 2021, 23(4), 483; https://doi.org/10.3390/e23040483 - 19 Apr 2021
Cited by 4 | Viewed by 1593
Abstract
In this article, sources of information in electronic states are reexamined and a need for the resultant measures of the entropy/information content, combining contributions due to probability and phase/current densities, is emphasized. Probability distribution reflects the wavefunction modulus and generates classical contributions to Shannon’s global entropy and Fisher’s gradient information. The phase component of molecular states similarly determines their nonclassical supplements, due to probability “convection”. The local-energy concept is used to examine the phase equalization in the equilibrium, phase-transformed states. Continuity relations for the wavefunction modulus and phase components are reexamined, the convectional character of the local source of the resultant gradient information is stressed, and latent probability currents in the equilibrium (stationary) quantum states are related to the horizontal (“thermodynamic”) phase. The equivalence of the energy and resultant gradient information (kinetic energy) descriptors of chemical processes is stressed. In the grand-ensemble description, the reactivity criteria are defined by the populational derivatives of the system average electronic energy. Their entropic analogs, given by the associated derivatives of the overall gradient information, are shown to provide an equivalent set of reactivity indices for describing the charge transfer phenomena. Full article
(This article belongs to the Special Issue Entropic and Complexity Measures in Atomic and Molecular Systems)
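
For orientation, the resultant (gradient) information referred to above combines a classical Fisher term from the probability density p = |ψ|² with a nonclassical term from the phase φ of ψ = √p e^{iφ}. This is the standard decomposition; the notation here is assumed.

```latex
% Resultant gradient information of psi = sqrt(p) e^{i phi}: the classical
% Fisher (probability) term plus the nonclassical (phase/current) term.
I[\psi] \;=\; 4\!\int |\nabla\psi|^2\, d\mathbf{r}
        \;=\; \int \frac{|\nabla p|^2}{p}\, d\mathbf{r}
        \;+\; 4\!\int p\, |\nabla\phi|^2\, d\mathbf{r}.
```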

3 pages, 184 KiB  
Editorial
Information Theory in Molecular Evolution: From Models to Structures and Dynamics
by Faruck Morcos
Entropy 2021, 23(4), 482; https://doi.org/10.3390/e23040482 - 19 Apr 2021
Cited by 1 | Viewed by 1826
Abstract
Historically, information theory has been closely interconnected with evolutionary theory [...] Full article
18 pages, 1965 KiB  
Article
Performance of Portfolios Based on the Expected Utility-Entropy Fund Rating Approach
by Daniel Chiew, Judy Qiu, Sirimon Treepongkaruna, Jiping Yang and Chenxiao Shi
Entropy 2021, 23(4), 481; https://doi.org/10.3390/e23040481 - 18 Apr 2021
Viewed by 1846
Abstract
Yang and Qiu proposed, and later reframed, an expected utility-entropy (EU-E) based decision model. A similar numerical representation for risky choices was later developed axiomatically by Luce et al. under the condition of segregation. Recently, we established a fund rating approach based on the EU-E decision model and Morningstar ratings. In this paper, we apply the approach to US mutual funds and construct portfolios using the best-rated funds. Furthermore, we evaluate the fund ratings based on the EU-E decision model against Morningstar ratings by examining the performance of the three models in portfolio selection. The results show that portfolios constructed using ratings based on the EU-E models with moderate tradeoff coefficients perform better than those constructed using Morningstar ratings, and the conclusion is robust to different rebalancing intervals. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making)

13 pages, 3015 KiB  
Article
Novel Features for Binary Time Series Based on Branch Length Similarity Entropy
by Sang-Hee Lee and Cheol-Min Park
Entropy 2021, 23(4), 480; https://doi.org/10.3390/e23040480 - 18 Apr 2021
Cited by 2 | Viewed by 2418
Abstract
Branch length similarity (BLS) entropy is defined on a network consisting of a single node and branches. In this study, we mapped a binary time-series signal onto the circumference of a time circle so that BLS entropy can be calculated for the binary time series. We obtained the BLS entropy values for the “1” signals on the time circle; this set of values constitutes the BLS entropy profile. We selected the local maximum (minimum) points, slope, and inflection points of the entropy profile as characteristic features of the binary time series and investigated their significance. The local maximum (minimum) point indicates the time at which the rate of change in the signal density becomes zero. The slope and inflection points correspond to the degree of change in the signal density and the time at which the signal density changes occur, respectively. Moreover, we show that these characteristic features can be widely used in binary time-series analysis by characterizing the movement trajectory of Caenorhabditis elegans. We also discuss problems related to the features that need to be explored mathematically and propose candidates for additional features based on the BLS entropy profile. Full article
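
The construction below loosely follows the verbal description: map "1" events onto a circle, treat the distances from each event to the others as branch lengths, and compute a normalized Shannon-type entropy of those lengths. Details such as using the shorter arc as the branch length and the log-n normalization are assumptions, not the paper's exact definitions.

```python
import numpy as np

def bls_entropy(lengths):
    """Shannon entropy of normalized branch lengths, scaled to [0, 1]."""
    p = np.asarray(lengths, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p))) if len(p) > 1 else 0.0

def bls_profile(binary_series):
    """For each '1' event on the time circle, the BLS entropy of its
    distances (shorter arcs, assumed) to all other events."""
    t = np.flatnonzero(binary_series)          # indices of the "1" signals
    n = len(binary_series)
    angles = 2 * np.pi * t / n                 # positions on the time circle
    profile = []
    for a in angles:
        d = np.abs(angles - a)
        d = np.minimum(d, 2 * np.pi - d)       # shorter arc as branch length
        profile.append(bls_entropy(d[d > 0]))
    return np.array(profile)

signal = (np.random.default_rng(0).random(200) < 0.3).astype(int)
prof = bls_profile(signal)
print("profile length:", len(prof), "| max at event", prof.argmax())
```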

20 pages, 879 KiB  
Article
Entanglement and Non-Locality in Quantum Protocols with Identical Particles
by Fabio Benatti, Roberto Floreanini and Ugo Marzolino
Entropy 2021, 23(4), 479; https://doi.org/10.3390/e23040479 - 18 Apr 2021
Cited by 5 | Viewed by 2783
Abstract
We study the role of entanglement and non-locality in quantum protocols that make use of systems of identical particles. Unlike in the case of distinguishable particles, the notions of entanglement and non-locality for systems whose constituents cannot be distinguished and singly addressed are still debated. We clarify why the only approach that avoids incongruities and paradoxes is the one based on the second quantization formalism, whereby it is the entanglement of the modes that can be populated by the particles that really matters and not the particles themselves. Indeed, by means of a metrological and of a teleportation protocol, we show that inconsistencies arise in formulations that force entanglement and non-locality to be properties of the identical particles rather than of the modes they can occupy. The reason resides in the fact that orthogonal modes can always be addressed while identical particles cannot. Full article
(This article belongs to the Special Issue Quantum Information and Quantum Optics)
23 pages, 650 KiB  
Article
Excitation Functions of Tsallis-Like Parameters in High-Energy Nucleus–Nucleus Collisions
by Li-Li Li, Fu-Hu Liu and Khusniddin K. Olimov
Entropy 2021, 23(4), 478; https://doi.org/10.3390/e23040478 - 18 Apr 2021
Cited by 17 | Viewed by 2214
Abstract
The transverse momentum spectra of charged pions, kaons, and protons produced at mid-rapidity in central nucleus–nucleus (AA) collisions at high energies are analyzed by considering the particles to be created from two participant partons, which are assumed to be contributors from the collision system. Each participant (contributor) parton is assumed to contribute to the transverse momentum according to a Tsallis-like function, and the contributions of the two participant partons are regarded as the two components of the transverse momentum of the identified particle. Experimental data measured in high-energy AA collisions by international collaborations are studied. The excitation functions of the kinetic freeze-out temperature and transverse flow velocity are extracted. The two parameters increase quickly with collision energy from ≈3 to ≈10 GeV (more precisely, from 2.7 to 7.7 GeV) and then slowly above 10 GeV. In particular, there is a plateau from near 10 GeV to 200 GeV in the excitation function of the kinetic freeze-out temperature. Full article
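
A Tsallis-like transverse-momentum function of the kind invoked above is commonly written in the standard form below; the paper's two-parton convolution builds on such a component, and the exact expression used there may differ.

```latex
% Tsallis-like transverse-momentum function (T: effective temperature,
% n: power index, m_0: rest mass); n -> infinity recovers the Boltzmann
% exponential exp(-(m_T - m_0)/T).
f(p_T) \;\propto\; p_T \left[ 1 + \frac{m_T - m_0}{n\,T} \right]^{-n},
\qquad m_T = \sqrt{p_T^2 + m_0^2}.
```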

14 pages, 1292 KiB  
Article
Hybrid Basketball Game Outcome Prediction Model by Integrating Data Mining Methods for the National Basketball Association
by Wei-Jen Chen, Mao-Jhen Jhou, Tian-Shyug Lee and Chi-Jie Lu
Entropy 2021, 23(4), 477; https://doi.org/10.3390/e23040477 - 17 Apr 2021
Cited by 16 | Viewed by 5351
Abstract
The sports market has grown rapidly over the last several decades. Sports outcome prediction is an attractive sports-analytics challenge, as it provides useful information for operations in the sports market. In this study, a hybrid basketball game outcome prediction scheme is developed for predicting the final scores of National Basketball Association (NBA) games by integrating five data mining techniques: extreme learning machine, multivariate adaptive regression splines, k-nearest neighbors, eXtreme gradient boosting (XGBoost), and stochastic gradient boosting. Designed features are generated by merging different game-lag information from fundamental basketball statistics and are used in the proposed scheme. This study collected data from all games of the NBA 2018–2019 season; with 30 teams each playing 82 games per season, a total of 2460 NBA game data points were collected. Empirical results illustrate that the proposed hybrid scheme achieves high prediction performance and identifies suitable game-lag information and relevant game features (statistics). Our findings suggest that a two-stage XGBoost model using four game-lags achieves the best prediction performance among all competing models. Six designed features (averaged defensive rebounds, averaged two-point field goal percentage, averaged free throw percentage, averaged offensive rebounds, averaged assists, and averaged three-point field goal attempts) from four game-lags have a greater effect on the prediction of final NBA scores than other game-lags. The findings of this study provide relevant insights and guidance for other team or individual sports outcome prediction research. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics II)
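
A minimal pandas sketch of the game-lag feature idea: average each basic statistic over the previous k games for each team, excluding the current game. The file name and column names are illustrative; only the choice k = 4 echoes the study's finding.

```python
import pandas as pd

# Hypothetical per-game box-score table, one row per team per game.
games = pd.read_csv("nba_2018_2019.csv")   # placeholder file
stats = ["def_reb", "fg2_pct", "ft_pct", "off_reb", "assists", "fg3_att"]

k = 4  # four game-lags, the setting the study found best
lagged = (games.sort_values("date")
               .groupby("team")[stats]
               .transform(lambda s: s.shift(1).rolling(k).mean()))
features = lagged.add_prefix(f"avg{k}_")
target = games["final_score"]

# 'features' would then feed a two-stage regressor (e.g., XGBoost) that
# predicts the final score of each game.
print(features.dropna().head())
```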

16 pages, 7568 KiB  
Article
Refined Composite Multi-Scale Reverse Weighted Permutation Entropy and Its Applications in Ship-Radiated Noise
by Yuxing Li, Bo Geng and Shangbin Jiao
Entropy 2021, 23(4), 476; https://doi.org/10.3390/e23040476 - 17 Apr 2021
Cited by 10 | Viewed by 2081
Abstract
Ship-radiated noise is an important signal type against the complex ocean background, and it reflects the physical properties of ships well. As a valid measure for characterizing the complexity of ship-radiated noise, permutation entropy (PE) has the advantages of high efficiency and simple calculation; however, it misses amplitude information and operates at a single scale. To address these two drawbacks, refined composite multi-scale reverse weighted PE (RCMRWPE), a novel measure of signal complexity, is put forward based on refined composite multi-scale processing (RCMP) and reverse weighted PE (RWPE). RCMP is an improved coarse-graining method, which not only solves the problem of single scale but also improves the stability of traditional coarse-graining; RWPE, proposed more recently, has better inter-class separability and robustness to noise than PE, weighted PE (WPE), and reverse PE (RPE). Additionally, a feature extraction scheme for ship-radiated noise based on RCMRWPE is proposed, and RCMRWPE is combined with a discriminant analysis classifier (DAC) to form a new classification method. A large number of comparative experiments on feature extraction schemes and classification methods, with two artificial random signals and six ship-radiated noise recordings, show that the proposed feature extraction scheme has better distinguishing ability and stability than three similar schemes based on multi-scale PE (MPE), multi-scale WPE (MWPE), and multi-scale RPE (MRPE), and that the proposed classification method also has the highest recognition rate. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications II)
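
As a hedged sketch of the refined composite multi-scale idea, the code below pools ordinal-pattern counts over all coarse-graining offsets before computing the entropy. Note the plain substitution: it computes ordinary permutation entropy, whereas the paper's RCMRWPE applies the reverse weighted variant on top of the same coarse-graining scheme.

```python
import math
from itertools import permutations
import numpy as np

def ordinal_counts(x, m=3):
    """Count the ordinal (permutation) patterns of embedding dimension m."""
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - m + 1):
        counts[tuple(np.argsort(x[i:i + m]))] += 1
    return counts

def rcm_pe(x, scale, m=3):
    """Refined composite multiscale PE: pool ordinal-pattern counts over all
    'scale' coarse-graining offsets, then take the normalized entropy."""
    totals = {p: 0.0 for p in permutations(range(m))}
    for offset in range(scale):
        n = (len(x) - offset) // scale
        cg = x[offset:offset + n * scale].reshape(n, scale).mean(axis=1)
        for p, c in ordinal_counts(cg, m).items():
            totals[p] += c
    probs = np.array(list(totals.values()))
    probs = probs[probs > 0] / probs.sum()
    return float(-(probs * np.log(probs)).sum() / math.log(math.factorial(m)))

x = np.random.default_rng(0).standard_normal(4096)
print([round(rcm_pe(x, s), 3) for s in (1, 2, 4, 8)])  # ~1.0 for white noise
```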

23 pages, 1239 KiB  
Article
Extended Lattice Boltzmann Model
by Mohammad Hossein Saadat, Benedikt Dorschner and Ilya Karlin
Entropy 2021, 23(4), 475; https://doi.org/10.3390/e23040475 - 17 Apr 2021
Cited by 9 | Viewed by 2368
Abstract
Conventional lattice Boltzmann models for the simulation of fluid dynamics are restricted by an error in the stress tensor that is negligible only for small flow velocities and at a single value of the temperature. To remedy this, we propose a unified formulation that restores Galilean invariance and the isotropy of the stress tensor by introducing an extended equilibrium. This modification extends lattice Boltzmann models to simulations with higher flow velocities and with temperatures above the lattice reference temperature, which enhances computational efficiency by decreasing the number of required time steps. Furthermore, the extended model remains valid for stretched lattices, which are useful when flow gradients are predominant in one direction. The model is validated by simulations of two- and three-dimensional benchmark problems, including the double shear layer flow, the decay of homogeneous isotropic turbulence, the laminar boundary layer over a flat plate, and the turbulent channel flow. Full article
(This article belongs to the Section Statistical Physics)

17 pages, 3153 KiB  
Article
Low-Frequency Seismic Noise Properties in the Japanese Islands
by Alexey Lyubushin
Entropy 2021, 23(4), 474; https://doi.org/10.3390/e23040474 - 16 Apr 2021
Cited by 9 | Viewed by 4239
Abstract
Records of seismic noise in Japan for the period 1997–2020, which includes the Tohoku seismic catastrophe of 11 March 2011, are considered. The following noise properties are analyzed: the wavelet-based Donoho–Johnston index, the singularity spectrum support width, and the entropy of the wavelet coefficients. The question of whether precursors of strong earthquakes can be formulated on their basis is investigated. For the time interval after the Tohoku mega-earthquake, attention is paid to the trends in the mean properties of low-frequency seismic noise, which reflect a steady simplification of the statistical structure of seismic vibrations. Estimates of two-dimensional probability densities of extreme values are presented, which highlight the places where extreme values of the seismic noise properties are most often realized. The estimates of the probability densities of extreme values coincide with each other and have a maximum in the region 30° N ≤ Lat ≤ 34° N, 136° E ≤ Lon ≤ 140° E. The main conclusion of these studies is that the preparation of a strong earthquake is accompanied by a simplification of the structure of seismic noise. It is shown that bursts of coherence between the time series of the day length and the noise properties within an annual time window precede bursts of released seismic energy. The lag of the release of seismic energy relative to the bursts of coherence is about 1.5 years, which can be used to declare a time interval of high seismic hazard after a peak of coherence is reached. Full article
(This article belongs to the Special Issue Complex Systems Time Series Analysis and Modeling for Geoscience)

24 pages, 8988 KiB  
Article
Estimation of Feeding Composition of Industrial Process Based on Data Reconciliation
by Yusi Luan, Mengxuan Jiang, Zhenxiang Feng and Bei Sun
Entropy 2021, 23(4), 473; https://doi.org/10.3390/e23040473 - 16 Apr 2021
Cited by 5 | Viewed by 1768
Abstract
For an industrial process, the estimation of the feeding composition is important for analyzing the production status and making control decisions. However, random errors, or even gross ones, inevitably contaminate the actual measurements, and the feeding composition is conventionally obtained via discrete, low-rate manual testing. To address these problems, a feeding composition estimation approach based on a data reconciliation procedure is developed. To improve the accuracy of the variables, a novel robust M-estimator is first proposed; then, an iterative robust hierarchical data reconciliation and estimation strategy is applied to estimate the feeding composition. The feasibility and effectiveness of the approach are verified on a fluidized bed roaster, and the proposed M-estimator shows better overall performance. Full article
(This article belongs to the Special Issue Complex Dynamic System Modelling, Identification and Control)

11 pages, 2171 KiB  
Article
Empirical Mode Decomposition-Derived Entropy Features Are Beneficial to Distinguish Elderly People with a Falling History on a Force Plate Signal
by Li-Wei Chou, Kang-Ming Chang, Yi-Chun Wei and Mei-Kuei Lu
Entropy 2021, 23(4), 472; https://doi.org/10.3390/e23040472 - 16 Apr 2021
Cited by 7 | Viewed by 2110
Abstract
Fall risk prediction is an important issue for the elderly. A center of pressure (COP) signal, derived from a force plate, is useful for the estimation of body calibration; however, it is still difficult to distinguish elderly people's fall history using a force plate signal. In this study, older adults with and without a history of falls were recruited to stand still for 60 s on a force plate. Forces in the x, y, and z directions (Fx, Fy, and Fz) and the center of pressure in the anteroposterior (COPx) and mediolateral (COPy) directions were derived. There were 49 subjects in the non-fall group, with an average age of 71.67 years (standard deviation: 6.56), and 27 subjects in the fall group, with an average age of 70.66 years (standard deviation: 6.38). Five signal series (Fx, Fy, Fz, COPx, and COPy) were used, and each was further decomposed by empirical mode decomposition (EMD) into seven intrinsic mode functions. Time domain features (mean, standard deviation, and coefficient of variation) and entropy features (approximate entropy and sample entropy) of the original signals and the EMD-derived signals were extracted. Results showed that features extracted from the raw COP data did not differ significantly between the fall and non-fall groups, whereas 10 features extracted using EMD showed significant differences between the groups: four features from COPx and two features each from COPy, Fx, and Fz. Full article
(This article belongs to the Special Issue Entropy in Biomedical Applications)
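
A hedged sketch of the feature pipeline described above: decompose a COP series into intrinsic mode functions with EMD, then compute the sample entropy of each. The PyEMD package (pip package `EMD-signal`) and the file name are assumptions, and the quadratic-time sample entropy below is suitable for short recordings only.

```python
import numpy as np
from PyEMD import EMD          # pip install EMD-signal (assumed dependency)

def sample_entropy(x, m=2, r_factor=0.2):
    """Plain sample entropy: -log of the ratio of (m+1)- to m-length
    template matches within tolerance r (O(n^2) memory and time)."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)
        return (d <= r).sum() - len(emb)       # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

cop_x = np.loadtxt("copx.txt")[:2000]          # placeholder COP series, truncated
imfs = EMD().emd(cop_x)                        # intrinsic mode functions
feats = {f"IMF{i + 1}_sampen": sample_entropy(imf)
         for i, imf in enumerate(imfs[:7])}
print(feats)
```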

18 pages, 786 KiB  
Article
Effect of Inter-System Coupling on Heat Transport in a Microscopic Collision Model
by Feng Tian, Jian Zou, Lei Li, Hai Li and Bin Shao
Entropy 2021, 23(4), 471; https://doi.org/10.3390/e23040471 - 16 Apr 2021
Cited by 5 | Viewed by 1913
Abstract
In this paper we consider a bipartite system composed of two subsystems, each coupled to its own thermal environment. Based on a collision model, we mainly study whether the common approximation of ignoring the inter-system coupling when modeling the system-environment interaction is valid. We also address the problem of heat transport in a unified manner for both excitation-conserving and non-excitation-conserving system-environment interactions. For the former interaction, as the inter-system interaction strength increases, the approximation at first gets worse, as expected, but then, counter-intuitively, gets better for even stronger inter-system coupling. For the latter interaction with asymmetry, the approximation gets progressively worse; in this case we realize perfect thermal rectification, whereas no apparent rectification effect is found for the former interaction. Finally, and more importantly, our results show that the validity of this approximation is closely related to the quantum correlations between the subsystems: the weaker the quantum correlations, the more justified the approximation, and vice versa. Full article

12 pages, 1478 KiB  
Article
Applying the Horizontal Visibility Graph Method to Study Irreversibility of Electromagnetic Turbulence in Non-Thermal Plasmas
by Belén Acosta-Tripailao, Denisse Pastén and Pablo S. Moya
Entropy 2021, 23(4), 470; https://doi.org/10.3390/e23040470 - 16 Apr 2021
Cited by 14 | Viewed by 2545
Abstract
One of the fundamental open questions in plasma physics is the role of non-thermal particle distributions in poorly collisional plasma environments, which are common throughout the Universe; the solar wind and the Earth's magnetosphere, for example, are natural plasma physics laboratories in which turbulent phenomena can be studied. Our perspective is based on the Horizontal Visibility Graph (HVG) method, developed in recent years to analyze time series while avoiding the tedium and high computational cost of other methods. Here, we build a complex network by applying the directed HVG technique to magnetic field fluctuation time series obtained from Particle-In-Cell (PIC) simulations of a magnetized collisionless plasma, in order to distinguish the degree distributions and to calculate the Kullback–Leibler Divergence (KLD) as a measure of the relative entropy of data sets produced by processes that are not in equilibrium. First, we analyze the connectivity probability distribution for the undirected version of the HVG, finding that the Kappa distribution for low values of κ tends toward an uncorrelated time series, while the Maxwell–Boltzmann distribution shows the behavior of a correlated stochastic process. Subsequently, we investigate the degree of temporal irreversibility of magnetic fluctuations that are self-generated by the plasma, comparing the case of a thermal plasma (described by a Maxwell–Boltzmann velocity distribution function) with non-thermal Kappa distributions. We show that the KLD associated with the HVG is able to distinguish the level of reversibility associated with thermal equilibrium in the plasma, because the dissipative degree of the system increases as the κ parameter decreases and the distribution function departs from the Maxwell–Boltzmann equilibrium. Full article
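
A minimal sketch of the directed-HVG irreversibility measure described above, under the usual convention that the KLD between the out-degree and in-degree distributions quantifies time irreversibility. The O(n²) construction and the Gaussian stand-in series are illustrative assumptions.

```python
import numpy as np

def hvg_in_out_degrees(x):
    """Directed horizontal visibility graph: i -> j (i < j) when every sample
    strictly between them lies below both. Simple quadratic-time sketch."""
    n = len(x)
    k_in, k_out = np.zeros(n, int), np.zeros(n, int)
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[m] < min(x[i], x[j]) for m in range(i + 1, j)):
                k_out[i] += 1
                k_in[j] += 1
            if x[j] >= x[i]:   # a sample at/above x[i] blocks all further visibility
                break
    return k_in, k_out

def degree_pmf(k, kmax):
    p = np.bincount(k, minlength=kmax + 1).astype(float)
    return p / p.sum()

def kld(p, q, eps=1e-12):
    """KL divergence with a small floor to keep the estimate finite."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

x = np.random.default_rng(0).standard_normal(500)  # stand-in for B-field fluctuations
k_in, k_out = hvg_in_out_degrees(x)
kmax = max(k_in.max(), k_out.max())
print("irreversibility KLD:", kld(degree_pmf(k_out, kmax), degree_pmf(k_in, kmax)))
```

For a statistically reversible series (such as this Gaussian noise) the KLD should stay near zero; irreversible, dissipative dynamics push it upward.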

11 pages, 922 KiB  
Article
Comparison between Highly Complex Location Models and GAMLSS
by Thiago G. Ramires, Luiz R. Nakamura, Ana J. Righetto, Renan J. Carvalho, Lucas A. Vieira and Carlos A. B. Pereira
Entropy 2021, 23(4), 469; https://doi.org/10.3390/e23040469 - 16 Apr 2021
Cited by 5 | Viewed by 2516
Abstract
This paper presents a discussion of regression models, especially those belonging to the location class. Our main motivation is that simple distributions with simple interpretations can, in some cases, give better results than overly complex distributions. For instance, with the reverse Gumbel (RG) distribution, it is possible to explain response variables by making use of the generalized additive models for location, scale, and shape (GAMLSS) framework, which allows the fitting of several parameters (characteristics) of the probability distributions, such as the mean, mode, and variance. Three real data applications are used to compare several location models against the RG under the GAMLSS framework. The intention is to show that using a simple distribution (e.g., RG) within a more sophisticated regression structure may be preferable to using a more complex location model. Full article

10 pages, 876 KiB  
Article
The Quality of Statistical Reporting and Data Presentation in Predatory Dental Journals Was Lower Than in Non-Predatory Journals
by Pentti Nieminen and Sergio E. Uribe
Entropy 2021, 23(4), 468; https://doi.org/10.3390/e23040468 - 16 Apr 2021
Cited by 9 | Viewed by 3495
Abstract
Proper peer review and the quality of published articles are often regarded as signs of reliable scientific journals. The aim of this study was to compare whether the quality of statistical reporting and data presentation differs between articles published in ‘predatory dental journals’ and in other dental journals. We evaluated 50 articles published in ‘predatory open access (OA) journals’ and 100 clinical trials published in legitimate dental journals between 2019 and 2020. The quality of statistical reporting and data presentation of each paper was assessed on a scale from 0 (poor) to 10 (high). The mean (SD) quality score was 2.5 (1.4) for the predatory OA journals, 4.8 (1.8) for the legitimate OA journals, and 5.6 (1.8) for the more visible dental journals; the means differed significantly (p < 0.001). The quality of statistical reporting of clinical studies published in predatory journals was thus lower than in legitimate open access and highly cited journals. This difference in quality is a wake-up call to read study results critically. Poor statistical reporting indicates generally lower quality in publications whose authors and journals are less exposed to critique through peer review. Full article
(This article belongs to the Special Issue Statistical Methods for Medicine and Health Sciences)

21 pages, 1153 KiB  
Article
Toward a Comparison of Classical and New Privacy Mechanism
by Daniel Heredia-Ductram, Miguel Nunez-del-Prado and Hugo Alatrista-Salas
Entropy 2021, 23(4), 467; https://doi.org/10.3390/e23040467 - 15 Apr 2021
Cited by 1 | Viewed by 2135
Abstract
In recent decades, the development of interconnectivity, pervasive systems, citizen sensors, and Big Data technologies has allowed us to gather a wealth of data from different sources worldwide. This phenomenon has raised privacy concerns around the globe, compelling states to enforce data protection laws. In parallel, privacy-enhancing techniques have emerged to meet regulation requirements, allowing companies and researchers to exploit individual data in a privacy-aware way. Thus, data curators need to find the most suitable algorithms to meet a required trade-off between utility and privacy. This crucial task can take a lot of time, since there is a lack of benchmarks of privacy techniques. To fill this gap, in the present effort we compare classical privacy techniques, such as Statistical Disclosure Control and Differential Privacy, to more recent techniques, such as Generative Adversarial Networks and Machine Learning Copies, using a complete commercial database. The results allow us to show the evolution of privacy techniques and to depict new uses of privacy-aware Machine Learning techniques. Full article
(This article belongs to the Special Issue Machine Learning Ecosystems: Opportunities and Threats)
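
As one concrete example of the classical techniques being benchmarked, here is a minimal sketch of the Laplace mechanism of differential privacy. The dataset, bounds, and epsilon are illustrative, not drawn from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Epsilon-differentially-private release of a numeric query:
    add Laplace noise with scale sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

ages = np.random.default_rng(0).integers(18, 90, size=10_000)
true_mean = ages.mean()
# Sensitivity of the mean of n values bounded in [18, 90] is (90 - 18) / n.
noisy_mean = laplace_mechanism(true_mean, (90 - 18) / len(ages), epsilon=0.1)
print(f"true {true_mean:.3f}  vs  DP release {noisy_mean:.3f}")
```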

40 pages, 4144 KiB  
Article
“Exact” and Approximate Methods for Bayesian Inference: Stochastic Volatility Case Study
by Yuliya Shapovalova
Entropy 2021, 23(4), 466; https://doi.org/10.3390/e23040466 - 15 Apr 2021
Cited by 4 | Viewed by 3678
Abstract
We conduct a case study in which we empirically illustrate the performance of different classes of Bayesian inference methods for estimating stochastic volatility models. In particular, we consider how different particle filtering methods affect the variance of the estimated likelihood. We review and compare particle Markov Chain Monte Carlo (MCMC), Riemannian manifold Hamiltonian Monte Carlo (RMHMC), fixed-form variational Bayes, and the integrated nested Laplace approximation for estimating the posterior distribution of the parameters. Additionally, we conduct the review from the point of view of whether these methods are (1) easily adaptable to different model specifications; (2) adaptable to higher dimensions of the model in a straightforward way; and (3) feasible in the multivariate case. We show that when using the stochastic volatility model for method comparison, various data-generating processes have to be considered to make a fair assessment of the methods. Finally, we present a challenging specification of the multivariate stochastic volatility model, which is rarely used to illustrate these methods but constitutes an important practical application. Full article
(This article belongs to the Special Issue Approximate Bayesian Inference)
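
A hedged sketch of the bootstrap particle filter for the basic univariate stochastic volatility model; the variance of the log-likelihood estimate it returns is exactly the quantity whose behavior the paper compares across filtering methods. Parameter values and the simulated series are illustrative.

```python
import numpy as np

def sv_log_likelihood(y, mu, phi, sigma, n_particles=2000, seed=0):
    """Bootstrap particle filter log-likelihood estimate for the SV model
        h_t = mu + phi (h_{t-1} - mu) + sigma eta_t,   y_t = exp(h_t / 2) eps_t."""
    rng = np.random.default_rng(seed)
    # Initialize particles from the stationary distribution of h.
    h = mu + sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal(n_particles)
    loglik = 0.0
    for yt in y:
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_particles)
        # Gaussian observation density N(yt; 0, exp(h)) for each particle.
        logw = -0.5 * (np.log(2 * np.pi) + h + yt ** 2 * np.exp(-h))
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())
        h = h[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return loglik

rng = np.random.default_rng(1)
h_true, ys = 0.0, []
for _ in range(200):
    h_true = 0.95 * h_true + 0.3 * rng.standard_normal()
    ys.append(np.exp(h_true / 2) * rng.standard_normal())
print(sv_log_likelihood(np.array(ys), mu=0.0, phi=0.95, sigma=0.3))
```

Running the estimator repeatedly with different seeds gives a direct empirical handle on the likelihood variance the study discusses.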

9 pages, 283 KiB  
Article
On the Locally Polynomial Complexity of the Projection-Gradient Method for Solving Piecewise Quadratic Optimisation Problems
by Agnieszka Prusińska, Krzysztof Szkatuła and Alexey Tret’yakov
Entropy 2021, 23(4), 465; https://doi.org/10.3390/e23040465 - 15 Apr 2021
Viewed by 1415
Abstract
This paper proposes a method for solving optimisation problems involving piecewise quadratic functions. The method provides a solution in a finite number of iterations, and its computational complexity is locally polynomial in the problem dimension, i.e., when the initial point belongs to a sufficiently small neighbourhood of the solution set. The proposed method can be applied to solving large systems of linear inequalities. Full article
(This article belongs to the Section Complexity)
28 pages, 1106 KiB  
Article
On a Variational Definition for the Jensen-Shannon Symmetrization of Distances Based on the Information Radius
by Frank Nielsen
Entropy 2021, 23(4), 464; https://doi.org/10.3390/e23040464 - 14 Apr 2021
Cited by 17 | Viewed by 7878
Abstract
We generalize the Jensen-Shannon divergence and the Jensen-Shannon diversity index by considering a variational definition with respect to a generic mean, thereby extending the notion of Sibson's information radius. The variational definition applies to an arbitrary distance and yields a new way to define a Jensen-Shannon symmetrization of distances. When the variational optimization is further constrained to belong to prescribed families of probability measures, we obtain relative Jensen-Shannon divergences and their equivalent Jensen-Shannon symmetrizations of distances, which generalize the concept of information projections. Finally, we touch upon applications of these variational Jensen-Shannon divergences and diversity indices to clustering and quantization tasks for probability measures, including statistical mixtures. Full article
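
In symbols, Sibson's information radius is the variational problem referred to above; for the arithmetic mean, the minimizer is the mixture distribution and the optimal value recovers the (generalized) Jensen-Shannon divergence. This is the standard result; the notation here is assumed.

```latex
% Sibson's information radius as a variational problem over a center Q.
R(P_1,\dots,P_n) \;=\; \min_{Q}\; \frac{1}{n} \sum_{i=1}^{n} D_{\mathrm{KL}}(P_i \,\|\, Q),
\qquad
Q^{\ast} \;=\; \frac{1}{n} \sum_{i=1}^{n} P_i .
```

For n = 2 the optimal value is exactly the Jensen-Shannon divergence; replacing the arithmetic mean by a generic mean yields the generalized symmetrizations studied in the paper.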

27 pages, 7136 KiB  
Article
Adaptive Diagnosis for Fault Tolerant Data Fusion Based on α-Rényi Divergence Strategy for Vehicle Localization
by Khoder Makkawi, Nourdine Ait-Tmazirte, Maan El Badaoui El Najjar and Nazih Moubayed
Entropy 2021, 23(4), 463; https://doi.org/10.3390/e23040463 - 14 Apr 2021
Cited by 9 | Viewed by 1921
Abstract
When applying a diagnostic technique to complex systems whose dynamics, constraints, and environment evolve over time, being able to re-evaluate the residuals capable of detecting faults, and to propose the most appropriate ones, quickly proves worthwhile. For this purpose, the concept of adaptive diagnosis is introduced. In this work, the contributions of information theory are investigated in order to propose a fault-tolerant multi-sensor data fusion framework. This work is part of a line of studies proposing an architecture that combines a stochastic filter for state estimation with a diagnostic layer, with the aim of providing a safe and accurate state estimate from potentially inconsistent or erroneous sensor measurements. From the design of the residuals, using the α-Rényi divergence (α-RD), to the optimization of the decision threshold, through the establishment of a function dedicated to the choice of α at each moment, we detail each step of the proposed automated decision-support framework. We also dwell on (1) the consequences of the degree of freedom provided by the α parameter and (2) the application-dictated policy for designing the α tuning function, which affects the overall performance of the system (detection rate, false alarm rate, and missed detection rate). Finally, we present a real application case on which the framework has been tested: a multi-sensor localization problem integrating sensors whose operating range varies with the environment crossed, a case study that illustrates the contributions of such an approach and demonstrates its performance. Full article
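
For reference, the α-Rényi divergence underlying the residual design reads as follows (continuous form; the limit α → 1 recovers the Kullback–Leibler divergence).

```latex
% alpha-Renyi divergence between densities p and q (alpha > 0, alpha != 1).
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
\ln \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx .
```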
