Entropy, Volume 25, Issue 10 (October 2023) – 110 articles

Cover Story: A Monte Carlo approach is proposed for implementing Lynden–Bell (LB) entropy maximization for systems with long-range interactions. The direct maximization of LB entropy for an arbitrary initial particle distribution requires an infinite number of Lagrange multipliers, limiting its applicability. The present approach discretizes the initial particle distribution into density levels, which are then evolved to LB equilibrium using a Monte Carlo method. A comparison with Molecular Dynamics (MD) simulations reveals that initial distributions do not fully relax at the maximum of LB entropy. In particular, the stationary particle distributions obtained using MD simulations exhibit a hard cutoff instead of the soft tail predicted by the LB theory.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 5240 KiB  
Article
Advanced Exergy-Based Analysis of an Organic Rankine Cycle (ORC) for Waste Heat Recovery
by Zineb Fergani and Tatiana Morosuk
Entropy 2023, 25(10), 1475; https://doi.org/10.3390/e25101475 - 23 Oct 2023
Viewed by 1146
Abstract
In this study, advanced exergy and exergoeconomic analyses are applied to an Organic Rankine Cycle (ORC) for waste heat recovery to identify the potential for thermodynamic and economic improvement of the system (splitting the decision variables into avoidable/unavoidable parts) and the interdependencies between the components (endogenous and exogenous parts). For the first time, the advanced analysis has been applied under different conditions: constant heat rate supplied to the ORC or constant power generated by the ORC. The system simulation was performed in Matlab. The results show that the interactions among components of the ORC system are not strong; therefore, a component-by-component optimization approach can be applied. The evaporator and condenser are the most important components to improve from both thermodynamic and cost perspectives. The advanced exergoeconomic (graphical) optimization of these components indicates that the minimum temperature difference in the evaporator should be increased while that in the condenser should be decreased. The optimization results show that the exergetic efficiency of the ORC system can be improved from 27.1% to 27.7%, while the cost of generated electricity decreases from 18.14 USD/GJ to 18.09 USD/GJ. Full article
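For readers new to advanced exergy analysis, the splitting mentioned in the abstract follows the standard decomposition of each component's exergy destruction rate (textbook notation, not reproduced from the paper itself):

```latex
\dot{E}_{D,k} \;=\; \dot{E}_{D,k}^{\mathrm{AV}} + \dot{E}_{D,k}^{\mathrm{UN}}
\;=\; \dot{E}_{D,k}^{\mathrm{EN}} + \dot{E}_{D,k}^{\mathrm{EX}},
```

where AV/UN denote the avoidable and unavoidable parts and EN/EX the endogenous and exogenous parts; the interdependencies between components are read off the EN/EX split.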
(This article belongs to the Special Issue Women’s Special Issue Series: Entropy)

13 pages, 1184 KiB  
Article
Gaussian and Lerch Models for Unimodal Time Series Forecasting
by Azzouz Dermoune, Daoud Ounaissi and Yousri Slaoui
Entropy 2023, 25(10), 1474; https://doi.org/10.3390/e25101474 - 22 Oct 2023
Viewed by 1012
Abstract
We consider unimodal time series forecasting. We propose Gaussian and Lerch models for this forecasting problem. The Gaussian model depends on three parameters and the Lerch model depends on four parameters. We estimate the unknown parameters by minimizing the sum of the absolute values of the residuals. We solve these minimizations with and without a weighted median and compare both approaches. As a numerical application, we consider the daily infections of COVID-19 in China using the Gaussian and Lerch models. We derive a confidence interval for the daily infections from each local minimum. Full article
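The estimation step described in the abstract (a three-parameter Gaussian model fitted by minimizing the sum of absolute residuals) can be sketched as follows; the synthetic series, parameter names, and choice of optimizer are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic unimodal series: an illustrative stand-in for daily-infection data
t = np.arange(100, dtype=float)
rng = np.random.default_rng(0)
y = 50.0 * np.exp(-0.5 * ((t - 40.0) / 10.0) ** 2) + rng.normal(0.0, 1.0, t.size)

def l1_loss(params):
    # Sum of absolute residuals: the L1 criterion the abstract describes
    a, mu, sigma = params
    fit = a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return np.abs(y - fit).sum()

# Fit the three-parameter Gaussian model; start from crude data-driven guesses
res = minimize(l1_loss, x0=[y.max(), float(t[y.argmax()]), 5.0],
               method="Nelder-Mead")
a_hat, mu_hat, sigma_hat = res.x
```

The L1 criterion makes the fit robust to occasional outliers in the series, which is the motivation for preferring it over least squares here.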

12 pages, 610 KiB  
Article
New Construction of Asynchronous Channel Hopping Sequences in Cognitive Radio Networks
by Yaoxuan Wang, Xianhua Niu, Chao Qi, Zhihang He and Bosen Zeng
Entropy 2023, 25(10), 1473; https://doi.org/10.3390/e25101473 - 22 Oct 2023
Viewed by 854
Abstract
The channel-hopping-based rendezvous is essential to alleviate the problem of under-utilization and scarcity of the spectrum in cognitive radio networks. It dynamically allows unlicensed secondary users to schedule rendezvous channels using the assigned hopping sequence to guarantee the self-organization property in a limited time. In this paper, we use the interleaving technique to construct a set of asynchronous channel-hopping sequences (CHSs) consisting of d sequences of period xN² with flexible parameters, which can generate sequences of different lengths. Owing to this flexibility, the newly designed CHSs can be adapted to the demands of various communication scenarios. Furthermore, we focus on the improved maximum-time-to-rendezvous and maximum-first-time-to-rendezvous performance of the new construction compared to prior research at the same sequence length. The new channel-hopping sequences ensure that rendezvous occurs between any two sequences and that the rendezvous times are random and unpredictable when using licensed channels under asynchronous access, although full degree-of-rendezvous is not satisfied. Our simulation results show that the new construction is more balanced and unpredictable between the maximum-time-to-rendezvous and the mean and variance of the time-to-rendezvous. Full article
(This article belongs to the Special Issue Coding and Entropy)

16 pages, 1390 KiB  
Article
Convolutional Models with Multi-Feature Fusion for Effective Link Prediction in Knowledge Graph Embedding
by Qinglang Guo, Yong Liao, Zhe Li, Hui Lin and Shenglin Liang
Entropy 2023, 25(10), 1472; https://doi.org/10.3390/e25101472 - 21 Oct 2023
Viewed by 953
Abstract
Link prediction remains paramount in knowledge graph embedding (KGE), aiming to discern obscured or non-manifest relationships within a given knowledge graph (KG). Despite the critical nature of this endeavor, contemporary methodologies grapple with notable constraints, predominantly in terms of computational overhead and the intricacy of encapsulating multifaceted relationships. This paper introduces a sophisticated approach that amalgamates convolutional operators with pertinent graph structural information. By meticulously integrating information pertinent to entities and their immediate relational neighbors, we enhance the performance of the convolutional model, culminating in an averaged embedding ensuing from the convolution across entities and their proximal nodes. Significantly, our methodology presents a distinctive avenue, facilitating the inclusion of edge-specific data into the convolutional model’s input, thus endowing users with the latitude to calibrate the model’s architecture and parameters congruent with their specific dataset. Empirical evaluations underscore the ascendancy of our proposition over extant convolution-based link prediction benchmarks, particularly evident across the FB15k, WN18, and YAGO3-10 datasets. The primary objective of this research lies in forging KGE link prediction methodologies imbued with heightened efficiency and adeptness, thereby addressing salient challenges inherent to real-world applications. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing II)

29 pages, 5108 KiB  
Article
TURBO: The Swiss Knife of Auto-Encoders
by Guillaume Quétant, Yury Belousov, Vitaliy Kinakh and Slava Voloshynovskiy
Entropy 2023, 25(10), 1471; https://doi.org/10.3390/e25101471 - 21 Oct 2023
Cited by 1 | Viewed by 1433
Abstract
We present a novel information-theoretic framework, termed TURBO, designed to systematically analyse and generalise auto-encoding methods. We start by examining the principles of information bottleneck and bottleneck-based networks in the auto-encoding setting and identifying their inherent limitations, which become more prominent for data with multiple relevant, physics-related representations. The TURBO framework is then introduced, providing a comprehensive derivation of its core concept: the maximisation of mutual information between various data representations, expressed in two directions reflecting the information flows. We illustrate that numerous prevalent neural network models are encompassed within this framework. The paper underscores the insufficiency of the information bottleneck concept in elucidating all such models, thereby establishing TURBO as a preferable theoretical reference. The introduction of TURBO contributes to a richer understanding of data representation and the structure of neural network models, enabling more efficient and versatile applications. Full article
(This article belongs to the Special Issue Information Theory for Interpretable Machine Learning)

19 pages, 5707 KiB  
Article
Dynamic Semi-Supervised Federated Learning Fault Diagnosis Method Based on an Attention Mechanism
by Shun Liu, Funa Zhou, Shanjie Tang, Xiong Hu, Chaoge Wang and Tianzhen Wang
Entropy 2023, 25(10), 1470; https://doi.org/10.3390/e25101470 - 21 Oct 2023
Viewed by 1494
Abstract
In cases where a client suffers from completely unlabeled data, unsupervised learning has difficulty achieving an accurate fault diagnosis. Semi-supervised federated learning with the ability for interaction between a labeled client and an unlabeled client has been developed to overcome this difficulty. However, the existing semi-supervised federated learning methods may lead to a negative transfer problem since they fail to filter out unreliable model information from the unlabeled client. Therefore, in this study, a dynamic semi-supervised federated learning fault diagnosis method with an attention mechanism (SSFL-ATT) is proposed to prevent the federation model from experiencing negative transfer. A federation strategy driven by an attention mechanism was designed to filter out the unreliable information hidden in the local model. SSFL-ATT can ensure the federation model’s performance as well as render the unlabeled client capable of fault classification. In cases where there is an unlabeled client, compared to the existing semi-supervised federated learning methods, SSFL-ATT can achieve increments of 9.06% and 12.53% in fault diagnosis accuracy when datasets provided by Case Western Reserve University and Shanghai Maritime University, respectively, are used for verification. Full article

22 pages, 18479 KiB  
Article
Diffusion Probabilistic Modeling for Video Generation
by Ruihan Yang, Prakhar Srivastava and Stephan Mandt
Entropy 2023, 25(10), 1469; https://doi.org/10.3390/e25101469 - 20 Oct 2023
Cited by 106 | Viewed by 3031
Abstract
Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation. This paper showcases their ability to sequentially generate video, surpassing prior methods in perceptual and probabilistic forecasting metrics. We propose an autoregressive, end-to-end optimized video diffusion model inspired by recent advances in neural video compression. The model successively generates future frames by correcting a deterministic next-frame prediction using a stochastic residual generated by an inverse diffusion process. We compare this approach against six baselines on four datasets involving natural and simulation-based videos. We find significant improvements in terms of perceptual quality and probabilistic frame forecasting ability for all datasets. Full article
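Structurally, the autoregressive generation loop the abstract describes (deterministic next-frame prediction corrected by a stochastic residual) can be sketched as below. This is a toy NumPy sketch only: the identity predictor and the noise-shrinking "sampler" are placeholders for the paper's trained networks and true reverse-diffusion process.

```python
import numpy as np

rng = np.random.default_rng(1)
FRAME_SHAPE = (8, 8)  # toy frame size

def predict_next(frame):
    # Placeholder deterministic next-frame predictor (a trained net in the paper)
    return frame

def sample_residual(shape, steps=10):
    # Placeholder for the reverse-diffusion sampler generating the stochastic
    # residual; here it merely shrinks Gaussian noise step by step.
    x = rng.normal(size=shape)
    for _ in range(steps):
        x *= 0.5
    return x

def generate(first_frame, n_frames):
    frames = [first_frame]
    for _ in range(n_frames):
        mu = predict_next(frames[-1])                   # deterministic prediction
        frames.append(mu + sample_residual(mu.shape))   # corrected by residual
    return frames

video = generate(np.zeros(FRAME_SHAPE), n_frames=5)
```

The key design point the sketch preserves is that the stochastic residual, not the full frame, is what the diffusion process models.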
(This article belongs to the Special Issue Deep Generative Modeling: Theory and Applications)

29 pages, 1775 KiB  
Article
Variational Inference via Rényi Bound Optimization and Multiple-Source Adaptation
by Dana Zalman (Oshri) and Shai Fine
Entropy 2023, 25(10), 1468; https://doi.org/10.3390/e25101468 - 20 Oct 2023
Viewed by 960
Abstract
Variational inference provides a way to approximate probability densities through optimization. It does so by optimizing an upper or a lower bound of the likelihood of the observed data (the evidence). The classic variational inference approach suggests maximizing the Evidence Lower Bound (ELBO). Recent studies proposed to optimize the variational Rényi bound (VR) and the χ² upper bound. However, these estimates, which are based on the Monte Carlo (MC) approximation, either underestimate the bound or exhibit a high variance. In this work, we introduce a new upper bound, termed the Variational Rényi Log Upper bound (VRLU), which is based on the existing VR bound. In contrast to the existing VR bound, the MC approximation of the VRLU bound maintains the upper bound property. Furthermore, we devise a (sandwiched) upper–lower bound variational inference method, termed the Variational Rényi Sandwich (VRS), to jointly optimize the upper and lower bounds. We present a set of experiments designed to evaluate the new VRLU bound and to compare the VRS method with the classic Variational Autoencoder (VAE) and the VR methods. Next, we apply the VRS approximation to the Multiple-Source Adaptation (MSA) problem. MSA is a real-world scenario where data are collected from multiple sources that differ from one another by their probability distribution over the input space. The main aim is to combine fairly accurate predictive models from these sources and create an accurate model for new, mixed target domains. However, many domain adaptation methods assume prior knowledge of the data distribution in the source domains. In this work, we apply the suggested VRS density estimate to the MSA problem and show, both theoretically and empirically, that it provides tighter error bounds and improved performance compared to leading MSA methods. Full article
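For context, the variational Rényi bound the abstract builds on is usually written (standard notation following Li and Turner's formulation, not copied from this paper) as

```latex
\mathcal{L}_{\alpha}(q; x) \;=\; \frac{1}{1-\alpha}\,
\log \mathbb{E}_{q(z \mid x)}\!\left[
  \left(\frac{p(x, z)}{q(z \mid x)}\right)^{1-\alpha}
\right],
```

with α → 1 recovering the ELBO and α < 0 giving upper bounds on log p(x). A K-sample Monte Carlo estimate replaces the expectation with a sample mean, and Jensen's inequality applied to the outer log biases that estimate downward; this is what makes the MC estimate lose the upper-bound property that the proposed VRLU bound sets out to restore.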
(This article belongs to the Special Issue Entropy: The Cornerstone of Machine Learning)

17 pages, 8159 KiB  
Article
Denoising Vanilla Autoencoder for RGB and GS Images with Gaussian Noise
by Armando Adrián Miranda-González, Alberto Jorge Rosales-Silva, Dante Mújica-Vargas, Ponciano Jorge Escamilla-Ambrosio, Francisco Javier Gallegos-Funes, Jean Marie Vianney-Kinani, Erick Velázquez-Lozada, Luis Manuel Pérez-Hernández and Lucero Verónica Lozano-Vázquez
Entropy 2023, 25(10), 1467; https://doi.org/10.3390/e25101467 - 20 Oct 2023
Cited by 1 | Viewed by 981
Abstract
Noise suppression algorithms have been used in various tasks such as computer vision, industrial inspection, and video surveillance, among others. Robust image processing systems need to be fed with images close to the real scene; however, external factors sometimes alter the captured image data, which translates into a loss of information. Procedures are therefore required to recover the data closest to the real scene. This research project proposes a Denoising Vanilla Autoencoding (DVA) architecture using unsupervised neural networks for Gaussian denoising in color and grayscale images. The methodology improves on other state-of-the-art architectures in terms of objective numerical results. Additionally, a validation set and a high-resolution noisy image set are used, which reveal that our proposal outperforms other types of neural networks responsible for suppressing noise in images. Full article
(This article belongs to the Special Issue Pattern Recognition and Data Clustering in Information Theory)

37 pages, 543 KiB  
Article
Variable-Length Resolvability for General Sources and Channels
by Hideki Yagi and Te Sun Han
Entropy 2023, 25(10), 1466; https://doi.org/10.3390/e25101466 - 19 Oct 2023
Viewed by 798
Abstract
We introduce the problem of variable-length (VL) source resolvability, in which a given target probability distribution is approximated by encoding a VL uniform random number, and the asymptotically minimum average length rate of the uniform random number, called the VL resolvability, is investigated. We first analyze the VL resolvability with the variational distance as an approximation measure. Next, we investigate the case under the divergence as an approximation measure. When the asymptotically exact approximation is required, it is shown that the resolvability under two kinds of approximation measures coincides. We then extend the analysis to the case of channel resolvability, where the target distribution is the output distribution via a general channel due to a fixed general source as an input. The obtained characterization of channel resolvability is fully general in the sense that, when the channel is just an identity mapping, it reduces to general formulas for source resolvability. We also analyze the second-order VL resolvability. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory II)

16 pages, 6853 KiB  
Article
Quantifying Soil Complexity Using Fisher–Shannon Method on 3D X-ray Computed Tomography Scans
by Domingos Aguiar, Rômulo Simões Cezar Menezes, Antonio Celso Dantas Antonino, Tatijana Stosic, Ana M. Tarquis and Borko Stosic
Entropy 2023, 25(10), 1465; https://doi.org/10.3390/e25101465 - 19 Oct 2023
Viewed by 1102
Abstract
The conversion of native forest into agricultural land, which is common in many parts of the world, poses important questions regarding soil degradation, demanding further efforts to better understand the effect of land use change on soil functions. With the advent of 3D computed tomography techniques and computing power, new methods are becoming available to address this question. In this direction, in the current work we implement a modification of the Fisher–Shannon method, borrowed from information theory, to quantify the complexity of twelve 3D CT soil samples from a sugarcane plantation and twelve samples from a nearby native Atlantic forest in northeastern Brazil. The distinction found between the samples from the sugarcane plantation and the Atlantic forest site is quite pronounced. An accuracy of 91.7% was obtained by considering the complexity in the Fisher–Shannon plane. Atlantic forest samples are found to be generally more complex than those from the sugarcane plantation. Full article
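The two quantities behind the Fisher–Shannon plane can be estimated from a 1-D sample as below. This is a minimal histogram-based sketch under illustrative assumptions; the paper works with 3D CT images and its exact estimator is not reproduced here.

```python
import numpy as np

def fisher_shannon(sample, bins=64):
    # Histogram density estimate (a simple stand-in for the paper's estimator)
    p, edges = np.histogram(sample, bins=bins, density=True)
    dx = edges[1] - edges[0]
    mask = p > 0
    H = -np.sum(p[mask] * np.log(p[mask])) * dx      # Shannon (differential) entropy
    N = np.exp(2.0 * H) / (2.0 * np.pi * np.e)       # Shannon entropy power N_X
    dp = np.gradient(p, dx)                          # numerical derivative of density
    F = np.sum((dp[mask] ** 2) / p[mask]) * dx       # Fisher information F_X
    return N, F, N * F                               # Fisher-Shannon complexity C = N*F

rng = np.random.default_rng(0)
N, F, C = fisher_shannon(rng.normal(size=100_000))
# For a Gaussian sample, C should sit near its theoretical minimum of 1
```

Plotting (N, F) pairs for many samples gives the Fisher–Shannon plane in which the soil classes are separated.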
(This article belongs to the Section Complexity)

23 pages, 1485 KiB  
Article
Simulation Research on the Relationship between Selected Inconsistency Indices Used in AHP
by Tomasz Starczewski
Entropy 2023, 25(10), 1464; https://doi.org/10.3390/e25101464 - 19 Oct 2023
Viewed by 788
Abstract
The Analytic Hierarchy Process (AHP) is a widely used multi-criteria decision-making (MCDM) method. This method is based on pairwise comparison, which forms the so-called Pairwise Comparison Matrix (PCM). PCMs usually contain some errors, which can have an influence on the eventual results. In order to avoid incorrect values of priorities, the inconsistency index (ICI) was introduced into the AHP by Saaty. However, the user of the AHP can encounter many definitions of ICIs, whose values usually differ. Nevertheless, a lot of these indices are based on a similar idea, and the values of some pairs of these indices are characterized by high values of the correlation coefficient. In my work, I present some results of a Monte Carlo simulation, which allow us to observe these dependencies in the AHP. I select some pairs of ICIs and evaluate the values of the Pearson correlation coefficient for them. The results are compared with scatter plots that show the type of dependency between the selected ICIs. The presented research shows that some pairs of indices are so closely correlated that they can be used interchangeably. Full article
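The kind of experiment described above can be sketched as follows: random reciprocal matrices on the Saaty scale, with Saaty's CI correlated against the geometric consistency index (GCI). The index pair, matrix size, and sampling scheme here are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
# Saaty scale: 1/9 ... 1/2, 1, 2 ... 9
scale = np.array([1.0 / k for k in range(9, 1, -1)] + list(range(1, 10)), dtype=float)

def random_pcm(n):
    """Random reciprocal pairwise comparison matrix on the Saaty scale."""
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = rng.choice(scale)
            A[j, i] = 1.0 / A[i, j]
    return A

def saaty_ci(A):
    # CI = (lambda_max - n) / (n - 1), lambda_max the principal eigenvalue
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)
    return (lam_max - n) / (n - 1)

def gci(A):
    # Geometric consistency index (Crawford-Williams / Aguaron-Moreno)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)          # geometric-mean priorities
    E = np.log(A * w[None, :] / w[:, None])      # log errors e_ij = a_ij * w_j / w_i
    iu = np.triu_indices(n, k=1)
    return 2.0 * np.sum(E[iu] ** 2) / ((n - 1) * (n - 2))

mats = [random_pcm(4) for _ in range(2000)]
ci_vals = np.array([saaty_ci(A) for A in mats])
gci_vals = np.array([gci(A) for A in mats])
pearson = np.corrcoef(ci_vals, gci_vals)[0, 1]
```

A scatter plot of `ci_vals` against `gci_vals` reproduces the kind of dependency plot the paper discusses.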

9 pages, 746 KiB  
Article
Novel Entropy-Based Phylogenetic Algorithm: A New Approach for Classifying SARS-CoV-2 Variants
by Vladimir Perovic, Sanja Glisic, Milena Veljkovic, Slobodan Paessler and Veljko Veljkovic
Entropy 2023, 25(10), 1463; https://doi.org/10.3390/e25101463 - 19 Oct 2023
Viewed by 952
Abstract
The SARS-CoV-2 virus, the causative agent of COVID-19, is known for its genetic diversity. Virus variants of concern (VOCs) as well as variants of interest (VOIs) are classified by the World Health Organization (WHO) according to their potential risk to global health. This study seeks to enhance the identification and classification of such variants by developing a novel bioinformatics criterion centered on the virus’s spike protein (SP1), a key player in host cell entry, immune response, and a mutational hotspot. To achieve this, we pioneered a unique phylogenetic algorithm which calculates EIIP-entropy as a distance measure based on the distribution of the electron–ion interaction potential (EIIP) of amino acids in SP1. This method offers a comprehensive, scalable, and rapid approach to analyze large genomic data sets and predict the impact of specific mutations. This innovative approach provides a robust tool for classifying emergent SARS-CoV-2 variants into potential VOCs or VOIs. It could significantly augment surveillance efforts and understanding of variant characteristics, while also offering potential applicability to the analysis and classification of other emerging viral pathogens and enhancing global readiness against emerging and re-emerging viral pathogens. Full article
(This article belongs to the Special Issue Quantum Processes in Living Systems)

13 pages, 4584 KiB  
Article
Feedback Control of Quantum Correlations in a Cavity Magnomechanical System with Magnon Squeezing
by Mohamed Amazioug, Shailendra Singh, Berihu Teklu and Muhammad Asjad
Entropy 2023, 25(10), 1462; https://doi.org/10.3390/e25101462 - 18 Oct 2023
Cited by 2 | Viewed by 998
Abstract
We suggest a method to improve quantum correlations in cavity magnomechanics through the use of a coherent feedback loop and magnon squeezing. The entanglement of three bipartite subsystems, photon–phonon, photon–magnon, and phonon–magnon, is significantly improved by the proposed coherent feedback-control method. In addition, we investigate Einstein–Podolsky–Rosen steering under thermal effects in each of the subsystems. We also evaluate the scheme's performance and sensitivity to magnon squeezing. Furthermore, we compare entanglement and Gaussian quantum discord in both steady and dynamical states. Full article
(This article belongs to the Special Issue Advances in Quantum Communication)

15 pages, 597 KiB  
Article
Quantum Honeypots
by Naya Nagy, Marius Nagy, Ghadeer Alazman, Zahra Hawaidi, Saja Mustafa Alsulaibikh, Layla Alabbad, Sadeem Alfaleh and Areej Aljuaid
Entropy 2023, 25(10), 1461; https://doi.org/10.3390/e25101461 - 18 Oct 2023
Viewed by 999
Abstract
Quantum computation offers unique properties that cannot be paralleled by conventional computers. In particular, reading qubits may change their state and thus signal the presence of an intruder. This paper develops a proof-of-concept for a quantum honeypot that allows the detection of intruders on reading. The idea is to place quantum sentinels within all resources offered within the honeypot. In addition to the capabilities of classical honeypots, honeypots with quantum sentinels can trace the reading activity of the intruder within any resource. Sentinels can be set to be either visible and accessible to the intruder or hidden and unknown to them. Catching the intruder has a low theoretical probability per sentinel, but the probability can be made arbitrarily high by adding more sentinels. The main contributions of this paper are that the monitoring of the intruder can be carried out at the level of the information unit, such as the bit, and that the quantum monitoring activity is fully hidden from the intruder. Practical experiments, as performed in this research, show that the error rate of quantum computers has to be considerably reduced before implementations of this concept are feasible. Full article
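The probability amplification mentioned above is just the complement rule for independent sentinels. A minimal sketch (the per-sentinel probability 0.25 below is illustrative, not a figure from the paper):

```python
def detection_probability(p, k):
    """Chance that at least one of k independent sentinels, each with
    per-read trigger probability p, detects the intruder."""
    return 1.0 - (1.0 - p) ** k

# With an illustrative p = 0.25, sixteen sentinels already detect an
# intruder with probability 1 - 0.75**16, roughly 0.99.
```

Because the failure probabilities multiply, the detection probability approaches 1 exponentially fast in the number of sentinels.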
(This article belongs to the Special Issue Advances in Quantum Computing)

21 pages, 948 KiB  
Article
Looking into the Market Behaviors through the Lens of Correlations and Eigenvalues: An Investigation on the Chinese and US Markets Using RMT
by Yong Tang, Jason Xiong, Zhitao Cheng, Yan Zhuang, Kunqi Li, Jingcong Xie and Yicheng Zhang
Entropy 2023, 25(10), 1460; https://doi.org/10.3390/e25101460 - 18 Oct 2023
Viewed by 1186
Abstract
This research systematically analyzes the behaviors of correlations among stock prices and the eigenvalues of correlation matrices by utilizing random matrix theory (RMT) for the Chinese and US stock markets. Results suggest that most eigenvalues of both markets fall within the intervals predicted by RMT, whereas some larger eigenvalues fall beyond the noise and carry market information. The largest eigenvalue represents the market and is a good indicator of averaged correlations. Further, the average largest eigenvalue shows similar movement to the index for both markets. The analysis demonstrates that the fraction of eigenvalues falling beyond the predicted interval pinpoints major market switching points. It also identifies that the average of the eigenvector components corresponding to the largest eigenvalue switches with the market itself. The investigation of the second largest eigenvalue and its eigenvector suggests that the Chinese market is dominated by four industries whereas the US market contains three leading industries. The study then investigates how these features change before and after a market crash, revealing that the two markets behave differently; a major market structure change is observed in the Chinese market but not in the US market. The results shed new light on mining hidden information from stock market data. Full article
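The RMT comparison at the core of this analysis can be sketched as follows, with i.i.d. Gaussian noise standing in for real return series (N, T, and the seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 50, 500                       # number of "stocks", number of return observations
R = rng.normal(size=(N, T))          # i.i.d. noise stand-in for return series
C = np.corrcoef(R)                   # N x N empirical correlation matrix
eig = np.linalg.eigvalsh(C)          # its eigenvalue spectrum

q = N / T
lam_minus = (1 - np.sqrt(q)) ** 2    # Marchenko-Pastur lower edge
lam_plus = (1 + np.sqrt(q)) ** 2     # Marchenko-Pastur upper edge

# Fraction of eigenvalues inside the noise band predicted by RMT;
# eigenvalues above lam_plus would carry genuine market information
frac_inside = np.mean((eig >= lam_minus) & (eig <= lam_plus))
```

With real returns, the eigenvalues escaping above `lam_plus` (the market mode and sector modes) are the ones the study interprets.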

12 pages, 468 KiB  
Communication
Optimal Estimation of Quantum Coherence by Bell State Measurement: A Case Study
by Yuan Yuan, Xufeng Huang, Yueping Niu and Shangqing Gong
Entropy 2023, 25(10), 1459; https://doi.org/10.3390/e25101459 - 17 Oct 2023
Cited by 1 | Viewed by 870
Abstract
Quantum coherence is the most distinguished feature of quantum mechanics. As an important resource, it is widely applied in quantum information technologies, including quantum algorithms, quantum computation, quantum key distribution, and quantum metrology, so it is important to develop tools for the efficient estimation of coherence. Bell state measurement plays an important role in quantum information processing. In particular, as a two-copy collective measurement, it can directly measure the quantum coherence of an unknown quantum state in an experiment, without any optimization procedures, feedback, or complex mathematical calculations. In this paper, we analyze the performance of estimating quantum coherence with Bell state measurement for the qubit case from the perspectives of semiparametric estimation and single-parameter estimation. The numerical results show that Bell state measurement is the optimal measurement for estimating several frequently used coherence quantifiers, as demonstrated from the perspective of the quantum limit of semiparametric estimation and of the Fisher information. Full article
(This article belongs to the Special Issue Editorial Board Members' Collection Series on Quantum Entanglement)
15 pages, 1607 KiB  
Article
Entanglement Degradation in Two Interacting Qubits Coupled to Dephasing Environments
by Rahma Abdelmagid, Khadija Alshehhi and Gehad Sadiek
Entropy 2023, 25(10), 1458; https://doi.org/10.3390/e25101458 - 17 Oct 2023
Viewed by 904
Abstract
One of the main obstacles toward building efficient quantum computing systems is decoherence, where the inevitable interaction between the qubits and the surrounding environment leads to a vanishing entanglement. We consider a system of two interacting asymmetric two-level atoms (qubits) in the presence of pure and correlated dephasing environments. We study the dynamics of entanglement while varying the interaction strength between the two qubits, their relative frequencies, and their coupling strength to the environment starting from different initial states of practical interest. The impact of the asymmetry of the two qubits, reflected in their different frequencies and coupling strengths to the environment, varies significantly depending on the initial state of the system and its degree of anisotropy. For an initial disentangled, or a Werner, state, as the difference between the frequencies increases, the entanglement decay rate increases, with more persistence at the higher degrees of anisotropy in the former state. However, for an initial anti-correlated Bell state, the entanglement decays more rapidly in the symmetric case compared with the asymmetric one. The difference in the coupling strengths of the two qubits to the pure (uncorrelated) dephasing environment leads to higher entanglement decay in the different initial state cases, though the rate varies depending on the degree of anisotropy and the initial state. Interestingly, the correlated dephasing environment, within a certain range, was found to enhance the entanglement dynamics starting from certain initial states, such as the disentangled, anti-correlated Bell, and Werner, whereas it exhibits a decaying effect in other cases such as the initial correlated Bell state. Full article
13 pages, 2586 KiB  
Article
Synchronization of Complex Dynamical Networks with Stochastic Links Dynamics
by Juanxia Zhao, Yinhe Wang, Peitao Gao, Shengping Li and Yi Peng
Entropy 2023, 25(10), 1457; https://doi.org/10.3390/e25101457 - 17 Oct 2023
Cited by 1 | Viewed by 815
Abstract
The mean-square synchronization problem of the complex dynamical network (CDN) with stochastic link dynamics is investigated. In contrast to the previous literature, the CDN considered in this paper can be viewed as consisting of two subsystems coupled to each other. One subsystem consists of all the nodes, referred to as the nodes subsystem, and the other consists of all the links, referred to as the network topology subsystem, where the weighted values quantitatively reflect changes in the network’s topology. Based on this understanding of the CDN, two vector stochastic differential equations with Brownian motion are used to model the dynamic behaviors of the nodes and the links, respectively. The control strategy incorporates not only the controller in the nodes but also the coupling term in the links, through which the CDN is synchronized in the mean-square sense. Meanwhile, a dynamic stochastic signal is proposed, which serves as an auxiliary reference tracking target for the links, such that the links track the reference target asymptotically when synchronization occurs in the nodes. This implies that the eventual topological structure of the CDN is stochastic. Finally, a comparison simulation example confirms the superiority of the control strategy proposed in this paper. Full article
(This article belongs to the Special Issue Synchronization in Time-Evolving Complex Networks)
15 pages, 631 KiB  
Article
Optimizing Finite-Blocklength Nested Linear Secrecy Codes: Using the Worst Code to Find the Best Code
by Morteza Shoushtari and Willie Harrison
Entropy 2023, 25(10), 1456; https://doi.org/10.3390/e25101456 - 17 Oct 2023
Viewed by 873
Abstract
Nested linear coding is a widely used technique in wireless communication systems for improving both security and reliability. Some parameters, such as the relative generalized Hamming weight and the relative dimension/length profile, can be used to characterize the performance of nested linear codes. In addition, the rank properties of generator and parity-check matrices can also precisely characterize their security performance. Despite this, finding optimal nested linear secrecy codes remains a challenge in the finite-blocklength regime, often requiring brute-force search methods. This paper investigates the properties of nested linear codes, introduces a new representation of the relative generalized Hamming weight, and proposes a novel method for finding the best nested linear secrecy code for the binary erasure wiretap channel by working from the worst nested linear secrecy code in the dual space. We demonstrate that our algorithm significantly outperforms the brute-force technique in terms of speed and efficiency. Full article
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications II)
18 pages, 1464 KiB  
Article
Recovering Power Grids Using Strategies Based on Network Metrics and Greedy Algorithms
by Fenghua Wang, Hale Cetinay, Zhidong He, Le Liu, Piet Van Mieghem and Robert E. Kooij
Entropy 2023, 25(10), 1455; https://doi.org/10.3390/e25101455 - 17 Oct 2023
Cited by 1 | Viewed by 909
Abstract
For this study, we investigated efficient strategies for the recovery of individual links in power grids governed by the direct current (DC) power flow model, under random link failures. Our primary objective was to explore the efficacy of recovering failed links based solely on topological network metrics. In total, we considered 13 recovery strategies, which encompassed 2 strategies based on link centrality values (link betweenness and link flow betweenness), 8 strategies based on the products of node centrality values at link endpoints (degree, eigenvector, weighted eigenvector, closeness, electrical closeness, weighted electrical closeness, zeta vector, and weighted zeta vector), and 2 heuristic strategies (greedy recovery and two-step greedy recovery), in addition to the random recovery strategy. To evaluate the performance of these proposed strategies, we conducted simulations on three distinct power systems: the IEEE 30, IEEE 39, and IEEE 118 systems. Our findings revealed several key insights: Firstly, there were notable variations in the performance of the recovery strategies based on topological network metrics across different power systems. Secondly, all such strategies exhibited inferior performance when compared to the heuristic recovery strategies. Thirdly, the two-step greedy recovery strategy consistently outperformed the others, with the greedy recovery strategy ranking second. Based on our results, we conclude that relying solely on a single metric for the development of a recovery strategy is insufficient when restoring power grids following link failures. By comparison, recovery strategies employing greedy algorithms prove to be more effective choices. Full article
(This article belongs to the Special Issue Nonlinear Dynamical Behaviors in Complex Systems)
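The greedy recovery strategy described above can be sketched generically; here `performance` stands in for the paper's DC-power-flow-based metric, which is omitted, so this is an illustrative skeleton rather than the authors' algorithm:

```python
def greedy_recovery(failed_links, performance):
    """Greedy single-link recovery order: at each step, restore the failed
    link that maximizes the performance of the partially restored network.
    `performance(restored_set)` is any user-supplied scalar quality score."""
    remaining = set(failed_links)
    restored = []
    while remaining:
        best = max(remaining, key=lambda link: performance(set(restored) | {link}))
        restored.append(best)
        remaining.remove(best)
    return restored
```

The two-step variant mentioned in the abstract would instead score pairs of links at each step and commit only the first link of the best pair, trading extra evaluations for better foresight.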
18 pages, 483 KiB  
Article
Efficient Communications in V2V Networks with Two-Way Lanes Based on Random Linear Network Coding
by Yiqian Zhang, Tiantian Zhu and Congduan Li
Entropy 2023, 25(10), 1454; https://doi.org/10.3390/e25101454 - 17 Oct 2023
Viewed by 797
Abstract
Vehicle-to-vehicle (V2V) communication has gained significant attention in the field of intelligent transportation systems. In this paper, we focus on communication scenarios involving vehicles moving in the same and opposite directions. Specifically, we model a V2V network as a dynamic multi-source single-sink network with two-way lanes. To address rapid changes in network topology, we employ random linear network coding (RLNC), which eliminates the need for knowledge of the network topology. We begin by deriving the lower bound for the generation probability. Through simulations, we analyzed the probability distribution and cumulative probability distribution of latency under varying packet loss rates and batch sizes. Our results demonstrated that our RLNC scheme significantly reduced the communication latency, even under challenging channel conditions, when compared to the non-coding case. Full article
(This article belongs to the Special Issue Information Theory and Network Coding II)
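Random linear network coding as described above can be sketched minimally over GF(2) (XOR arithmetic). This is an illustrative implementation, not the paper's scheme, and it assumes the sink eventually receives `k` linearly independent coded packets:

```python
import random

def rlnc_encode(packets, n_coded, rng=random.Random(0)):
    """Produce coded packets as random GF(2) linear combinations of the
    k source packets (each packet is a list of bits)."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero combination
        payload = [0] * len(packets[0])
        for c, p in zip(coeffs, packets):
            if c:
                payload = [a ^ b for a, b in zip(payload, p)]
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2); recovers the k source packets,
    assuming the received coefficient vectors have rank k."""
    rows = [(list(c), list(p)) for c, p in coded]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           [a ^ b for a, b in zip(rows[i][1], rows[col][1])])
    return [rows[i][1] for i in range(k)]
```

Because decoding needs only the coefficient vectors carried in each packet header, no node requires knowledge of the (rapidly changing) V2V topology, which is the property the abstract relies on.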
14 pages, 317 KiB  
Article
The Fundamental Tension in Integrated Information Theory 4.0’s Realist Idealism
by Ignacio Cea, Niccolo Negro and Camilo Miguel Signorelli
Entropy 2023, 25(10), 1453; https://doi.org/10.3390/e25101453 - 16 Oct 2023
Cited by 1 | Viewed by 1685
Abstract
Integrated Information Theory (IIT) is currently one of the most influential scientific theories of consciousness. Here, we focus specifically on a metaphysical aspect of the theory’s most recent version (IIT 4.0), what we may call its idealistic ontology, and its tension with a kind of realism about the external world that IIT also endorses. IIT 4.0 openly rejects the mainstream view that consciousness is generated by the brain, positing instead that consciousness is ontologically primary while the physical domain is just “operational”. However, this philosophical position is presently underdeveloped and is not rigorously formulated in IIT, potentially leading to many misinterpretations and undermining its overall explanatory power. In the present paper we aim to address this issue. We argue that IIT’s idealistic ontology should be understood as a specific combination of phenomenal primitivism, reductionism regarding Φ-structures and complexes, and eliminativism about non-conscious physical entities. Having clarified this, we then focus on the problematic tension between IIT’s idealistic ontology and its simultaneous endorsement of realism, according to which there is some kind of external reality independent of our minds. After refuting three potential solutions to this theoretical tension, we propose the most plausible alternative: understanding IIT’s realism as an assertion of the existence of other experiences beyond one’s own, what we call a non-solipsistic idealist realism. We end with concluding remarks and future research avenues. Full article
(This article belongs to the Special Issue Integrated Information Theory and Consciousness II)
24 pages, 7476 KiB  
Article
Early Fault Detection of Rolling Bearings Based on Time-Varying Filtering Empirical Mode Decomposition and Adaptive Multipoint Optimal Minimum Entropy Deconvolution Adjusted
by Shuo Song and Wenbo Wang
Entropy 2023, 25(10), 1452; https://doi.org/10.3390/e25101452 - 16 Oct 2023
Viewed by 1177
Abstract
Due to the early formation of rolling bearing fault characteristics in an environment with strong background noise, the single use of the time-varying filtering empirical mode decomposition (TVFEMD) method is not effective for the extraction of fault characteristics. To solve this problem, a new method for early fault detection of rolling bearings is proposed, which combines multipoint optimal minimum entropy deconvolution adjusted (MOMEDA) with parameter optimization and TVFEMD. Firstly, a new weighted envelope spectrum kurtosis index is constructed using the correlation coefficient and envelope spectrum kurtosis, which is used to identify the effective component and noise component of the bearing fault signal decomposed by TVFEMD, and the intrinsic mode function (IMF) containing rich fault information is selected for reconstruction. Then, a new synthetic impact index (SII) is constructed by combining the maximum value of the autocorrelation function and the kurtosis of the envelope spectrum. The SII index is used as the fitness function of the gray wolf optimization algorithm to optimize the fault period, T, and the filter length, L, of MOMEDA. The signal reconstructed by TVFEMD undergoes adaptive filtering using the MOMEDA method after parameter optimization. Finally, an envelope spectrum analysis is performed on the signal filtered by the adaptive MOMEDA method to extract fault feature information. The experimental results of the simulated and measured signals indicate that this method can effectively extract early fault features of rolling bearings and has good reliability. Compared to the classical FSK, MCKD, and TVFEMD-MOMEDA methods, the first-order correlated kurtosis (FCK) and fault feature coefficient (FFC) of the filtered signal obtained using the proposed method are the largest, while the sample entropy (SE) and envelope spectrum entropy (ESE) are the smallest. Full article
16 pages, 7280 KiB  
Article
Automatic P-Phase-Onset-Time-Picking Method of Microseismic Monitoring Signal of Underground Mine Based on Noise Reduction and Multiple Detection Indexes
by Rui Dai, Yibo Wang, Da Zhang and Hu Ji
Entropy 2023, 25(10), 1451; https://doi.org/10.3390/e25101451 - 16 Oct 2023
Viewed by 838
Abstract
The underground pressure disaster caused by the exploitation of deep mineral resources has become a major hidden danger restricting the safe production of mines. Microseismic monitoring technology is a universally recognized means of underground pressure monitoring and early warning. In this paper, the wavelet coefficient threshold denoising method in the time–frequency domain, the STA/LTA method, the AIC method, and the skewness and kurtosis method are studied, and an automatic P-phase-onset-time-picking model based on noise reduction and multiple detection indexes is established. Through an analysis of microseismic signals collected by the microseismic monitoring system of the Coral Tungsten Mine in Guangxi, automatic P-phase onset time picking is realized, and the reliability of the proposed P-phase-onset-time-picking method based on noise reduction and multiple detection indexes is verified. The picking accuracy can still be guaranteed under severe interference from background noise, power-frequency interference, and human activity in the underground mine, which is of great significance for the data processing and analysis of microseismic monitoring. Full article
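Of the detection indexes listed, the STA/LTA ratio is the simplest to illustrate. A minimal sketch (with hypothetical parameter values, not the authors' implementation) that flags the first sample where short-term energy jumps relative to the long-term background:

```python
def sta_lta_pick(signal, sta_len, lta_len, threshold):
    """Return the first sample index where the short-term-average to
    long-term-average energy ratio exceeds `threshold` (a rough P-onset
    candidate), or None if no trigger occurs."""
    energy = [x * x for x in signal]
    for i in range(lta_len, len(signal) - sta_len + 1):
        lta = sum(energy[i - lta_len:i]) / lta_len   # background level behind i
        sta = sum(energy[i:i + sta_len]) / sta_len   # fresh energy ahead of i
        if lta > 0 and sta / lta > threshold:
            return i
    return None
```

In a multi-index picker of the kind described above, such a coarse STA/LTA trigger would typically be refined by the AIC and skewness/kurtosis criteria within a window around the trigger.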
16 pages, 429 KiB  
Article
Pullback Bundles and the Geometry of Learning
by Stéphane Puechmorel
Entropy 2023, 25(10), 1450; https://doi.org/10.3390/e25101450 - 15 Oct 2023
Viewed by 980
Abstract
Explainable Artificial Intelligence (XAI) and acceptable artificial intelligence are active topics of research in machine learning. For critical applications, being able to prove, or at least to ensure with high probability, the correctness of algorithms is of utmost importance. In practice, however, few theoretical tools are known that can be used for this purpose. Using the Fisher Information Metric (FIM) on the output space yields interesting indicators in both the input and parameter spaces, but the underlying geometry is not yet fully understood. In this work, an approach based on the pullback bundle, a well-known trick for describing bundle morphisms, is introduced and applied to the encoder–decoder block. Under a constant-rank hypothesis on the derivative of the network with respect to its inputs, a description of its behavior is obtained. Further generalization is gained through the introduction of the pullback generalized bundle, which takes into account the sensitivity with respect to the weights. Full article
(This article belongs to the Special Issue Information Geometry for Data Analysis)
11 pages, 2235 KiB  
Article
Mutual Information and Correlations across Topological Phase Transitions in Topologically Ordered Graphene Zigzag Nanoribbons
by In-Hwan Lee, Hoang-Anh Le and S.-R. Eric Yang
Entropy 2023, 25(10), 1449; https://doi.org/10.3390/e25101449 - 15 Oct 2023
Cited by 1 | Viewed by 1023
Abstract
Graphene zigzag nanoribbons, initially in a topologically ordered state, undergo a topological phase transition into crossover phases distinguished by quasi-topological order. We computed mutual information for both the topologically ordered phase and its crossover phases, revealing the following results: (i) In the topologically ordered phase, A-chirality carbon lines strongly entangle with B-chirality carbon lines on the opposite side of the zigzag ribbon. This entanglement persists but weakens in crossover phases. (ii) The upper zigzag edge entangles with non-edge lines of different chirality on the opposite side of the ribbon. (iii) Entanglement increases as more carbon lines are grouped together, regardless of the lines’ chirality. No long-range entanglement was found in the symmetry-protected phase in the absence of disorder. Full article
(This article belongs to the Special Issue Entanglement Entropy and Quantum Phase Transition)
10 pages, 283 KiB  
Article
Information-Theoretic Models for Physical Observables
by D. Bernal-Casas and J. M. Oller
Entropy 2023, 25(10), 1448; https://doi.org/10.3390/e25101448 - 14 Oct 2023
Viewed by 1073
Abstract
This work addresses J.A. Wheeler’s critical idea that all things physical are information-theoretic in origin. In this paper, we introduce a novel mathematical framework based on information geometry, using the Fisher information metric as a particular Riemannian metric defined on the parameter space of a smooth statistical manifold of normal probability distributions. Following this approach, we study the stationary states of the time-independent Schrödinger equation and discover that the information can be represented by, and distributed over, a set of quantum harmonic oscillators, one for each independent source of data, where the coordinate of each oscillator is a parameter of the smooth statistical manifold to be estimated. We observe that the estimator’s variance equals the energy levels of the quantum harmonic oscillator, showing that the estimator’s variance is quantized, with the minimum variance attained at the lowest energy level of the oscillator. Indeed, we demonstrate that quantum harmonic oscillators reach the Cramér–Rao lower bound on the estimator’s variance at the lowest energy level. In parallel, we find that the global probability density function of the collective mode of a set of quantum harmonic oscillators at the lowest energy level equals the posterior probability distribution calculated using Bayes’ theorem from the sources of information for all data values, taking as a prior the Riemannian volume of the informative metric. Interestingly, the converse also holds, as the prior is constant. Altogether, these results suggest that the sources of information can be broken into elementary pieces: quantum harmonic oscillators, with the square modulus of the collective mode at the lowest energy representing the most likely reality. This supports A. Zeilinger’s recent statement that the world is broken not into physical but into informational parts. Full article
23 pages, 2343 KiB  
Article
Entropy Optimization by Redesigning Organization in Hospital Operations
by Windi Winasti, Hubert Berden and Frits van Merode
Entropy 2023, 25(10), 1447; https://doi.org/10.3390/e25101447 - 14 Oct 2023
Viewed by 839
Abstract
A redesign of hospitals (i.e., partitioning departments and delegating decision authority) may be needed to deal with variable demand. Uncertain demands and throughput times often need short reaction times. In this study, we develop quantitative methods to guide a redesign through an information-processing approach. To demonstrate how the methods can be used in practice, we tested them by applying them to a large perinatology care system in the Netherlands. We used the following two methods: 1. portfolio optimization and 2. efficient coordination of workload and reallocation of nurses. Our case study of a large perinatology care system showed that several designs of clustered units minimized the demand uncertainty in the perinatology care system. For the coordination strategy, the information and decision uncertainty is minimized when the decision power is positioned at the operation level and with the help of a centralized information system. When the operation decision-making power is not supplemented with the centralized and system-wide information system, hospitals can better use the hierarchy model, where the manager holds decision-making power with a system-wide overview. We also found that the speed of decision-making in real-time depends on the level of information aggregation set up by the system. We conclude that combining the correlation perspectives and the entropy theory is a way of quantifying how organizations can be (re)designed. Full article
(This article belongs to the Section Multidisciplinary Applications)
18 pages, 9687 KiB  
Article
Research on Three-Phase Asynchronous Motor Fault Diagnosis Based on Multiscale Weibull Dispersion Entropy
by Fengyun Xie, Enguang Sun, Shengtong Zhou, Jiandong Shang, Yang Wang and Qiuyang Fan
Entropy 2023, 25(10), 1446; https://doi.org/10.3390/e25101446 - 13 Oct 2023
Viewed by 1010
Abstract
Three-phase asynchronous motors have a wide range of applications in the machinery industry, and fault diagnosis aids in the healthy operation of a motor. In order to improve the accuracy and generalization of fault diagnosis in three-phase asynchronous motors, this paper proposes a fault diagnosis method based on the combination of multiscale Weibull dispersion entropy (WB-MDE) and a particle swarm optimization–support vector machine (PSO-SVM). Firstly, the Weibull distribution (WB) is used to linearize and smooth the vibration signals to obtain sharper information about the motor state. Secondly, quantitative features of the regularity and orderliness of a given sequence are extracted using multiscale dispersion entropy (MDE). Then, a support vector machine (SVM) is used to construct a classifier, its parameters are optimized via the particle swarm optimization (PSO) algorithm, and the extracted feature vectors are fed into the optimized SVM model for classification and recognition. Finally, the accuracy and generalization of the proposed model are tested by adding Gaussian white noise with different signal-to-noise ratios to the raw data and by using the CHIST-ERA SOON public dataset. A three-phase asynchronous motor vibration experimental platform was built, and data for four motor states were collected with a piezoelectric acceleration sensor to verify the effectiveness of the proposed method. Extracting features with the proposed WB-MDE method and classifying them with the PSO-optimized SVM yields a classification accuracy of 100%. Finally, the superiority of the present method is verified through experiments as well as noise-immunity and generalization tests. Full article
(This article belongs to the Special Issue Entropy Applications in Condition Monitoring and Fault Diagnosis)
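For orientation, plain dispersion entropy with the usual normal-CDF mapping, plus the standard coarse-graining step that makes it "multiscale", can be sketched as below. The paper's WB-MDE additionally fits a Weibull distribution to the signal first; this illustrative sketch omits that step:

```python
import math
from collections import Counter

def dispersion_entropy(signal, n_classes=3, m=2):
    """Single-scale dispersion entropy: map samples to classes via the
    normal CDF, count length-m dispersion patterns, and return the
    normalized Shannon entropy of the pattern distribution."""
    mu = sum(signal) / len(signal)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in signal) / len(signal))
    if sigma == 0:
        return 0.0  # constant signal: no dispersion
    # Normal-CDF mapping, then linear mapping to integer classes 1..n_classes
    ncdf = [0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2)))) for x in signal]
    classes = [min(n_classes, max(1, round(n_classes * y + 0.5))) for y in ncdf]
    patterns = Counter(tuple(classes[i:i + m]) for i in range(len(classes) - m + 1))
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(n_classes ** m)  # normalized to [0, 1]

def coarse_grain(signal, scale):
    """Non-overlapping averaging: the 'multiscale' step applied before DE."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]
```

The multiscale feature vector is then `[dispersion_entropy(coarse_grain(x, s)) for s in scales]`, which is what a classifier such as the PSO-SVM above would consume.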