Topic Editors

Prof. Dr. Jaroslaw Krzywanski
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Yunfei Gao
Shanghai Engineering Research Center of Coal Gasification, East China University of Science and Technology, Shanghai 200237, China
Dr. Marcin Sosnowski
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Karolina Grabowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Dorian Skrobek
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Ghulam Moeen Uddin
Department of Mechanical Engineering, University of Engineering & Technology, Lahore, Punjab 54890, Pakistan
Dr. Anna Kulakowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Anna Zylka
Division of Advanced Computational Methods, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 42-200 Czestochowa, Poland
Dr. Bachil El Fil
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Artificial Intelligence and Computational Methods: Modeling, Simulations and Optimization of Complex Systems

Abstract submission deadline
closed (30 September 2022)
Manuscript submission deadline
20 October 2023
Viewed by
73729

Topic Information

Dear Colleagues,

Due to the increasing computational capability of current data processing systems, new opportunities are emerging in the modeling, simulation, and optimization of complex systems and devices. Methods that were previously too demanding or time-consuming to apply may now be considered when developing complete and sophisticated models in many areas of science and technology. The combination of computational methods and AI algorithms makes it possible to conduct multi-threaded analyses and solve advanced, interdisciplinary problems. This article collection aims to bring together research on advances in the modeling, simulation, and optimization of complex systems. Original research as well as review articles and short communications, with a particular focus on (but not limited to) artificial intelligence and other computational methods, are welcome.

Prof. Dr. Jaroslaw Krzywanski
Dr. Yunfei Gao
Dr. Marcin Sosnowski
Dr. Karolina Grabowska
Dr. Dorian Skrobek
Dr. Ghulam Moeen Uddin
Dr. Anna Kulakowska
Dr. Anna Zylka
Dr. Bachil El Fil
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • artificial neural networks
  • deep learning
  • genetic and evolutionary algorithms
  • artificial immune systems
  • fuzzy logic
  • expert systems
  • bio-inspired methods
  • CFD
  • modeling
  • simulation
  • optimization
  • complex systems

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Entropy | 2.7 | 4.7 | 1999 | 20.4 Days | CHF 2600
Algorithms | 2.3 | 3.7 | 2008 | 19.1 Days | CHF 1600
Computation | 2.2 | 3.3 | 2013 | 16.3 Days | CHF 1600
Machine Learning and Knowledge Extraction | 3.9 | 8.5 | 2019 | 19.2 Days | CHF 1400
Energies | 3.2 | 5.5 | 2008 | 15.7 Days | CHF 2600
Materials | 3.4 | 5.2 | 2008 | 14.7 Days | CHF 2600

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (62 papers)

Article
Automatic Genre Identification for Robust Enrichment of Massive Text Collections: Investigation of Classification Methods in the Era of Large Language Models
Mach. Learn. Knowl. Extr. 2023, 5(3), 1149-1175; https://doi.org/10.3390/make5030059 - 12 Sep 2023
Viewed by 356
Abstract
Massive text collections are the backbone of large language models, the main ingredient of the current significant progress in artificial intelligence. However, as these collections are mostly collected using automatic methods, researchers have few insights into what types of texts they consist of. Automatic genre identification is a text classification task that enriches texts with genre labels, such as promotional and legal, providing meaningful insights into the composition of these large text collections. In this paper, we evaluate machine learning approaches for the genre identification task based on their generalizability across different datasets to assess which model is the most suitable for the downstream task of enriching large web corpora with genre information. We train and test multiple fine-tuned BERT-like Transformer-based models and show that merging different genre-annotated datasets yields superior results. Moreover, we explore the zero-shot capabilities of large GPT Transformer models in this task and discuss the advantages and disadvantages of the zero-shot approach. We also publish the best-performing fine-tuned model that enables automatic genre annotation in multiple languages. In addition, to promote further research in this area, we plan to share, upon request, a new benchmark for automatic genre annotation, ensuring that it has not been exposed to the latest large language models. Full article
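
As a rough illustration of the fine-tuning setup described above, the sketch below trains a BERT-like multilingual classifier on genre-labeled texts with the Hugging Face Transformers library. The model name, the two-genre label set, and the tiny in-memory dataset are placeholder assumptions, not the paper's actual configuration.

```python
# Hedged sketch: fine-tuning a multilingual Transformer for genre labels.
# Model name, labels, and the tiny in-memory dataset are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

GENRES = ["promotional", "legal"]  # example labels; the paper uses a richer set

train_ds = Dataset.from_dict({
    "text": ["Huge discounts this weekend only!",
             "The parties hereby agree as follows."],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(GENRES))

train_ds = train_ds.map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="genre-clf", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
    tokenizer=tokenizer,  # enables padding via the default data collator
)
trainer.train()
```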

Article
Artificial Neural Networks for Predicting the Diameter of Electrospun Nanofibers Synthesized from Solutions/Emulsions of Biopolymers and Oils
Materials 2023, 16(16), 5720; https://doi.org/10.3390/ma16165720 - 21 Aug 2023
Viewed by 414
Abstract
In the present work, different configurations of artificial neural networks (ANNs) were analyzed in order to predict the experimental diameter of nanofibers produced by means of the electrospinning process and employing polyvinyl alcohol (PVA), PVA/chitosan (CS) and PVA/aloe vera (Av) solutions. In addition, gelatin type A (GT)/alpha-tocopherol (α-TOC), PVA/olive oil (OO), PVA/orange essential oil (OEO), and PVA/anise oil (AO) emulsions were used. The experimental diameters of the nanofibers electrospun from the different tested systems were obtained using scanning electron microscopy (SEM) and ranged from 93.52 nm to 352.1 nm. Of the three studied ANNs, the one that displayed the best prediction results was the one with three hidden layers and the flow rate, voltage, viscosity, and conductivity input variables. The calculation error between the experimental and calculated diameters was 3.79%. Additionally, the correlation coefficient (R²) was identified as a function of the ANN configuration, obtaining values of 0.96, 0.98, and 0.98 for one, two, and three hidden layer(s), respectively. It was found that an ANN configuration having more than three hidden layers did not improve the prediction of the experimental diameter of synthesized nanofibers. Full article
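
For readers who want a concrete starting point, here is a minimal sketch of such a network in scikit-learn: a three-hidden-layer MLP mapping the four process variables named above to fiber diameter. The layer widths and the synthetic data are assumptions for illustration, not the authors' architecture or measurements.

```python
# Sketch of a three-hidden-layer ANN mapping electrospinning parameters to
# fiber diameter, in the spirit of the paper; the data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: flow rate (mL/h), voltage (kV), viscosity (mPa*s), conductivity (mS/cm)
X = rng.uniform([0.1, 10, 50, 0.1], [1.5, 30, 800, 5.0], size=(200, 4))
y = 90 + 0.3 * X[:, 2] - 2.0 * X[:, 1] + rng.normal(0, 10, 200)  # toy diameters (nm)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.8, 20, 300, 1.2]]))  # predicted diameter (nm)
```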

Review
Physical and Mathematical Models of Micro-Explosions: Achievements and Directions of Improvement
Energies 2023, 16(16), 6034; https://doi.org/10.3390/en16166034 - 17 Aug 2023
Viewed by 317
Abstract
The environmental, economic, and energy problems of the modern world motivate the development of alternative fuel technologies. Multifuel technology can help reduce the carbon footprint and waste from the raw materials sector as well as slow down the depletion of energy resources. However, there are limitations to the active use of multifuel mixtures in real power plants and engines because they are difficult to spray in combustion chambers and require secondary atomization. Droplet micro-explosion seems the most promising secondary atomization technology in terms of its integral characteristics. This review paper outlines the most interesting approaches to modeling micro-explosions using in-house computer codes and commercial software packages. A physical model of a droplet micro-explosion based on experimental data was analyzed to highlight the schemes and mathematical expressions describing the critical conditions of parent droplet atomization. Approaches are presented that can predict the number, sizes, velocities, and trajectories of emerging child droplets. We also list the empirical data necessary for developing advanced fragmentation models. Finally, we outline the main growth areas for micro-explosion models catering for the needs of spray technology. Full article

Article
Identifying the Regions of a Space with the Self-Parameterized Recursively Assessed Decomposition Algorithm (SPRADA)
Mach. Learn. Knowl. Extr. 2023, 5(3), 979-1009; https://doi.org/10.3390/make5030051 - 04 Aug 2023
Viewed by 534
Abstract
This paper introduces a non-parametric methodology based on classical unsupervised clustering techniques to automatically identify the main regions of a space, without requiring the target number of clusters, so as to identify the major regular states of unknown industrial systems. Indeed, useful knowledge on real industrial processes entails the identification of their regular states and their historically encountered anomalies. Since both should form compact and salient groups of data, unsupervised clustering generally performs this task fairly accurately; however, it often requires the number of clusters upstream, knowledge which is rarely available. As such, the proposed algorithm performs an initial partitioning of the space, estimates the integrity of the resulting clusters, and splits them repeatedly until every cluster reaches an acceptable integrity; finally, a merging step based on the clusters’ empirical distributions refines the partitioning. Applied to real industrial data obtained in the scope of a European project, this methodology proved able to automatically identify the main regular states of the system. Results show the robustness of the proposed approach in the fully automatic and non-parametric identification of the main regions of a space, knowledge which is useful for industrial anomaly detection and behavioral modeling. Full article

Article
Optimization of Circulating Fluidized Bed Boiler Combustion Key Control Parameters Based on Machine Learning
Energies 2023, 16(15), 5674; https://doi.org/10.3390/en16155674 - 28 Jul 2023
Viewed by 339
Abstract
While a coal-fired circulating fluidized bed unit participates in the peak regulation process of the power grid, the thermal automatic control system assists the operator with adjustment modes that focus on pollutant control and neglect economy, so the unit’s operating performance retains considerable untapped potential. The high-dimensional and coupling-related data characteristics of circulating fluidized bed boilers impose more refined and demanding requirements on combustion optimization analysis and open-loop guidance operation. Therefore, this paper proposes a combustion optimization method that incorporates neighborhood rough set machine learning. This method first reduces the control parameters affecting multi-objective combustion optimization with the neighborhood rough set algorithm, which fully considers the correlation of each variable combination, and then establishes a multi-objective combustion optimization prediction model combined with the online calculation of boiler thermal efficiency. Finally, the NSGA-II algorithm optimizes the setting values of the control parameters of the boiler combustion system. The results show that this method reduces the number of control commands involved in combustion optimization adjustment from 26 to 11. At the same time, relative to the optimization results obtained using traditional combustion optimization methods under high, medium, and medium-low load conditions, the boiler thermal efficiency increased by 0.07%, decreased by 0.02%, and increased by 0.55%, respectively, and the nitrogen oxide emission concentration decreased by 5.02 mg/Nm³, 7.77 mg/Nm³, and 7.03 mg/Nm³, respectively. The implementation of this method helps to better balance the economy and pollutant discharge of the boiler combustion system under variable working conditions, guides the operators to adjust the combustion more accurately, and effectively reduces the ineffective energy consumption in the adjustment process. The proposal and application of this method lay the foundation for the construction of smart power plants. Full article
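
The final optimization step can be reproduced in spirit with an off-the-shelf NSGA-II implementation. The sketch below uses pymoo with toy surrogate objectives standing in for the trained efficiency and NOx prediction models; the 11 normalized control variables are the only detail taken from the abstract.

```python
# Toy NSGA-II setup for the efficiency-vs-NOx trade-off described above.
# The surrogate objective functions and [0, 1] bounds are invented
# placeholders, not the paper's trained prediction models.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class CombustionProblem(ElementwiseProblem):
    def __init__(self):
        # 11 reduced control commands, scaled to [0, 1]
        super().__init__(n_var=11, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        efficiency = -(90 + 2 * np.sin(x).sum())   # maximize -> negate
        nox = 150 + 30 * np.cos(x).sum()           # minimize emissions
        out["F"] = [efficiency, nox]

res = minimize(CombustionProblem(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
print(res.F[:5])  # sample of the Pareto front
```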

Article
Attention-Focused Machine Learning Method to Provide the Stochastic Load Forecasts Needed by Electric Utilities for the Evolving Electrical Distribution System
Energies 2023, 16(15), 5661; https://doi.org/10.3390/en16155661 - 27 Jul 2023
Viewed by 452
Abstract
Greater variation in electrical load should be expected in the future due to the increasing penetration of electric vehicles, photovoltaics, storage, and other technologies. The adoption of these technologies will vary by area and time, and if not identified early and managed by electric utilities, these new customer needs could result in power quality, reliability, and protection issues. Furthermore, comprehensively studying the uncertainty and variation in the load on circuit elements over periods of several months has the potential to increase the efficient use of traditional resources, non-wires alternatives, and microgrids to better serve customers. To increase the understanding of electrical load, the authors propose a multistep, attention-focused, and efficient machine learning process to provide probabilistic forecasts of distribution transformer load for several months into the future. The method uses the solar irradiance, temperature, dew point, time of day, and other features to achieve up to an 86% coefficient of determination (R2). Full article
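
Probabilistic forecasts of this kind can be approximated in principle with quantile regression. The sketch below fits gradient-boosted quantile models on synthetic hour/irradiance/temperature features that merely mimic the feature set named above; the authors' attention-focused method is not reproduced.

```python
# Sketch of a probabilistic (quantile) forecast of transformer load with
# gradient boosting; the features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
hour = rng.integers(0, 24, n)
irradiance = np.clip(np.sin((hour - 6) / 12 * np.pi), 0, None) * rng.uniform(0.5, 1.0, n)
temp = 15 + 10 * np.sin(hour / 24 * 2 * np.pi) + rng.normal(0, 2, n)
X = np.column_stack([hour, irradiance, temp])
load = 50 + 20 * (hour > 17) - 30 * irradiance + 0.5 * temp + rng.normal(0, 5, n)

quantiles = {}
for q in (0.1, 0.5, 0.9):
    m = GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, load)
    quantiles[q] = m.predict([[18, 0.0, 20.0]])[0]
print(quantiles)  # lower bound, median, upper bound of predicted load (kW)
```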

Article
Optimal Data-Driven Modelling of a Microbial Fuel Cell
Energies 2023, 16(12), 4740; https://doi.org/10.3390/en16124740 - 15 Jun 2023
Cited by 1 | Viewed by 556
Abstract
Microbial fuel cells (MFCs) are biocells that use microorganisms as biocatalysts to break down organic matter and convert chemical energy into electrical energy. Presently, the application of MFCs as alternative energy sources is limited by their low power attribute. Optimization of MFCs is very important to harness optimum energy. In this study, we develop optimal data-driven models for a typical MFC synthesized from polymethylmethacrylate and two graphite plates using machine learning algorithms including support vector regression (SVR), artificial neural networks (ANNs), Gaussian process regression (GPR), and ensemble learners. Power density and output voltage were modeled from two different datasets; the first dataset has current density and anolyte concentration as features, while the second dataset considers current density and chemical oxygen demand as features. Hyperparameter optimization was carried out on each of the considered machine learning-based models using Bayesian optimization, grid search, and random search to arrive at the best possible models for the MFC. A model was derived for power density and output voltage having 99% accuracy on testing set evaluations. Full article
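
As a minimal illustration of the hyperparameter-optimization step, the following sketch grid-searches an SVR on synthetic (current density, anolyte concentration) data. The parameter grid and the toy power-density function are assumptions; the paper additionally covers ANNs, GPR, ensemble learners, Bayesian optimization, and random search.

```python
# Hedged sketch: grid search over SVR hyperparameters for an MFC
# power-density model; the toy data replaces the paper's measurements.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform([0.1, 0.5], [5.0, 3.0], size=(120, 2))  # current density, anolyte conc.
y = X[:, 0] * (2.5 - 0.4 * X[:, 0]) * X[:, 1] + rng.normal(0, 0.1, 120)  # toy power

pipe = Pipeline([("scale", StandardScaler()), ("svr", SVR())])
grid = GridSearchCV(pipe, {"svr__C": [1, 10, 100],
                           "svr__gamma": ["scale", 0.1, 1.0],
                           "svr__epsilon": [0.01, 0.1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```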

Systematic Review
Systematic Review of Recommendation Systems for Course Selection
Mach. Learn. Knowl. Extr. 2023, 5(2), 560-596; https://doi.org/10.3390/make5020033 - 06 Jun 2023
Viewed by 1688
Abstract
Course recommender systems play an increasingly pivotal role in the educational landscape, driving personalization and informed decision-making for students. However, these systems face significant challenges, including managing a large and dynamic decision space and addressing the cold start problem for new students. This article endeavors to provide a comprehensive review and background to fully understand recent research on course recommender systems and their impact on learning. We present a detailed summary of empirical data supporting the use of these systems in educational strategic planning. We examined case studies conducted over the previous six years (2017–2022), with a focus on 35 key studies selected from 1938 academic papers found using the CADIMA tool. This systematic literature review (SLR) assesses various recommender system methodologies used to suggest course selection tracks, aiming to determine the most effective evidence-based approach. Full article

Article
Spare Parts Demand Forecasting Method Based on Intermittent Feature Adaptation
Entropy 2023, 25(5), 764; https://doi.org/10.3390/e25050764 - 07 May 2023
Cited by 1 | Viewed by 1022
Abstract
The demand for complex equipment aftermarket parts is mostly sporadic, showing typical intermittent characteristics as a whole, resulting in the evolution law of a single demand series having insufficient information, which restricts the prediction effect of existing methods. To solve this problem, this paper proposes a prediction method of intermittent feature adaptation from the perspective of transfer learning. Firstly, to extract the intermittent features of the demand series, an intermittent time series domain partitioning algorithm is proposed by mining the demand occurrence time and demand interval information in the series, then constructing the metrics, and using a hierarchical clustering algorithm to divide all the series into different sub-source domains. Secondly, the intermittent and temporal characteristics of the sequence are combined to construct a weight vector, and the learning of common information between domains is accomplished by weighting the distance of the output features of each cycle between domains. Finally, experiments are conducted on the actual after-sales datasets of two complex equipment manufacturing enterprises. Compared with various prediction methods, the method in this paper can effectively predict future demand trends, and the prediction’s stability and accuracy are significantly improved. Full article
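
The intermittent features that such partitioning relies on can be pictured with two standard descriptors from the intermittent-demand literature: the average demand interval (ADI) and the squared coefficient of variation (CV²) of nonzero demand sizes. The paper constructs its own metrics, so the snippet below is only an analogy.

```python
# Two classic intermittency descriptors for a spare-parts demand series.
# These are standard metrics of the kind the paper's domain-partitioning
# step builds on; its exact metrics differ.
import numpy as np

def intermittency_features(demand: np.ndarray) -> tuple[float, float]:
    nonzero = np.flatnonzero(demand)
    if len(nonzero) < 2:
        return float("inf"), 0.0
    adi = np.mean(np.diff(nonzero))           # mean gap between demand events
    sizes = demand[nonzero]
    cv2 = (sizes.std() / sizes.mean()) ** 2   # variability of demand sizes
    return float(adi), float(cv2)

series = np.array([0, 0, 3, 0, 0, 0, 7, 0, 2, 0, 0, 5])
print(intermittency_features(series))  # large ADI -> clearly intermittent
```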

Article
A Reinforcement Learning Approach for Scheduling Problems with Improved Generalization through Order Swapping
Mach. Learn. Knowl. Extr. 2023, 5(2), 418-430; https://doi.org/10.3390/make5020025 - 29 Apr 2023
Cited by 1 | Viewed by 1218
Abstract
The scheduling of production resources (such as associating jobs to machines) plays a vital role for the manufacturing industry, not only for saving energy but also for increasing overall efficiency. Among the different job scheduling problems, the Job Shop Scheduling Problem (JSSP) is addressed in this work. JSSP falls into the category of NP-hard Combinatorial Optimization Problems (COPs), in which solving the problem through exhaustive search becomes unfeasible. Simple heuristics such as First-In, First-Out or Largest Processing Time First and metaheuristics such as taboo search are often adopted to solve the problem by truncating the search space. These methods become impractical for large problem sizes, as the solutions they find are either far from the optimum or too time-consuming to obtain. In recent years, research towards using Deep Reinforcement Learning (DRL) to solve COPs has gained interest and has shown promising results in terms of solution quality and computational efficiency. In this work, we provide a novel approach to solve the JSSP, examining the objectives of generalization and solution effectiveness using DRL. In particular, we employ the Proximal Policy Optimization (PPO) algorithm, which adopts the policy-gradient paradigm and is found to perform well in the constrained dispatching of jobs. We incorporated a new method called the Order Swapping Mechanism (OSM) in the environment to achieve better generalized learning of the problem. The performance of the presented approach is analyzed in depth using a set of available benchmark instances and comparing our results with the work of other groups. Full article

Article
An Optimal Scheduling Method for an Integrated Energy System Based on an Improved k-Means Clustering Algorithm
Energies 2023, 16(9), 3713; https://doi.org/10.3390/en16093713 - 26 Apr 2023
Viewed by 635
Abstract
This study proposes an optimal scheduling method for complex integrated energy systems. The proposed method employs a heuristic algorithm to maximize its energy, economy, and environment indices and optimize the system operation plan. It uses the k-means combined with box plots (Imk-means) to improve the convergence speed of the heuristic algorithm by forming its initial conditions. Thus, the optimization scheduling speed is enhanced. First of all, considering the system source and load factors, the Imk-means is presented to find the typical and extreme days in a historical optimization dataset. The output results for these typical and extreme days can represent common and abnormal optimization results, respectively. Thus, based on the representative historical data, a traditional heuristic algorithm with an initial solution set, such as the genetic algorithm, can be accelerated greatly. Secondly, the initial populations of the genetic algorithm are dispersed at the historical outputs of the typical and extreme days, and many random populations are supplemented simultaneously. Finally, the improved genetic algorithm performs the solution process faster to find optimal results and can possibly prevent the results from falling into local optima. A case study was conducted to verify the effectiveness of the proposed method. The results show that the proposed method can decrease the running time by up to 89.29% at the most, and 72.68% on average, compared with the traditional genetic algorithm. Meanwhile, the proposed method has a slightly increased optimization index, indicating no loss of optimization accuracy during acceleration. It can also indicate that the proposed method does not fall into local optima, as it has fewer iterations. Full article
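
A minimal sketch of the typical/extreme-day idea, assuming daily profiles as rows: k-means centers serve as typical days, and a box-plot (IQR) rule on distances to the nearest center flags extreme days. The cluster count and toy data are assumptions, not the paper's settings.

```python
# Sketch of the Imk-means idea: typical days from k-means centers,
# extreme days via a box-plot (IQR) outlier rule on center distances.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
days = rng.normal(0, 1, size=(365, 24)).cumsum(axis=1)  # toy daily profiles

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(days)
dist = np.min(km.transform(days), axis=1)               # distance to nearest center

q1, q3 = np.percentile(dist, [25, 75])
extreme = np.flatnonzero(dist > q3 + 1.5 * (q3 - q1))   # box-plot outlier rule
typical = km.cluster_centers_

print(len(extreme), "extreme days;", typical.shape[0], "typical profiles")
```

These typical and extreme profiles would then seed the genetic algorithm's initial population, as the abstract describes.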

Article
Reviving the Dynamics of Attacked Reservoir Computers
Entropy 2023, 25(3), 515; https://doi.org/10.3390/e25030515 - 16 Mar 2023
Cited by 2 | Viewed by 843
Abstract
Physically implemented neural networks are subject to external perturbations and internal variations. Existing works focus on adversarial attacks but seldom consider attacks on the network structure and the corresponding recovery methods. Inspired by the biological neural compensation mechanism and the neuromodulation technique in clinical practice, we propose a novel framework for reviving attacked reservoir computers, consisting of several strategies directed at different types of attacks on the structure, adjusting only a minor fraction of edges in the reservoir. Numerical experiments demonstrate the efficacy and broad applicability of the framework and reveal inspiring insights into the mechanisms. This work provides a vehicle to improve the robustness of reservoir computers and can be generalized to broader types of neural networks. Full article
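
To make the attacked object concrete, here is a bare-bones echo state network, a common form of reservoir computer; the attack types and the edge-adjusting recovery strategies studied in the paper are not reproduced.

```python
# Minimal echo state network: sparse random reservoir, tanh dynamics,
# ridge-regression readout trained to predict the next input value.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.05)  # sparse reservoir
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))             # spectral radius 0.9

u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]          # input signal
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

target = np.roll(u[:, 0], -1)[:-1]                          # next-step target
S = states[:-1]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ target)
print("train MSE:", np.mean((S @ W_out - target) ** 2))
```

An attack on structure would perturb entries of `W`; the paper's recovery strategies adjust a small fraction of those edges to restore the dynamics.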

Article
Implicit Solutions of the Electrical Impedance Tomography Inverse Problem in the Continuous Domain with Deep Neural Networks
Entropy 2023, 25(3), 493; https://doi.org/10.3390/e25030493 - 13 Mar 2023
Viewed by 889
Abstract
Electrical impedance tomography (EIT) is a non-invasive imaging modality used for estimating the conductivity of an object Ω from boundary electrode measurements. In recent years, researchers have achieved substantial progress in analytical and numerical methods for the EIT inverse problem. Despite this success, numerical instability is still a major hurdle due to many factors, including the discretization error of the problem. Furthermore, most algorithms with good performance are relatively time-consuming and do not allow real-time applications. In our approach, the goal is to separate the unknown conductivity into two regions, namely the region of homogeneous background conductivity and the region of non-homogeneous conductivity. Therefore, we pose and solve the problem of shape reconstruction using machine learning. We propose a novel, simple, yet intriguing neural network architecture capable of solving the EIT inverse problem. It addresses previous difficulties, including instability, and is easily adaptable to other ill-posed coefficient inverse problems. That is, the proposed model estimates the probability that a point’s conductivity belongs to the background region or to the non-homogeneous region on the continuous space ℝᵈ ⊇ Ω with d ∈ {2, 3}. The proposed model does not make assumptions about the forward model and allows for solving the inverse problem in real time. The proposed machine learning approach for shape reconstruction is also used to improve gradient-based methods for estimating the unknown conductivity. In this paper, we propose a piece-wise constant reconstruction method that is novel in the inverse problem setting but inspired by recent approaches from the 3D vision community. We also extend this method into a novel constrained reconstruction method. We present extensive numerical experiments to show the performance of the architecture and compare the proposed method with previous analytic algorithms, mainly the monotonicity-based shape reconstruction algorithm and the iteratively regularized Gauss–Newton method. Full article

Article
Feature Selection Using New Version of V-Shaped Transfer Function for Salp Swarm Algorithm in Sentiment Analysis
Computation 2023, 11(3), 56; https://doi.org/10.3390/computation11030056 - 08 Mar 2023
Cited by 5 | Viewed by 1193
Abstract
(1) Background: Feature selection is the biggest challenge in feature-rich sentiment analysis to select the best (relevant) feature set, offer information about the relationships between features (informative), and be noise-free from high-dimensional datasets to improve classifier performance. This study aims to propose a binary version of a metaheuristic optimization algorithm based on Swarm Intelligence, namely the Salp Swarm Algorithm (SSA), as feature selection in sentiment analysis. (2) Methods: Significant feature subsets were selected using the SSA. Transfer functions with various types of the form S-TF, V-TF, X-TF, U-TF, Z-TF, and the new type V-TF with a simpler mathematical formula are used as a binary version approach to enable search agents to move in the search space. The stages of the study include data pre-processing, feature selection using SSA-TF and other conventional feature selection methods, modelling using K-Nearest Neighbor (KNN), Support Vector Machine, and Naïve Bayes, and model evaluation. (3) Results: The results showed an increase of 31.55% to the best accuracy of 80.95% for the KNN model using SSA-based New V-TF. (4) Conclusions: We have found that SSA-New V3-TF is a feature selection method with the highest accuracy and less runtime compared to other algorithms in sentiment analysis. Full article
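
The role of a V-shaped transfer function is easy to show in isolation: it maps a salp's continuous step to a bit-flip probability over the feature mask. The sketch uses the classic V(x) = |tanh(x)|; the paper's new V-TF has its own, simpler formula that is not reproduced here.

```python
# Sketch of V-shaped-transfer-function binarization for feature selection.
# V(x) = |tanh(x)| is one common V-TF; the paper proposes its own variant.
import numpy as np

rng = np.random.default_rng(0)

def v_tf(x):
    return np.abs(np.tanh(x))          # maps a real step size into [0, 1)

def binarize(position, current_bits):
    # flip a feature's bit with probability V(step); keep it otherwise
    flip = rng.random(position.shape) < v_tf(position)
    return np.where(flip, 1 - current_bits, current_bits)

position = rng.normal(0, 1, 10)        # continuous SSA positions, 10 features
bits = rng.integers(0, 2, 10)          # current feature-selection mask
print(binarize(position, bits))        # next mask: 1 = feature kept
```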

Article
Remora Optimization Algorithm with Enhanced Randomness for Large-Scale Measurement Field Deployment Technology
Entropy 2023, 25(3), 450; https://doi.org/10.3390/e25030450 - 04 Mar 2023
Viewed by 722
Abstract
In the large-scale measurement field, deployment planning usually uses the Monte Carlo method for simulation analysis, which has high algorithmic complexity. At the same time, traditional station planning is inefficient and unable to calculate overall accessibility due to the occlusion of tooling. To solve this problem, in this study, we first introduced a Poisson-like randomness strategy and an enhanced randomness strategy to improve the remora optimization algorithm (ROA), yielding the PROA. Its convergence speed and robustness were verified in different dimensions using the CEC benchmark functions: the PROA converged faster than the ROA on 67.5–74% of the benchmark results and was more robust on 66.67–75%. Second, a deployment model was established for the large-scale measurement field to obtain the maximum visible area of the target to be measured. Finally, the PROA was used as the optimizer to solve the optimal deployment planning, and the performance of the PROA was verified by simulation analysis. In the case of six stations, the maximum visible area of the PROA reaches 83.02%, which is 18.07% higher than that of the ROA. Compared with the traditional method, this model shortens the deployment time and calculates the overall accessibility, which is of practical significance for improving assembly efficiency in large-size measurement field environments. Full article

Review
Introduction of Materials Genome Technology and Its Applications in the Field of Biomedical Materials
Materials 2023, 16(5), 1906; https://doi.org/10.3390/ma16051906 - 25 Feb 2023
Cited by 1 | Viewed by 941
Abstract
Traditional research and development (R&D) on biomedical materials depends heavily on the trial and error process, thereby leading to huge economic and time burden. Most recently, materials genome technology (MGT) has been recognized as an effective approach to addressing this problem. In this paper, the basic concepts involved in the MGT are introduced, and the applications of MGT in the R&D of metallic, inorganic non-metallic, polymeric, and composite biomedical materials are summarized; in view of the existing limitations of MGT for R&D of biomedical materials, potential strategies are proposed on the establishment and management of material databases, the upgrading of high-throughput experimental technology, the construction of data mining prediction platforms, and the training of relevant materials talents. In the end, future trend of MGT for R&D of biomedical materials is proposed. Full article

Article
Parametric Analysis of Thick FGM Plates Based on 3D Thermo-Elasticity Theory: A Proper Generalized Decomposition Approach
Materials 2023, 16(4), 1753; https://doi.org/10.3390/ma16041753 - 20 Feb 2023
Cited by 2 | Viewed by 944
Abstract
In the present work, the general and well-known model reduction technique, PGD (Proper Generalized Decomposition), is used for parametric analysis of thermo-elasticity of FGMs (Functionally Graded Materials). The FGMs have important applications in space technologies, especially when a part undergoes an extreme thermal environment. In the present work, material gradation is considered in one, two and three directions, and 3D heat transfer and theory of elasticity equations are solved to have an accurate temperature field and be able to consider all shear deformations. A parametric analysis of FGM materials is especially useful in material design and optimization. In the PGD technique, the field variables are separated to a set of univariate functions, and the high-dimensional governing equations reduce to a set of one-dimensional problems. Due to the curse of dimensionality, solving a high-dimensional parametric problem is considerably more computationally intensive than solving a set of one-dimensional problems. Therefore, the PGD makes it possible to handle high-dimensional problems efficiently. In the present work, some sample examples in 4D and 5D computational spaces are solved, and the results are presented. Full article
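
The separated representation at the heart of PGD can be written generically as follows; this is a standard form from the PGD literature, with extra model parameters treated as additional coordinates (the paper's specific 4D and 5D cases follow this pattern):

```latex
% Generic PGD separated representation: a field over space coordinates and
% extra parameters p_1, ..., p_k is approximated by a sum of products of
% one-dimensional functions, so a D-dimensional problem reduces to a set of
% one-dimensional problems.
u(x, y, z, p_1, \dots, p_k) \approx
  \sum_{i=1}^{N} F_i(x)\, G_i(y)\, H_i(z) \prod_{j=1}^{k} P_i^{\,j}(p_j)
```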

Article
Quick Estimate of Information Decomposition for Text Style Transfer
Entropy 2023, 25(2), 322; https://doi.org/10.3390/e25020322 - 10 Feb 2023
Cited by 1 | Viewed by 950
Abstract
A growing number of papers on style transfer for texts rely on information decomposition. The performance of the resulting systems is usually assessed empirically in terms of the output quality or requires laborious experiments. This paper suggests a straightforward information theoretical framework to assess the quality of information decomposition for latent representations in the context of style transfer. Experimenting with several state-of-the-art models, we demonstrate that such estimates could be used as a fast and straightforward health check for the models instead of more laborious empirical experiments. Full article

Review
A Survey on the Application of Machine Learning in Turbulent Flow Simulations
Energies 2023, 16(4), 1755; https://doi.org/10.3390/en16041755 - 09 Feb 2023
Viewed by 1294
Abstract
As early as the end of the 19th century, shortly after mathematical rules describing fluid flow—such as the Navier–Stokes equations—were developed, the idea of using them for flow simulations emerged. However, it was soon discovered that the computational requirements of problems such as atmospheric phenomena and engineering calculations made hand computation impractical. The dawn of the computer age marked the beginning of computational fluid mechanics, and its subsequent popularization made computational fluid dynamics one of the common tools used in science and engineering. From the beginning, however, the method has faced a trade-off between accuracy and computational requirements. The purpose of this work is to examine how the results of recent advances in machine learning can be applied to further develop the seemingly plateaued method. The paper reviews examples of applying machine learning to improve various types of computational flow simulations, both by increasing the accuracy of the results obtained and by reducing calculation times, and discusses the effectiveness of the presented methods, their chances of acceptance by industry (including possible obstacles), and potential directions for their development. One can observe an evolution of solutions: from the simple determination of closure coefficients, through more advanced attempts to use machine learning as an alternative to the classical methods of solving the differential equations on which computational fluid dynamics is based, up to turbulence models built solely from neural networks. A continuation of these three trends may lead to at least a partial replacement of Navier–Stokes-based computational fluid dynamics by machine-learning-based solutions. Full article

Article
Predicting Terrestrial Heat Flow in North China Using Multiple Geological and Geophysical Datasets Based on Machine Learning Method
Energies 2023, 16(4), 1620; https://doi.org/10.3390/en16041620 - 06 Feb 2023
Viewed by 724
Abstract
Geothermal heat flow is an essential parameter for the exploration of geothermal energy, but the cost is often prohibitive if dense heat flow measurements are arranged in the study area. Nevertheless, the limited and sparse heat flow observation points must be supplemented to study the regional geothermal setting, and this research provides a new, reliable map of terrestrial heat flow for the subsequent development of geothermal resources. The Gradient Boosted Regression Tree (GBRT) prediction model used in this paper is devoted to solving the problem of an insufficient number of heat flow observations in North China. It incorporates the geological and geophysical information in the region by training the sample data on 12 kinds of geological and geophysical features. Finally, a robust GBRT prediction model was obtained. The performance of the GBRT method was evaluated by comparing it with the kriging interpolation, the minimum curvature interpolation, and a 3D interpolation algorithm through prediction performance analysis. Based on the GBRT prediction model, a new heat flow map with a resolution of 0.25° × 0.25° was proposed, which depicts the terrestrial heat flow distribution in the study area in a more detailed and reasonable way than the interpolation results. The high heat flow values were mostly concentrated at the northeastern boundary of the Tibet Plateau, with a few scattered and small-scale high heat flow areas in the southeastern part of the North China Craton (NCC) adjacent to the Pacific Ocean. The low heat flow values were mainly resolved in the northern part of the Trans-North China Orogenic belt (TNCO) and the southernmost part of the NCC. By comparing the predicted heat flow map with the plate tectonics, the olivine-Mg#, and the hot spring distribution in North China, we found that the GBRT can obtain a reliable result under the constraint of geological and geophysical information in regions with scarce and unevenly distributed heat flow observations. Full article

Article
Mobile Application for Tomato Plant Leaf Disease Detection Using a Dense Convolutional Network Architecture
Computation 2023, 11(2), 20; https://doi.org/10.3390/computation11020020 - 31 Jan 2023
Cited by 2 | Viewed by 2096
Abstract
In Indonesia, tomato is one of the horticultural products with the highest economic value. To maintain enhanced tomato plant production, it is necessary to monitor the growth of tomato plants, particularly the leaves. The quality and quantity of tomato plant production can be preserved with the aid of computer technology. It can identify diseases in tomato plant leaves. An algorithm for deep learning with a DenseNet architecture was implemented in this study. Multiple hyperparameter tests were conducted to determine the optimal model. Using two hidden layers, a DenseNet trainable layer on dense block 5, and a dropout rate of 0.4, the optimal model was constructed. The 10-fold cross-validation evaluation of the model yielded an accuracy value of 95.7 percent and an F1-score of 95.4 percent. To recognize tomato plant leaves, the model with the best assessment results was implemented in a mobile application. Full article
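
A minimal sketch of this kind of classifier in Keras, assuming a DenseNet121 backbone, two dense hidden layers, and the 0.4 dropout rate quoted above; the input size, layer widths, and class count are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a DenseNet-based leaf-disease classifier. The backbone is
# frozen here for brevity; the paper instead leaves dense block 5 trainable.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 disease classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```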

Article
Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace
Materials 2023, 16(3), 1164; https://doi.org/10.3390/ma16031164 - 30 Jan 2023
Viewed by 916
Abstract
With its special porous structure and long-lasting carbon sequestration characteristics, biochar has shown potential for improving soil fertility, reducing carbon emissions, and increasing soil carbon sequestration. However, biochar technology has not been applied on a large scale due to complex equipment structures, long raw-material transportation distances, and high costs. To overcome these issues, the brazier-type gasification and carbonization furnace is designed to carry out dry distillation and anaerobic carbonization and to achieve a high carbonization rate under high-temperature conditions. To improve operation and maintenance efficiency, we formulate the operation of the brazier-type gasification and carbonization furnace as a dynamic multi-objective optimization problem (DMOP). Firstly, we analyze the dynamic factors in the working process of the brazier-type gasification and carbonization furnace, such as the equipment capacity, the operating conditions, and the biomass treated by the furnace. Afterward, we select the biochar yield and carbon monoxide emission as the dynamic objectives and model the DMOP. Finally, we apply three dynamic multi-objective evolutionary algorithms to solve the optimization problem so as to verify the effectiveness of the dynamic optimization approach for the gasification and carbonization furnace. Full article

Article
Optimizing Automated Trading Systems with Deep Reinforcement Learning
Algorithms 2023, 16(1), 23; https://doi.org/10.3390/a16010023 - 01 Jan 2023
Cited by 2 | Viewed by 2376
Abstract
In this paper, we propose a novel approach to optimizing parameters for strategies in automated trading systems. Based on the framework of reinforcement learning, our work includes the development of a learning environment, state representation, reward function, and learning algorithm for the cryptocurrency market. Considering two simple objective functions, cumulative return and Sharpe ratio, the results showed that both the Deep Reinforcement Learning approach with a Double Deep Q-Network setting and the Bayesian Optimization approach can provide positive average returns. Among the settings studied, the Double Deep Q-Network setting with the Sharpe ratio as the reward function is the best Q-learning trading system. With a daily trading goal, the system outperforms the Bayesian Optimization approach in terms of cumulative return, volatility, and execution time. This helps traders to make quick and efficient decisions with the latest information from the market. In long-term trading, Bayesian Optimization is the parameter optimization method that brings higher profits. Deep Reinforcement Learning provides solutions to the high-dimensional problem of Bayesian Optimization in upcoming studies, such as optimizing portfolios with multiple assets and diverse trading strategies. Full article

Article
Improved Anomaly Detection by Using the Attention-Based Isolation Forest
Algorithms 2023, 16(1), 19; https://doi.org/10.3390/a16010019 - 28 Dec 2022
Viewed by 1804
Abstract
A new modification of the isolation forest called the attention-based isolation forest (ABIForest) is proposed for solving the anomaly detection problem. It incorporates an attention mechanism in the form of Nadaraya–Watson regression into the isolation forest to improve the solution of the anomaly detection problem. The main idea underlying the modification is the assignment of attention weights to each path of trees with learnable parameters depending on the instances and trees themselves. Huber’s contamination model is proposed to be used to define the attention weights and their parameters. As a result, the attention weights are linearly dependent on learnable attention parameters that are trained by solving a standard linear or quadratic optimization problem. ABIForest can be viewed as the first modification of the isolation forest to incorporate an attention mechanism in a simple way without applying gradient-based algorithms. Numerical experiments with synthetic and real datasets illustrate that the results of ABIForest outperform those of other methods. The code of the proposed algorithms has been made available. Full article
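
The Nadaraya–Watson form of attention named above is simple to state on its own: a prediction is a kernel-weighted average of training targets. The standalone snippet below shows only that mechanism; ABIForest applies weights of this form to tree paths with learnable parameters, which is not reproduced here.

```python
# Nadaraya-Watson kernel regression: the prediction is a normalized,
# kernel-weighted (attention-like) average of the training targets.
import numpy as np

def nadaraya_watson(x_query, X_train, y_train, bandwidth=0.5):
    # Gaussian kernel weights -> normalized attention over training points
    w = np.exp(-0.5 * ((x_query - X_train) / bandwidth) ** 2)
    w /= w.sum()
    return float(w @ y_train)

X = np.linspace(0, 6, 50)
y = np.sin(X) + np.random.default_rng(0).normal(0, 0.1, 50)
print(nadaraya_watson(3.0, X, y))  # roughly sin(3.0)
```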

Article
Forecasting for Chaotic Time Series Based on GRP-lstmGAN Model: Application to Temperature Series of Rotary Kiln
Entropy 2023, 25(1), 52; https://doi.org/10.3390/e25010052 - 27 Dec 2022
Viewed by 798
Abstract
Rotary kiln temperature forecasting plays a significant part in the automatic control of the sintering process. However, accurate forecasts are difficult owing to the complex nonlinear characteristics of rotary kiln temperature time series. With the development of chaos theory, prediction accuracy can be improved by analyzing the essential characteristics of time series. However, the existing prediction methods for chaotic time series cannot fully consider the local and global characteristics of the time series at the same time. Therefore, in this study, the global recurrence plot (GRP)-based generative adversarial network (GAN) and long short-term memory (LSTM) combination method, named GRP-lstmGAN, is proposed, which can effectively capture important information across time scales. First, the data are subjected to a series of pre-processing operations, including data smoothing. Then, transforming the one-dimensional time series into two-dimensional images by GRP makes full use of the global and local information of the time series. Finally, the combination of LSTM and an improved GAN model is used for temperature time series prediction. The experimental results show that our model outperforms the comparison models. Full article
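
The time-series-to-image step can be illustrated with an ordinary thresholded recurrence plot; the paper's global variant (GRP) and its embedding settings differ, so treat this as a sketch of the general idea only.

```python
# Sketch of turning a 1-D time series into a recurrence-plot image,
# the time-series-to-image step the GRP-lstmGAN pipeline relies on.
import numpy as np

def recurrence_plot(series: np.ndarray, eps: float) -> np.ndarray:
    # R[i, j] = 1 where |x_i - x_j| < eps, else 0
    diff = np.abs(series[:, None] - series[None, :])
    return (diff < eps).astype(np.uint8)

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # toy "kiln temperature" series
rp = recurrence_plot(x, eps=0.2)
print(rp.shape, rp.mean())                  # 200x200 binary image
```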

Article
Cluster-Based Structural Redundancy Identification for Neural Network Compression
Entropy 2023, 25(1), 9; https://doi.org/10.3390/e25010009 - 21 Dec 2022
Viewed by 1024
Abstract
The increasingly large structure of neural networks makes it difficult to deploy on edge devices with limited computing resources. Network pruning has become one of the most successful model compression methods in recent years. Existing works typically compress models based on importance, removing unimportant filters. This paper reconsiders model pruning from the perspective of structural redundancy, claiming that identifying functionally similar filters plays a more important role, and proposes a model pruning framework for clustering-based redundancy identification. First, we perform cluster analysis on the filters of each layer to generate similar sets with different functions. We then propose a criterion for identifying redundant filters within similar sets. Finally, we propose a pruning scheme that automatically determines the pruning rate of each layer. Extensive experiments on various benchmark network architectures and datasets demonstrate the effectiveness of our proposed framework. Full article
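
A toy version of the clustering step, assuming PyTorch and scikit-learn: flatten one layer's filters, cluster them, and keep only the most central filter of each cluster. The paper's redundancy criterion within similar sets and its automatic per-layer pruning rates are more elaborate than this.

```python
# Sketch of clustering-based redundancy identification for one conv layer:
# cluster flattened filters and mark all but the most central filter of
# each cluster as candidates for pruning.
import numpy as np
import torch
from sklearn.cluster import KMeans

conv = torch.nn.Conv2d(16, 32, kernel_size=3)           # example layer
filters = conv.weight.detach().numpy().reshape(32, -1)  # one row per filter

k = 20                                                   # assumed cluster count
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(filters)

keep = set()
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(filters[members] - km.cluster_centers_[c], axis=1)
    keep.add(int(members[np.argmin(dists)]))            # keep most central filter

prune = sorted(set(range(32)) - keep)
print(f"prune {len(prune)}/32 filters:", prune)
```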

Article
A Dual-Population-Based NSGA-III for Constrained Many-Objective Optimization
Entropy 2023, 25(1), 13; https://doi.org/10.3390/e25010013 - 21 Dec 2022
Viewed by 904
Abstract
The main challenge for constrained many-objective optimization problems (CMaOPs) is how to achieve a balance between feasible and infeasible solutions. Most of the existing constrained many-objective evolutionary algorithms (CMaOEAs) are feasibility-driven, neglecting the maintenance of population convergence and diversity when dealing with conflicting objectives and constraints. This might lead to the population being stuck in locally optimal or locally feasible regions. To alleviate these challenges, we propose a dual-population-based NSGA-III, named DP-NSGA-III, in which the two populations exchange information through their offspring. The main population, based on NSGA-III, solves the CMaOP, while the auxiliary population, with a different environmental selection, ignores the constraints. In addition, we design an ε-constraint handling method in combination with NSGA-III, aiming to exploit the excellent infeasible solutions in the main population. The proposed DP-NSGA-III is compared with four state-of-the-art CMaOEAs on a series of benchmark problems. The experimental results show that the proposed evolutionary algorithm is highly competitive in solving CMaOPs. Full article

Article
Initial Solution Generation and Diversified Variable Picking in Local Search for (Weighted) Partial MaxSAT
Entropy 2022, 24(12), 1846; https://doi.org/10.3390/e24121846 - 18 Dec 2022
Viewed by 879
Abstract
The (weighted) partial maximum satisfiability ((W)PMS) problem is an important generalization of the classic problem of propositional (Boolean) satisfiability with a wide range of real-world applications. In this paper, we propose an initialization and a diversification strategy to improve local search for the (W)PMS problem. Our initialization strategy is based on a novel definition of variables’ structural entropy, and it aims to generate a solution that is close to a high-quality feasible one. Then, our diversification strategy picks a variable in two possible ways, depending on a parameter: continuing to pick variables with the best benefits or focusing on a clause with the greatest penalty and then selecting variables probabilistically. Based on these strategies, we developed a local search solver dubbed ImSATLike, as well as a hybrid solver ImSATLike-TT, and experimental results on (weighted) partial MaxSAT instances in recent MaxSAT Evaluations show that they outperform or have nearly the same performances as state-of-the-art local search and hybrid competitors, respectively, in general. Furthermore, we carried out experiments to confirm the individual impacts of each proposed strategy. Full article
Article
Advanced Spatial and Technological Aggregation Scheme for Energy System Models
Energies 2022, 15(24), 9517; https://doi.org/10.3390/en15249517 - 15 Dec 2022
Cited by 1 | Viewed by 912
Abstract
Energy system models that consider variable renewable energy sources (VRESs) are computationally complex. The greater spatial scope and level of detail entailed in such models exacerbate this complexity. As a complexity-reduction approach, this paper considers the simultaneous spatial and technological aggregation of energy system models. To that end, a novel two-step aggregation scheme is introduced. First, model regions are spatially aggregated to obtain a reduced region set. The aggregation is based on model parameters such as VRES time series and capacities, and it also takes the spatial contiguity of regions into account. Next, technological aggregation is performed on each VRES, in each region, based on its time series. The aggregations' impact on the accuracy and complexity of a cost-optimal European energy system model is analyzed. The model is aggregated to obtain different combinations of numbers of regions and VRES types. Results are benchmarked against an initial resolution of 96 regions, with 68 VRES types in each. The system cost deviates significantly when smaller numbers of regions and/or VRES types are considered. As the spatial and technological resolutions increase, the cost fluctuates initially and eventually stabilizes, approaching the benchmark. The optimal combination is determined based on an acceptable cost deviation of <5% and the point of stabilization. A total of 33 regions with 38 VRES types in each is deemed optimal. Here, the cost is underestimated by 4.42%, but the run time is reduced by 92.95%. Full article
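The spatial step can be pictured as contiguity-constrained hierarchical clustering. The sketch below, with synthetic data and a placeholder chain adjacency standing in for the real shared-border matrix, groups 96 regions into 33 clusters; it illustrates the idea, not the paper's tool chain.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    # Contiguity-constrained spatial aggregation (illustrative sketch).
    # Rows of X stack model parameters per region, e.g., flattened hourly
    # VRES capacity-factor time series; the data here are synthetic.
    n_regions, n_features = 96, 8760
    rng = np.random.default_rng(0)
    X = rng.random((n_regions, n_features))

    # Placeholder contiguity: a chain linking region i to region i+1.
    # In practice this would be a shared-border adjacency matrix.
    adjacency = (np.diag(np.ones(n_regions - 1), 1)
                 + np.diag(np.ones(n_regions - 1), -1))

    model = AgglomerativeClustering(n_clusters=33, connectivity=adjacency,
                                    linkage="ward")
    labels = model.fit_predict(X)   # reduced region set: 33 spatial clusters

The technological step could then apply the same kind of clustering per region, grouping the 68 VRES types by the similarity of their time series.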
Article
Curriculum Reinforcement Learning Based on K-Fold Cross Validation
Entropy 2022, 24(12), 1787; https://doi.org/10.3390/e24121787 - 06 Dec 2022
Cited by 2 | Viewed by 1355
Abstract
With the continuous development of deep reinforcement learning in intelligent control, combining automatic curriculum learning and deep reinforcement learning can improve training performance and efficiency by progressing from easy to difficult tasks. Most existing automatic curriculum learning algorithms perform curriculum ranking through expert experience and a single network, which makes curriculum task ranking difficult and convergence slow. In this paper, we propose a curriculum reinforcement learning method based on K-fold cross validation that estimates the relative difficulty of curriculum tasks. Drawing on the human notion of learning from easy to difficult, the method divides automatic curriculum learning into a curriculum difficulty assessment stage and a curriculum sorting stage. Through parallel training of teacher models and cross-evaluation of task sample difficulty, the method can better order curriculum learning tasks. Finally, simulation comparison experiments were carried out in two types of multi-agent experimental environments. The experimental results show that the automatic curriculum learning method based on K-fold cross validation can improve the training speed of the MADDPG algorithm and also generalizes to other multi-agent deep reinforcement learning algorithms based on the replay buffer mechanism. Full article
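The difficulty-assessment stage can be sketched as follows, with hypothetical train_teacher and evaluate callables standing in for the actual teacher training and rollout evaluation; the scoring rule (held-out return, sorted from easy to hard) is an illustrative reading of the method, not the authors' code.

    import numpy as np

    def rank_tasks_by_difficulty(tasks, train_teacher, evaluate, k=5, seed=0):
        """Illustrative K-fold difficulty scoring for curriculum RL.

        tasks        : list of task specifications
        train_teacher: callable(list_of_tasks) -> policy
        evaluate     : callable(policy, task) -> mean return
        Each task is scored by a teacher that never saw it during training,
        then tasks are sorted from easy (high return) to hard (low return).
        """
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(tasks))
        folds = np.array_split(order, k)
        scores = np.zeros(len(tasks))
        for i, fold in enumerate(folds):
            train_set = [tasks[j] for f, fd in enumerate(folds)
                         if f != i for j in fd]
            policy = train_teacher(train_set)
            for j in fold:
                scores[j] = evaluate(policy, tasks[j])  # higher = easier
        return [tasks[j] for j in np.argsort(-scores)]  # easy-to-hard order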
Article
Applications of Virtual Machine Using Multi-Objective Optimization Scheduling Algorithm for Improving CPU Utilization and Energy Efficiency in Cloud Computing
Energies 2022, 15(23), 9164; https://doi.org/10.3390/en15239164 - 02 Dec 2022
Cited by 2 | Viewed by 953
Abstract
Financial costs and energy savings are especially critical for computationally intensive workflows, as such workflows generally require extended execution times and therefore consume considerable energy and incur high financial costs. Through the effective utilization of scheduled gaps, the total execution time of a workflow can be decreased by placing uncompleted tasks into the gaps via approximate computations. In the current research, a novel approach based on multi-objective optimization is utilized, with CloudSim as the underlying simulator, to evaluate VM (virtual machine) allocation performance. In this study, we determine the energy consumption, CPU utilization, and number of executed instructions in each scheduling interval for complex VM scheduling solutions in order to improve energy efficiency and reduce execution time. Finally, all of the tested parameters are simulated and evaluated with proper validation in CloudSim. Based on the results, multi-objective PSO (particle swarm optimization) achieves better and more efficient results across the different parameters than multi-objective GA (genetic algorithm) optimization. Full article
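As an illustration of the optimization core, the sketch below runs a plain PSO over a random-keys encoding of task-to-VM assignments with a weighted sum of toy energy and balance objectives; a true multi-objective variant would keep a Pareto archive instead. All models and coefficients are placeholder assumptions, not CloudSim's.

    import numpy as np

    # Illustrative PSO for VM allocation (weighted-sum scalarization).
    # Positions are "random keys": task t goes to VM int(x[t]) after clipping.
    rng = np.random.default_rng(0)
    n_tasks, n_vms, n_particles, iters = 40, 8, 30, 100
    load = rng.uniform(1.0, 5.0, n_tasks)        # instructions per task (toy)

    def fitness(x):
        assign = np.clip(x.astype(int), 0, n_vms - 1)
        vm_load = np.bincount(assign, weights=load, minlength=n_vms)
        energy = vm_load.sum() * 0.9 + (vm_load > 0).sum() * 5.0  # toy model
        imbalance = vm_load.std()                # CPU-utilization balance
        return 0.6 * energy + 0.4 * imbalance    # weighted objectives

    X = rng.uniform(0, n_vms, (n_particles, n_tasks))
    V = np.zeros_like(X)
    pbest, pval = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
        X = np.clip(X + V, 0, n_vms - 1e-9)
        vals = np.array([fitness(x) for x in X])
        better = vals < pval
        pbest[better], pval[better] = X[better], vals[better]
        g = pbest[pval.argmin()].copy()          # best allocation found so far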
Article
Improved Black Widow Spider Optimization Algorithm Integrating Multiple Strategies
Entropy 2022, 24(11), 1640; https://doi.org/10.3390/e24111640 - 11 Nov 2022
Cited by 7 | Viewed by 1087
Abstract
The black widow spider optimization algorithm (BWOA) suffers from slow convergence and a tendency to fall into local optima. To address these problems, this paper proposes a multi-strategy improved black widow spider optimization algorithm (IBWOA). First, Gauss chaotic mapping is introduced to initialize the population, ensuring diversity at the initial stage. Then, a sine-cosine strategy is introduced to perturb individuals during iteration, improving the global search ability of the algorithm. In addition, an elite opposition-based learning strategy is introduced to improve the convergence speed. Finally, the mutation method of the differential evolution algorithm is integrated to recombine individuals with poor fitness values. Analysis of the optimization results on 13 benchmark test functions and a subset of the CEC2017 test functions verifies the effectiveness and rationality of each improvement strategy. Moreover, the proposed algorithm shows significant improvements in solution accuracy, performance, and convergence speed compared with other algorithms. Furthermore, the IBWOA is used to solve six practical constrained engineering problems. The results show that the IBWOA has excellent optimization ability and scalability. Full article
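The chaotic initialization can be sketched as below, assuming one common form of the Gauss (mouse) map, x_{n+1} = frac(1 / x_n); the seed and bounds are illustrative, and other forms of the map exist.

    import numpy as np

    def gauss_chaotic_init(pop_size, dim, lower, upper, seed=0.7):
        """Illustrative population initialization with the Gauss/mouse map.
        The chaotic sequence in (0, 1) is scaled to the search bounds, giving
        a more evenly spread initial population than plain uniform sampling."""
        x = seed
        pop = np.empty((pop_size, dim))
        for i in range(pop_size):
            for d in range(dim):
                x = (1.0 / x) % 1.0 if x != 0 else seed  # avoid division by 0
                pop[i, d] = lower + x * (upper - lower)
        return pop

    population = gauss_chaotic_init(30, 10, lower=-100.0, upper=100.0)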
Article
An HGA-LSTM-Based Intelligent Model for Ore Pulp Density in the Hydrometallurgical Process
Materials 2022, 15(21), 7586; https://doi.org/10.3390/ma15217586 - 28 Oct 2022
Cited by 1 | Viewed by 720
Abstract
This study focused on an intelligent model for ore pulp density in the hydrometallurgical process. Owing to the limitations of existing instruments and devices, the feed ore pulp density of the thickener, a key piece of hydrometallurgical equipment, cannot be accurately measured online. Therefore, aiming at the problem of accurately measuring the feed ore pulp density, we propose a new intelligent model based on long short-term memory (LSTM) and a hybrid genetic algorithm (HGA). Specifically, the HGA is a novel optimization search algorithm used to tune the hyperparameters and improve the modeling performance of the LSTM. The proposed intelligent model was successfully applied to an actual thickener case in China. The prediction results demonstrate that the hybrid model outperforms other models and satisfies the factory's measurement accuracy requirements. Full article
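A compact sketch of the GA-over-hyperparameters idea follows; train_and_score is a hypothetical stub standing in for training an LSTM and returning validation error, and the search space, operators, and rates are illustrative assumptions rather than the paper's HGA.

    import random

    # Illustrative GA-style hyperparameter search for an LSTM (sketch).
    SPACE = {"units": (16, 256), "lr": (1e-4, 1e-2), "window": (6, 48)}

    def train_and_score(ind):   # hypothetical stub: lower is better;
        # in practice: build/train an LSTM with `ind`, return validation error
        return abs(ind["units"] - 96) / 96 + ind["lr"] + abs(ind["window"] - 24) / 24

    def random_ind(rng):
        return {"units": rng.randint(*SPACE["units"]),
                "lr": rng.uniform(*SPACE["lr"]),
                "window": rng.randint(*SPACE["window"])}

    def evolve(pop_size=20, generations=15, seed=1):
        rng = random.Random(seed)
        pop = [random_ind(rng) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=train_and_score)
            elite = pop[: pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rng.sample(elite, 2)
                child = {k: rng.choice([a[k], b[k]]) for k in SPACE}  # crossover
                if rng.random() < 0.3:            # mutation: resample one gene
                    k = rng.choice(list(SPACE))
                    child[k] = random_ind(rng)[k]
                children.append(child)
            pop = elite + children
        return min(pop, key=train_and_score)

    best = evolve()   # best hyperparameter set found by the search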
Article
Research on Joint Resource Allocation for Multibeam Satellite Based on Metaheuristic Algorithms
Entropy 2022, 24(11), 1536; https://doi.org/10.3390/e24111536 - 26 Oct 2022
Viewed by 813
Abstract
With the rapid growth of satellite communication demand and the continuous development of high-throughput satellite systems, the satellite resource allocation problem—also called the dynamic resources management (DRM) problem—has become increasingly complex in recent years. The use of metaheuristic algorithms to obtain acceptably good solutions has become an active research topic with considerable room for further exploration. In particular, the treatment of invalid solutions is key to algorithm performance. At present, the unused bandwidth allocation (UBA) method is commonly used to address the bandwidth constraint in the DRM problem. However, this method reduces the algorithm's flexibility in the solution space, diminishes the quality of the optimized solution, and increases the computational complexity. In this paper, we propose a bandwidth constraint handling approach based on the non-dominated beam coding (NDBC) method, which eliminates the bandwidth overlap constraint from the algorithm's population evolution and achieves complete bandwidth flexibility, increasing the quality of the optimal solution while decreasing the computational complexity. We develop a generic application architecture for metaheuristic algorithms using the NDBC method and successfully apply it to four typical algorithms. The results indicate that NDBC can enhance the quality of the optimized solution by 9–33% while simultaneously reducing computational complexity by 9–21%. Full article
Article
Model NOx, SO2 Emissions Concentration and Thermal Efficiency of CFBB Based on a Hyper-Parameter Self-Optimized Broad Learning System
Energies 2022, 15(20), 7700; https://doi.org/10.3390/en15207700 - 18 Oct 2022
Cited by 1 | Viewed by 854
Abstract
At present, establishing a multidimensional characteristic model of a boiler combustion system plays an important role in realizing its dynamic optimization and real-time control, with the aim of reducing environmental pollution and saving coal resources. However, the complexity of the boiler combustion process makes it difficult to model using traditional mathematical methods. In this paper, a hyper-parameter self-optimizing broad learning system based on a sparrow search algorithm is proposed to model the NOx and SO2 emission concentrations and the thermal efficiency of a circulating fluidized bed boiler (CFBB). The broad learning system (BLS) is a novel neural network algorithm that shows good performance in multidimensional feature learning. However, the BLS has several hyper-parameters that must be set over wide ranges, making the optimal combination difficult to determine. This paper uses a sparrow search algorithm (SSA) to select the optimal hyper-parameter combination of the broad learning system, denoted SSA-BLS. To verify the effectiveness of SSA-BLS, ten benchmark regression datasets are applied. Experimental results show that SSA-BLS achieves good regression accuracy and model stability. Additionally, the proposed SSA-BLS is applied to model the combustion process parameters of a 330 MW circulating fluidized bed boiler. Experimental results reveal that SSA-BLS can establish accurate prediction models for thermal efficiency, NOx emission concentration, and SO2 emission concentration, separately. Altogether, SSA-BLS is an effective modelling method. Full article
Article
A Pattern-Recognizer Artificial Neural Network for the Prediction of New Crescent Visibility in Iraq
Computation 2022, 10(10), 186; https://doi.org/10.3390/computation10100186 - 13 Oct 2022
Cited by 4 | Viewed by 1346
Abstract
Various theories have been proposed since the last century to predict the first sighting of a new crescent moon. None of them uses machine learning or deep learning to process, interpret, and simulate the patterns hidden in databases; many instead use interpolation and extrapolation techniques to identify sighting regions from such data. In this study, a pattern-recognizer artificial neural network was trained to distinguish between visibility regions. Essential parameters of crescent moon sighting were collected from moon-sighting datasets and used to build an intelligent pattern recognition system that predicts crescent sighting conditions. The proposed ANN reproduced the datasets with an accuracy of more than 72% in comparison with the actual observational results. The ANN simulation gives a clear insight into three crescent moon visibility regions: invisible (I), probably visible (P), and certainly visible (V). The proposed ANN is suitable for building lunar calendars, so it was used to build a four-year calendar for the horizon of Baghdad; the resulting calendar was compared with the official Hijri calendar in Iraq. Full article
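A minimal sketch of such a pattern-recognizer network using scikit-learn: the three class labels I, P, and V follow the abstract, while the features (moon age, arc of light, altitude difference), thresholds, and synthetic data are hypothetical stand-ins for the paper's observational parameters.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Illustrative three-class crescent-visibility classifier (sketch).
    rng = np.random.default_rng(0)
    X = rng.random((500, 3)) * [40.0, 20.0, 15.0]   # synthetic feature ranges
    # Toy labeling rule on the second feature, standing in for real labels:
    y = np.where(X[:, 1] < 7, "I", np.where(X[:, 1] < 10, "P", "V"))

    clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                        random_state=0)
    clf.fit(X, y)
    print(clf.predict([[20.0, 12.0, 8.0]]))         # -> ['V'] under the toy rule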
Article
Shear Strength Prediction Model for RC Exterior Joints Using Gene Expression Programming
Materials 2022, 15(20), 7076; https://doi.org/10.3390/ma15207076 - 12 Oct 2022
Viewed by 887
Abstract
Predictive models were developed to effectively estimate the shear strength of RC exterior joints using gene expression programming (GEP). Two separate models are proposed for the exterior joints: the first with shear reinforcement and the second without. Experimental results from 253 tests covering the relevant input parameters were extracted from the literature to provide the knowledge base for the GEP analysis. The database was divided into two portions: 152 exterior joint experiments with joint transverse reinforcement and 101 unreinforced joint specimens. Moreover, the effects of various material and geometric factors (usually ignored in the available models) were incorporated into the proposed models. These factors are the beam and column geometries, the concrete and steel material properties, the longitudinal and shear reinforcements, and the column axial loads. Statistical analysis and comparisons with previously proposed analytical and empirical models indicate a high degree of accuracy of the proposed models, rendering them ideal for practical application. Full article
Article
Analysis of Vulnerability on Weighted Power Networks under Line Breakdowns
Entropy 2022, 24(10), 1449; https://doi.org/10.3390/e24101449 - 11 Oct 2022
Cited by 3 | Viewed by 854
Abstract
Vulnerability is a major concern for power networks. Malicious attacks have the potential to trigger cascading failures and large blackouts. The robustness of power networks against line failure has been of interest in the past several years. However, the commonly studied unweighted scenario cannot capture the weighted situations found in the real world. This paper investigates the vulnerability of weighted power networks. Firstly, we propose a more practical capacity model to investigate the cascading failure of weighted power networks under different attack strategies. Results show that a smaller capacity-parameter threshold increases the vulnerability of weighted power networks. Furthermore, a weighted electrical cyber-physical interdependent network is developed to study the vulnerability and failure dynamics of the entire power network. We perform simulations on the IEEE 118-bus case to evaluate the vulnerability under various coupling schemes and different attack strategies. Simulation results show that heavier loads increase the likelihood of blackouts and that different coupling strategies play a crucial role in cascading failure performance. Full article
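Capacity models of this general kind are often written as capacity = (1 + α) × initial load. The sketch below, using networkx and weighted edge betweenness as a load proxy, is an illustrative assumption about the model family, not the paper's exact formulation.

    import networkx as nx

    def cascade(G, alpha, attacked):
        """Illustrative cascading-failure model: each line's capacity is
        (1 + alpha) times its initial load (weighted edge betweenness as a
        load proxy). Lines whose load exceeds capacity after a removal fail
        in turn; smaller alpha means a more fragile network."""
        load = nx.edge_betweenness_centrality(G, weight="weight")
        capacity = {e: (1 + alpha) * l for e, l in load.items()}
        G = G.copy()
        G.remove_edge(*attacked)
        failed = True
        while failed and G.number_of_edges() > 0:
            failed = False
            load = nx.edge_betweenness_centrality(G, weight="weight")
            for e, l in load.items():            # batch removal per round
                key = e if e in capacity else (e[1], e[0])
                if l > capacity[key]:
                    G.remove_edge(*e)
                    failed = True
        return G                                 # surviving network

    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.5), (2, 3, 1.0)])
    survivors = cascade(G, alpha=0.2, attacked=(0, 1))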
Article
An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning
Entropy 2022, 24(10), 1377; https://doi.org/10.3390/e24101377 - 27 Sep 2022
Cited by 1 | Viewed by 954
Abstract
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic, given the naturally hidden internals of deep neural networks, and they have become a critical academic emphasis in the current security field. However, current black-box attack methods still have shortcomings that lead to incomplete utilization of query information. Our research, based on the recently proposed Simulator Attack, proves for the first time the correctness and usability of the feature-layer information in a simulator model obtained by meta-learning. We then propose an optimized Simulator Attack+ based on this discovery. The optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear, self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module that provides a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed, improving query efficiency while maintaining attack performance. Full article
Article
Dynamic Programming BN Structure Learning Algorithm Integrating Double Constraints under Small Sample Condition
Entropy 2022, 24(10), 1354; https://doi.org/10.3390/e24101354 - 24 Sep 2022
Viewed by 834
Abstract
The Bayesian network (BN) structure learning algorithm based on dynamic programming can obtain globally optimal solutions. However, when the sample cannot fully represent the real structure, and especially when the sample size is small, the obtained structure is inaccurate. Therefore, this paper studies the planning process and underlying principles of dynamic programming, restricts the process with edge and path constraints, and proposes a dynamic programming BN structure learning algorithm with double constraints for small sample conditions. The algorithm uses the double constraints to limit the planning process of dynamic programming, reducing the planning space. It then uses the double constraints to limit the selection of the optimal parent node, ensuring that the optimal structure conforms to prior knowledge. Finally, the methods with and without integrated prior knowledge are simulated and compared. The simulation results verify the effectiveness of the proposed method and prove that integrating prior knowledge can significantly improve the efficiency and accuracy of BN structure learning. Full article
Review
Optimization-Based High-Frequency Circuit Miniaturization through Implicit and Explicit Constraint Handling: Recent Advances
Energies 2022, 15(19), 6955; https://doi.org/10.3390/en15196955 - 22 Sep 2022
Cited by 1 | Viewed by 761
Abstract
Miniaturization trends in high-frequency electronics have led to accommodation challenges in the integration of the corresponding components, and size reduction has become a practical necessity. At the same time, the increasing performance demands imposed on electronic systems remain in conflict with component miniaturization. On the practical side, the challenges related to handling design constraints are aggravated by the high cost of system evaluation, which normally requires full-wave electromagnetic (EM) analysis. Some of these issues can be alleviated by implicit constraint handling using the penalty function approach. Yet its performance depends on the arrangement of the penalty factors, necessitating a costly trial-and-error procedure to identify their optimum setup. A workaround is offered by recently proposed algorithms with automatic adaptation of the penalty factors using different adjustment schemes. However, these intricate strategies require a continuous, problem-dependent adaptation of the penalty function throughout the entire optimization process. Alternative methodologies take an explicit approach to handling the inequality constraints, along with correction-based control over equality conditions, a combination that proves demonstrably competitive for some miniaturization tasks. Nevertheless, optimization-based miniaturization, whether using implicit or explicit constraint handling, remains a computationally expensive task. A reliable way of reducing the aforementioned costs is the incorporation of multi-resolution EM fidelity models into the miniaturization procedure, whose principal operation is based on the simultaneous monitoring of factors such as the quality of constraint satisfaction and the algorithm's convergence status. This paper provides an overview of the abovementioned size-reduction algorithms, in which theoretical considerations are illustrated using a number of antenna and microwave circuit case studies. Full article
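The implicit (penalty function) approach reduces to adding weighted violation terms to the objective; a minimal sketch, with all functions and factors as placeholders:

    # Illustrative penalty-function handling of a size-reduction task:
    # minimize circuit area A(x) subject to performance constraints such as
    # |S11| <= -10 dB in band, expressed as g_i(x) <= 0.

    def penalized_cost(x, area, constraints, factors):
        """area(x) -> scalar; constraints: list of g_i(x) <= 0 callables;
        factors: penalty coefficients beta_i, normally found by the costly
        trial-and-error procedure mentioned above."""
        cost = area(x)
        for g, beta in zip(constraints, factors):
            cost += beta * max(0.0, g(x)) ** 2   # quadratic penalty on violation
        return cost

The adaptive schemes reviewed in the paper automate the choice of the beta factors that this naive version leaves to trial and error.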
Article
Sensor Fusion for Occupancy Estimation: A Study Using Multiple Lecture Rooms in a Complex Building
Mach. Learn. Knowl. Extr. 2022, 4(3), 803-813; https://doi.org/10.3390/make4030039 - 16 Sep 2022
Viewed by 1513
Abstract
This paper applies various machine learning methods to explore how combining multiple sensors can improve occupancy estimation. A reliable occupancy estimate is useful in many applications; for the containment of the SARS-CoV-2 virus in particular, room occupancy is a major factor. The estimation can benefit visitor management systems in real time, but can also inform room reservation strategies. Using different terminal and non-terminal sensors in premises of varying sizes, this paper aims to estimate room occupancy. In the process, the proposed models are trained with different combinations of rooms in the training and testing datasets to examine differences in the infrastructure of the considered building. The results indicate that the estimation benefits from a combination of different sensors. Additionally, it is found that a model should be trained with data from every room in a building and cannot simply be transferred to other rooms. Full article
Article
A Period-Based Neural Network Algorithm for Predicting Building Energy Consumption of District Heating
Energies 2022, 15(17), 6338; https://doi.org/10.3390/en15176338 - 30 Aug 2022
Viewed by 903
Abstract
Northern China is vigorously promoting cogeneration and clean heating technologies. The accurate prediction of building energy consumption is the basis for heating regulation. In this paper, the daily, weekly, and annual periods of building energy consumption are determined by Fourier transformation. Accordingly, a period-based neural network (PBNN) is proposed to predict building energy consumption. The main innovation of PBNN is the introduction of a new data structure, which is a time-discontinuous sliding window. The sliding window consists of the past 24 h, 24 h for the same period last week, and 24 h for the same period the previous year. When predicting the building energy consumption for the next 1 h, 12 h, and 24 h, the prediction errors of the PBNN are 2.30%, 3.47%, and 3.66% lower than those of the traditional sliding window PBNN (TSW-PBNN), respectively. The training time of PBNN is approximately half that of TSW-PBNN. The time-discontinuous sliding window reduces the energy consumption prediction error and neural network model training time. Full article
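The time-discontinuous sliding window is easy to state in code. The sketch below, with hour-indexed data and a 24 h block from each of the three periods, is a plausible reading of the data structure; the index conventions are assumptions.

    import numpy as np

    def discontinuous_window(series, t, day=24, week=168, year=8760):
        """Illustrative time-discontinuous sliding window: the 24 h before t,
        the same 24 h one week earlier, and the same 24 h one year earlier.
        `series` is an hourly load array; t is the current hour index."""
        parts = (series[t - day:t],
                 series[t - week - day:t - week],
                 series[t - year - day:t - year])
        return np.concatenate(parts)             # 72-dimensional model input

    hourly = np.random.default_rng(0).random(2 * 8760)  # two synthetic years
    x = discontinuous_window(hourly, t=9000)
    assert x.shape == (72,)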
Article
Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization
Entropy 2022, 24(9), 1213; https://doi.org/10.3390/e24091213 - 30 Aug 2022
Viewed by 910
Abstract
Network data analysis is a crucial method for mining complicated object interactions. In recent years, random walk and neural-language-model-based network representation learning (NRL) approaches have been widely used for network data analysis. However, these NRL approaches suffer from the following deficiencies: firstly, because the random walk procedure is based on symmetric node similarity and a fixed probability distribution, the sampled vertex sequences may lose local community structure information; secondly, because the feature extraction capacity of the shallow neural language model is limited, they can only extract the local structural features of networks; and thirdly, these approaches require specially designed mechanisms for different downstream tasks to integrate vertex attributes of various types. We conducted an in-depth investigation of these issues and propose a novel general NRL framework called dynamic structure and vertex attribute fusion network embedding. The framework first defines an asymmetric similarity and an h-hop dynamic random walk strategy to guide the random walk process so that the network's local community structure is preserved in the walked vertex sequences. Next, we train a self-attention-based sequence prediction model on the walked vertex sequences to simultaneously learn the vertices' local and global structural features. Finally, we introduce an attributes-driven Laplacian space optimization to unify the processes of structural feature extraction and attribute feature extraction. The proposed approach is exhaustively evaluated by means of node visualization and classification on multiple benchmark datasets, and achieves superior results compared with baseline approaches. Full article
Article
Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition
Entropy 2022, 24(8), 1025; https://doi.org/10.3390/e24081025 - 26 Jul 2022
Cited by 4 | Viewed by 1297
Abstract
The quality of feature extraction plays a significant role in the performance of speech emotion recognition. In order to extract discriminative, affect-salient features from speech signals and thereby improve speech emotion recognition performance, a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A) is proposed in this paper. Firstly, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information. In the MSFCN, sub-branches are added after each pooling layer to retain features at different resolutions, which are then fused by element-wise addition. Secondly, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that extracts speech emotion features and supplies the temporal structure information of emotional features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fusion features. The proposed method uses the attention mechanism to calculate the contribution of different network features and then realizes their adaptive fusion by weighting them accordingly. To restrain gradient divergence, the different network features and the fusion features are connected through shortcut connections to obtain the fusion features used for recognition. The experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that our proposed method significantly improves recognition performance, with a recognition rate superior to most existing state-of-the-art methods. Full article
Article
Optimal Performance and Application for Seagull Optimization Algorithm Using a Hybrid Strategy
Entropy 2022, 24(7), 973; https://doi.org/10.3390/e24070973 - 14 Jul 2022
Cited by 1 | Viewed by 1024
Abstract
This paper presents a novel hybrid algorithm named SPSOA to address the seagull optimization algorithm's problems of low search capability and a tendency to fall into local optima. Firstly, the Sobol sequence from the low-discrepancy sequences is used to initialize the seagull population, enhancing the population's diversity and ergodicity. Then, inspired by the sigmoid function, a new parameter is designed to strengthen the algorithm's ability to balance early exploration and later exploitation. Finally, the particle swarm optimization learning strategy is introduced into the seagull position update method to improve the algorithm's ability to escape local optima. Simulation comparisons with other algorithms on 12 benchmark test functions, examined from different angles, show that SPSOA is superior in stability, convergence accuracy, and speed. In engineering applications, SPSOA is applied to the blind source separation of mixed images. The experimental results show that SPSOA can successfully realize the blind source separation of noisy mixed images and achieve higher separation performance than the compared algorithms. Full article
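The sigmoid-inspired schedule can be sketched as follows; the specific functional form, steepness, and endpoint values are illustrative guesses at the kind of parameter described, not the paper's exact design.

    import numpy as np

    def sigmoid_control_factor(t, t_max, a_start=2.0, a_end=0.0, k=10.0):
        """Illustrative sigmoid-shaped control parameter: stays near a_start
        (strong exploration) early on, then decays smoothly toward a_end
        (exploitation) past the midpoint of the run."""
        s = 1.0 / (1.0 + np.exp(k * (t / t_max - 0.5)))   # 1 -> 0 over the run
        return a_end + (a_start - a_end) * s

    factors = [sigmoid_control_factor(t, 500) for t in range(500)]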
Review
Emergent Intelligence in Generalized Pure Quantum Systems
Computation 2022, 10(6), 88; https://doi.org/10.3390/computation10060088 - 31 May 2022
Cited by 1 | Viewed by 1315
Abstract
This paper presents the generalized information system theory, which is extended to pure quantum systems using wave probability functions. The novelty of this approach lies in analogies with electrical circuits and quantum physics. Information power was chosen as the relevant parameter because it guarantees the balance of both components: information flow and information content. Next, the principles of quantum resonance between individual information components, which can lead to emergent behavior, are analyzed. For such a system, adding more and more probabilistic information elements can lead to better convergence of the whole to the resulting trajectory, owing to the phase parameters. The paper also offers an original interpretation of information "source–recipient" or "resource–demand" models, including the not-yet-implemented "unused resources" and "unmet demands". Finally, possible applications of these principles are shown in several examples, from the quantum gyrator to the hypothetical possibility of explaining some properties of consciousness. Full article
Article
Bullet Frangibility Factor Quantification by Using Explicit Dynamic Simulation Method
Computation 2022, 10(6), 79; https://doi.org/10.3390/computation10060079 - 24 May 2022
Viewed by 1772
Abstract
Frangible bullets have the unique property of disintegrating into fragments upon hitting a hard target or obstacle; this ability to fragment after impact is called frangibility. In this study, frangibility testing was carried out theoretically via modeling with the explicit dynamics method, using the ANSYS Autodyn solver integrated in ANSYS Workbench. This paper analyzes frangibility through two main factors: material properties and projectile design. The results show the scattering and the remaining bullet fragments after impact. According to the modeling results, the frangibility factor values for the AMMO 1 and AMMO 2 designs are 9.34 and 10.79, respectively. Comparing the experimental and simulation results on the basis of the frangibility factor, the errors for AMMO 1 and AMMO 2 are 10.5% and 1.09%, respectively. The simulation results also show that the AMMO 2 design produces a scattering pattern with more particles than the AMMO 1 design, with the furthest fragment distances for the AMMO 1 and AMMO 2 bullets being 1.01 m and 2658 m, respectively. Full article
Article
Improved Shear Strength Prediction Model of Steel Fiber Reinforced Concrete Beams by Adopting Gene Expression Programming
Materials 2022, 15(11), 3758; https://doi.org/10.3390/ma15113758 - 24 May 2022
Cited by 8 | Viewed by 1638
Abstract
In this study, an artificial intelligence tool called gene expression programming (GEP) has been successfully applied to develop an empirical model that can predict the shear strength of steel fiber reinforced concrete beams. The proposed genetic model incorporates all the influencing parameters, such as the geometric properties of the beam, the concrete compressive strength, the shear span-to-depth ratio, and the mechanical and material properties of the steel fiber. Existing empirical models ignore the tensile strength of steel fibers, which exerts a strong influence on the crack propagation of the concrete matrix and thereby affects the beam shear strength. To overcome this limitation, an improved and robust empirical model is proposed herein that incorporates the fiber tensile strength along with the other influencing factors. For this purpose, an extensive experimental database comprising the results of 488 four-point loading tests drawn from the literature is constructed. The data are grouped by fiber shape (hooked or straight) and steel-fiber tensile strength. The empirical model is developed using this experimental database and statistically compared with previously established empirical equations. This comparison indicates that the proposed model significantly improves the prediction of the shear strength of steel fiber reinforced concrete beams, thus substantiating the important role of fiber tensile strength. Full article
Article
A Tailored Pricing Strategy for Different Types of Users in Hybrid Carsharing Systems
Algorithms 2022, 15(5), 172; https://doi.org/10.3390/a15050172 - 20 May 2022
Cited by 3 | Viewed by 1403
Abstract
Considering the characteristics of different types of users in hybrid carsharing systems, in which shared autonomous vehicles (SAVs) and conventional shared cars (CSCs) coexist, a tailored pricing strategy (TPS) is proposed to maximize the operator's profit and minimize all users' costs. The fleet sizes and the sizes of the SAV stations are determined simultaneously. A bi-objective nonlinear programming model is established, and a genetic algorithm is applied to solve it. Based on operational data from Lanzhou, China, carsharing users are clustered into three types: loyal users, losing users, and potential users. Results show that applying the TPS can help the operator increase profit and attract more users. Loyal users are assigned the highest price, yet they still contribute the most to the operator's profit, with the highest number of carsharing trips. Losing users and potential users are comparable in terms of the number of trips, while the latter generate more profit. Full article
Article
Predicting Box-Office Markets with Machine Learning Methods
Entropy 2022, 24(5), 711; https://doi.org/10.3390/e24050711 - 16 May 2022
Cited by 3 | Viewed by 1506
Abstract
The accurate prediction of gross box-office markets is of great benefit for investment and management in the movie industry. In this work, we propose a machine-learning-based method for predicting a country's movie box-office revenue, based on empirical comparisons of eight methods with diverse combinations of economic factors. Specifically, in time-series forecasting experiments from 2013 to 2016, we achieved a relative root mean squared error of 0.056 in the US and 0.183 in China for the two case-study movie markets. We conclude that the support-vector-machine-based method using gross domestic product reaches the best prediction performance while relying only on readily available economic indicators. The computational experiments and comparison studies provide evidence for the effectiveness and advantages of our proposed prediction strategy. In the validation of the predicted total box-office markets in 2017, the error rates were 0.044 in the US and 0.066 in China. In the consecutive predictions of nationwide box-office markets in 2018 and 2019, the mean relative absolute percentage errors achieved were 0.041 and 0.035 in the US and China, respectively. The precise predictions, in both the training and validation data, demonstrate the efficiency and versatility of our proposed method. Full article
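A minimal sketch of the winning configuration, support vector regression of box office on GDP, using scikit-learn; the numbers below are synthetic placeholders, not the paper's data.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Illustrative SVR of annual box office on GDP (synthetic values).
    gdp = np.array([[9.6], [10.5], [11.0], [11.1], [12.1], [13.9]])  # trillion USD
    box = np.array([10.9, 10.4, 11.1, 11.4, 11.1, 11.9])             # billion USD

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(gdp, box)
    pred = model.predict([[14.3]])   # forecast for a new GDP value
    # Relative RMSE, the error measure quoted in the abstract:
    rrmse = np.sqrt(np.mean(((model.predict(gdp) - box) / box) ** 2))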
Article
PSO Optimized Active Disturbance Rejection Control for Aircraft Anti-Skid Braking System
Algorithms 2022, 15(5), 158; https://doi.org/10.3390/a15050158 - 10 May 2022
Cited by 2 | Viewed by 1421
Abstract
A high-quality and secure touchdown run is essential for an aircraft for economic, operational, and strategic reasons. The shortest viable touchdown run without any skidding requires variable braking pressure to manage the friction between the road surface and the braking tire at all times. Therefore, the control of the anti-skid braking system (ABS) must handle strong nonlinearity and unmeasurable disturbances, and must regulate the wheel slip ratio so that the braking system operates safely. This work proposes an active disturbance rejection control technique for the anti-skid braking system. The control law ensures bounded, manageable actuation, and the control algorithm keeps the closed-loop system operating near the peak of the stable region of the friction curve, thereby improving overall braking performance and safety. The stability of the proposed algorithm is proven primarily by means of Lyapunov-based methods, and its effectiveness is assessed through simulations on a semi-physical aircraft brake simulation platform. Full article
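For illustration, a linear active disturbance rejection step built around a second-order extended state observer might look as follows; the plant interface, gains, and bounds are placeholder assumptions, not the paper's tuned design.

    import numpy as np

    def adrc_step(y, r, z, u_prev, b0=50.0, beta1=60.0, beta2=900.0,
                  kp=8.0, dt=0.001, u_lim=1.0):
        """Illustrative linear ADRC step for wheel-slip regulation.
        A second-order extended state observer (ESO) estimates the slip z[0]
        and the lumped disturbance z[1] (e.g., tire-road friction variation);
        the control keeps slip at the reference r with bounded braking effort."""
        z1, z2 = z
        e = z1 - y
        z1 += dt * (z2 + b0 * u_prev - beta1 * e)   # observer: slip estimate
        z2 += dt * (-beta2 * e)                     # observer: disturbance estimate
        u = (kp * (r - z1) - z2) / b0               # disturbance-rejecting law
        u = float(np.clip(u, 0.0, u_lim))           # bounded braking pressure
        return np.array([z1, z2]), u

    # One simulation step: y is the measured slip ratio, r the slip
    # reference, z the observer state carried between steps.
    z, u = adrc_step(y=0.08, r=0.12, z=np.array([0.0, 0.0]), u_prev=0.0)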