Topic Editors

Prof. Dr. Jaroslaw Krzywanski
Department of Advanced Computational Methods, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 13/15 Armii Krajowej Av., 42-200 Czestochowa, Poland
Dr. Yunfei Gao
Shanghai Engineering Research Center of Coal Gasification, East China University of Science and Technology, Shanghai 200237, China
Dr. Marcin Sosnowski
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Karolina Grabowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Dorian Skrobek
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 13/15 Armii Krajowej Av., 42-200 Czestochowa, Poland
Dr. Ghulam Moeen Uddin
Department of Mechanical Engineering, University of Engineering & Technology, Lahore, Punjab 54890, Pakistan
Dr. Anna Kulakowska
Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, Armii Krajowej 13/15, 42-200 Czestochowa, Poland
Dr. Anna Zylka
Division of Advanced Computational Methods, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, 42-200 Czestochowa, Poland
Dr. Bachil El Fil
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Artificial Intelligence and Computational Methods: Modeling, Simulations and Optimization of Complex Systems

Abstract submission deadline
closed (30 September 2022)
Manuscript submission deadline
closed (20 October 2023)
Viewed by
130,650

Topic Information

Dear Colleagues,

Due to the increasing computational capability of current data-processing systems, new opportunities are emerging in the modeling, simulation, and optimization of complex systems and devices. Methods that were once too demanding and time-consuming to apply may now be considered when developing complete, sophisticated models in many areas of science and technology. Combining computational methods with AI algorithms makes it possible to conduct multi-threaded analyses and solve advanced, interdisciplinary problems. This article collection aims to bring together research on advances in the modeling, simulation, and optimization of complex systems. Original research as well as review articles and short communications, with a particular focus on (but not limited to) artificial intelligence and other computational methods, are welcome.

Prof. Dr. Jaroslaw Krzywanski
Dr. Yunfei Gao
Dr. Marcin Sosnowski
Dr. Karolina Grabowska
Dr. Dorian Skrobek
Dr. Ghulam Moeen Uddin
Dr. Anna Kulakowska
Dr. Anna Zylka
Dr. Bachil El Fil
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • artificial neural networks
  • deep learning
  • genetic and evolutionary algorithms
  • artificial immune systems
  • fuzzy logic
  • expert systems
  • bio-inspired methods
  • CFD
  • modeling
  • simulation
  • optimization
  • complex systems

Participating Journals

Journal Name                                 Abbrev.       Impact Factor   CiteScore   Launched   First Decision (median)   APC
Entropy                                      entropy       2.7             4.7         1999       20.8 days                 CHF 2600
Algorithms                                   algorithms    2.3             3.7         2008       15 days                   CHF 1600
Computation                                  computation   2.2             3.3         2013       18 days                   CHF 1800
Machine Learning and Knowledge Extraction    make          3.9             8.5         2019       19.9 days                 CHF 1800
Energies                                     energies      3.2             5.5         2008       16.1 days                 CHF 2600
Materials                                    materials     3.4             5.2         2008       13.9 days                 CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from misappropriation with a time-stamped preprint;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (74 papers)

1 page, 124 KiB
Correction
Correction: Zhang et al. Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace. Materials 2023, 16, 1164
by Xi Zhang, Guiyun Zhang, Dong Zhang and Liping Zhang
Materials 2024, 17(6), 1233; https://doi.org/10.3390/ma17061233 - 07 Mar 2024
Viewed by 327
Abstract
In consideration of the contributions to this work, Feng Qian unequivocally requests the removal of his name from the author list of this publication [...] Full article
33 pages, 3390 KiB  
Review
Distributed Learning in the IoT–Edge–Cloud Continuum
by Audris Arzovs, Janis Judvaitis, Krisjanis Nesenbergs and Leo Selavo
Mach. Learn. Knowl. Extr. 2024, 6(1), 283-315; https://doi.org/10.3390/make6010015 - 01 Feb 2024
Viewed by 1614
Abstract
The goal of the IoT–Edge–Cloud Continuum approach is to distribute computation and data loads across multiple types of devices taking advantage of the different strengths of each, such as proximity to the data source, data access, or computing power, while mitigating potential weaknesses. Most machine learning operations are currently concentrated on remote high-performance computing devices, such as the cloud, which leads to challenges related to latency, privacy, and other inefficiencies. Distributed learning approaches can address these issues by enabling the distribution of machine learning operations throughout the IoT–Edge–Cloud Continuum by incorporating Edge and even IoT layers into machine learning operations more directly. Approaches like transfer learning could help to transfer the knowledge from more performant IoT–Edge–Cloud Continuum layers to more resource-constrained devices, e.g., IoT. The implementation of these methods in machine learning operations, including the related data handling security and privacy approaches, is challenging and actively being researched. In this article, the distributed learning and transfer learning domains are reviewed, focusing on security, robustness, and privacy aspects and on their potential usage in the IoT–Edge–Cloud Continuum, including the tools available for implementing these methods. To achieve this, we have reviewed 145 sources, described the relevant methods and their attack vectors, and provided suggestions for mitigation. Full article

31 pages, 626 KiB  
Review
Economic Dispatch Optimization Strategies and Problem Formulation: A Comprehensive Review
by Fatemeh Marzbani and Akmal Abdelfatah
Energies 2024, 17(3), 550; https://doi.org/10.3390/en17030550 - 23 Jan 2024
Viewed by 1353
Abstract
Economic Dispatch Problems (EDP) refer to the process of determining the power output of generation units such that the electricity demand of the system is satisfied at a minimum cost while technical and operational constraints of the system are satisfied. This procedure is vital in the efficient energy management of electricity networks since it can ensure the reliable and efficient operation of power systems. As power systems transition from conventional to modern ones, new components and constraints are introduced to power systems, making the EDP increasingly complex. This highlights the importance of developing advanced optimization techniques that can efficiently handle these new complexities to ensure optimal operation and cost-effectiveness of power systems. This review paper provides a comprehensive exploration of the EDP, encompassing its mathematical formulation and the examination of commonly used problem formulation techniques, including single and multi-objective optimization methods. It also explores the progression of paradigms in economic dispatch, tracing the journey from traditional methods to contemporary strategies in power system management. The paper categorizes the commonly utilized techniques for solving EDP into four groups: conventional mathematical approaches, uncertainty modelling methods, artificial intelligence-driven techniques, and hybrid algorithms. It identifies critical research gaps, a predominant focus on single-case studies that limit the generalizability of findings, and the challenge of comparing research due to arbitrary system choices and formulation variations. The present paper calls for the implementation of standardized evaluation criteria and the inclusion of a diverse range of case studies to enhance the practicality of optimization techniques in the field. Full article
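As a minimal illustration of the classical single-objective formulation reviewed above, the sketch below solves a quadratic-cost dispatch with the equal incremental-cost (lambda-iteration) criterion; the generator data are invented for illustration and are not taken from the paper.

```python
# Economic dispatch with quadratic costs C_i(P) = a_i P^2 + b_i P + c_i.
# Equal incremental-cost rule: at the optimum (ignoring binding limits),
# dC_i/dP_i = 2 a_i P_i + b_i = lambda for every unit, so
# P_i = (lambda - b_i) / (2 a_i).  Bisect on lambda until demand is met.

def economic_dispatch(units, demand):
    """units: list of (a, b, pmin, pmax); returns list of outputs."""
    def outputs(lam):
        out = []
        for a, b, pmin, pmax in units:
            p = (lam - b) / (2.0 * a)
            out.append(min(max(p, pmin), pmax))  # clamp to unit limits
        return out

    lo, hi = 0.0, 1000.0
    for _ in range(200):                         # bisection on lambda
        lam = 0.5 * (lo + hi)
        if sum(outputs(lam)) > demand:
            hi = lam
        else:
            lo = lam
    return outputs(0.5 * (lo + hi))

units = [(0.008, 7.0, 10.0, 200.0),   # (a, b, Pmin, Pmax), illustrative
         (0.009, 6.3, 10.0, 150.0),
         (0.007, 6.8, 10.0, 180.0)]
dispatch = economic_dispatch(units, demand=300.0)
print(dispatch, sum(dispatch))
```

Total generation matches demand, and the marginal costs of all unconstrained units coincide at the common lambda, which is exactly the textbook optimality condition the review builds on.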

19 pages, 8547 KiB  
Article
Stepwise Identification Method of Thermal Load for Box Structure Based on Deep Learning
by Hongze Du, Qi Xu, Lizhe Jiang, Yufeng Bu, Wenbo Li and Jun Yan
Materials 2024, 17(2), 357; https://doi.org/10.3390/ma17020357 - 10 Jan 2024
Viewed by 483
Abstract
Accurate and rapid thermal load identification based on limited measurement points is crucial for spacecraft on-orbit monitoring. This study proposes a stepwise identification method based on deep learning for identifying structural thermal loads that efficiently map the local responses and overall thermal load of a box structure. To determine the location and magnitude of the thermal load accurately, the proposed method segments a structure into several subregions and applies a cascade of deep learning models to gradually reduce the solution domain. The generalization ability of the model is significantly enhanced by the inclusion of boundary conditions in the deep learning models. In this study, a large simulated dataset was generated by varying the load application position and intensity for each sample. The input variables encompass a small set of structural displacements, while the outputs include parameters related to the thermal load, such as the position and magnitude of the load. Ablation experiments are conducted to validate the effectiveness of this approach. The results show that this method reduces the identification error of the thermal load parameters by more than 45% compared with a single deep learning network. The proposed method holds promise for optimizing the design and analysis of spacecraft structures, contributing to improved performance and reliability in future space missions. Full article

13 pages, 6702 KiB  
Communication
Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm
by Yi Wang, Yating Xu, Tianjian Li, Tao Zhang and Jian Zou
Algorithms 2023, 16(12), 574; https://doi.org/10.3390/a16120574 - 18 Dec 2023
Viewed by 1226
Abstract
Image deblurring based on sparse regularization has garnered significant attention, but there are still certain limitations that need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of the traditional iterative algorithm also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The utilization of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in terms of efficiency and performance by utilizing the state-of-the-art denoiser to replace the proximal operator. Numerical experiments verify the performance of our proposed algorithm in image deblurring. Full article
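The estimation bias the abstract mentions can be seen directly at the level of proximal operators. The sketch below contrasts convex soft-thresholding with the classical firm threshold, one example of a CNC-style operator that leaves large coefficients unbiased; it is illustrative only and is not the paper's exact regularizer or PnP algorithm.

```python
import numpy as np

# Soft-thresholding is the proximal operator of the convex l1 penalty;
# it shrinks every surviving coefficient by lam, biasing large values.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# The firm threshold behaves like a CNC penalty's proximal map: small
# values are zeroed, mid-range values shrunk, large values untouched.
def firm_threshold(x, lam, mu):
    assert mu > lam
    y = np.where(np.abs(x) <= lam, 0.0, x)
    mid = (np.abs(x) > lam) & (np.abs(x) <= mu)
    y = np.where(mid, np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam), y)
    return y  # entries with |x| > mu pass through unchanged

x = np.array([0.2, 0.8, 3.0])
print(soft_threshold(x, 0.5))        # large entry biased: 3.0 -> 2.5
print(firm_threshold(x, 0.5, 1.0))   # large entry unbiased: 3.0 -> 3.0
```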

19 pages, 8847 KiB  
Review
Review of Approaches to Minimise the Cost of Simulation-Based Optimisation for Liquid Composite Moulding Processes
by Boon Xian Chai, Boris Eisenbart, Mostafa Nikzad, Bronwyn Fox, Yuqi Wang, Kyaw Hlaing Bwar and Kaiyu Zhang
Materials 2023, 16(24), 7580; https://doi.org/10.3390/ma16247580 - 09 Dec 2023
Cited by 1 | Viewed by 816
Abstract
The utilisation of numerical process simulation has greatly facilitated the challenging task of liquid composite moulding (LCM) process optimisation, providing ease of solution evaluation at a significantly reduced cost compared to complete reliance on physical prototyping. However, due to the process complexity, such process simulation is still considerably expensive at present. In this paper, cost-saving approaches to minimising the computational cost of simulation-based optimisation for LCM processes are compiled and discussed. Their specific applicability, efficacy, and suitability for various optimisation/moulding scenarios are explored in detail. A comprehensive analysis and assimilation of their operation, alongside their applicability to the problem domain of interest, are presented to further complement and contribute to future simulation-based optimisation capabilities for composite moulding processes. The importance of balancing the cost-accuracy trade-off is also repeatedly emphasised, allowing for substantial cost reductions while ensuring a desirable level of optimisation reliability. Full article

23 pages, 2752 KiB  
Article
A Stochastic Load Forecasting Approach to Prevent Transformer Failures and Power Quality Issues Amid the Evolving Electrical Demands Facing Utilities
by John O’Donnell and Wencong Su
Energies 2023, 16(21), 7251; https://doi.org/10.3390/en16217251 - 25 Oct 2023
Viewed by 686
Abstract
New technologies, such as electric vehicles, rooftop solar, and behind-the-meter storage, will lead to increased variation in electrical load, and the location and time of the penetration of these technologies are uncertain. Power quality, reliability, and protection issues can result if electric utilities do not consider the probability of load scenarios that have not yet occurred. The authors' approach to addressing these concerns started with collecting the electrical load data for an expansive and diverse set of distribution transformers. This provided approximately two-and-a-half years of data that were used to develop new methods that will enable engineers to address emerging issues. The efficacy of the methods was then assessed with a real-world test dataset that was not used in the development of the new methods. This resulted in an approach to efficiently generate stochastic electrical load forecasts for elements of distribution circuits. Methods are also described that use those forecasts for engineering analysis to predict the likelihood of distribution transformer failures and power quality events. All of the transformers identified as most likely to fail either did fail or revealed a data-correction opportunity. The accuracy of the power quality results was 92% while allowing for a balance between measures of efficiency and customer satisfaction. Full article

24 pages, 1788 KiB  
Article
Multi-Objective Optimization of Thin-Walled Composite Axisymmetric Structures Using Neural Surrogate Models and Genetic Algorithms
by Bartosz Miller and Leonard Ziemiański
Materials 2023, 16(20), 6794; https://doi.org/10.3390/ma16206794 - 20 Oct 2023
Cited by 1 | Viewed by 899
Abstract
Composite shells find diverse applications across industries due to their high strength-to-weight ratio and tailored properties. Optimizing parameters such as matrix-reinforcement ratio and orientation of the reinforcement is crucial for achieving the desired performance metrics. Stochastic optimization, specifically genetic algorithms, offer solutions, yet their computational intensity hinders widespread use. Surrogate models, employing neural networks, emerge as efficient alternatives by approximating objective functions and bypassing costly computations. This study investigates surrogate models in multi-objective optimization of composite shells. It incorporates deep neural networks to approximate relationships between input parameters and key metrics, enabling exploration of design possibilities. Incorporating mode shape identification enhances accuracy, especially in multi-criteria optimization. Employing network ensembles strengthens reliability by mitigating model weaknesses. Efficiency analysis assesses required computations, managing the trade-off between cost and accuracy. Considering complex input parameters and comparing against the Monte Carlo approach further demonstrates the methodology’s efficacy. This work showcases the successful integration of network ensembles employed as surrogate models and mode shape identification, enhancing multi-objective optimization in engineering applications. The approach’s efficiency in handling intricate designs and enhancing accuracy has broad implications for optimization methodologies. Full article
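A toy sketch of the surrogate-plus-genetic-algorithm workflow described above, assuming a least-squares quadratic surrogate in place of the paper's neural-network ensembles; the objective function and all parameters are invented for illustration.

```python
import random
import numpy as np

random.seed(1)

# Surrogate-assisted optimization in miniature: fit a cheap quadratic
# surrogate to a handful of "expensive" objective evaluations, then let
# a simple genetic algorithm search the surrogate instead of the true
# objective, avoiding further costly evaluations.

def expensive(x):                      # stand-in for a costly simulation
    return (x - 2.0) ** 2 + 1.0

xs = np.linspace(-5.0, 5.0, 5)         # five "expensive" samples
ys = np.array([expensive(x) for x in xs])
a, b, c = np.polyfit(xs, ys, 2)        # least-squares quadratic surrogate
surrogate = lambda x: a * x * x + b * x + c

# Tiny GA: truncation selection plus Gaussian mutation on the surrogate.
pop = [random.uniform(-5.0, 5.0) for _ in range(20)]
for _ in range(40):
    pop = sorted(pop, key=surrogate)[:10]              # keep best half
    pop += [p + random.gauss(0.0, 0.3) for p in pop]   # mutated offspring
best = min(pop, key=surrogate)
print(best)   # should land near the true optimum x = 2
```

The key design trade-off the paper analyses appears even here: the surrogate is only as good as its training samples, so its accuracy must be balanced against the number of expensive evaluations spent building it.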

23 pages, 1188 KiB  
Article
Entropy-Aware Time-Varying Graph Neural Networks with Generalized Temporal Hawkes Process: Dynamic Link Prediction in the Presence of Node Addition and Deletion
by Bahareh Najafi, Saeedeh Parsaeefard and Alberto Leon-Garcia
Mach. Learn. Knowl. Extr. 2023, 5(4), 1359-1381; https://doi.org/10.3390/make5040069 - 04 Oct 2023
Viewed by 1409
Abstract
This paper addresses the problem of learning temporal graph representations, which capture the changing nature of complex evolving networks. Existing approaches mainly focus on adding new nodes and edges to capture dynamic graph structures. However, to achieve more accurate representation of graph evolution, we consider both the addition and deletion of nodes and edges as events. These events occur at irregular time scales and are modeled using temporal point processes. Our goal is to learn the conditional intensity function of the temporal point process to investigate the influence of deletion events on node representation learning for link-level prediction. We incorporate network entropy, a measure of node and edge significance, to capture the effect of node deletion and edge removal in our framework. Additionally, we leveraged the characteristics of a generalized temporal Hawkes process, which considers the inhibitory effects of events where past occurrences can reduce future intensity. This framework enables dynamic representation learning by effectively modeling both addition and deletion events in the temporal graph. To evaluate our approach, we utilize autonomous system graphs, a family of inhomogeneous sparse graphs with instances of node and edge additions and deletions, in a link prediction task. By integrating these enhancements into our framework, we improve the accuracy of dynamic link prediction and enable better understanding of the dynamic evolution of complex networks. Full article
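The generalized Hawkes intensity with inhibitory effects described above can be sketched as follows; the exponential kernel and the event data are illustrative assumptions, not the paper's fitted model.

```python
import math

# Conditional intensity of a (generalized) Hawkes process:
#   lambda(t) = max(0, mu + sum_{t_i < t} alpha_i * exp(-beta * (t - t_i)))
# Negative alpha_i model inhibition (e.g., deletion events suppressing
# future link formation); clamping at zero keeps the intensity valid.

def hawkes_intensity(t, events, mu, beta):
    """events: list of (t_i, alpha_i); only t_i < t contribute."""
    excitation = sum(a * math.exp(-beta * (t - ti))
                     for ti, a in events if ti < t)
    return max(0.0, mu + excitation)

events = [(1.0, 0.8), (2.0, -0.5)]   # one exciting, one inhibiting event
print(hawkes_intensity(2.5, events, mu=0.2, beta=1.0))
```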

19 pages, 2041 KiB  
Article
Predicting the Long-Term Dependencies in Time Series Using Recurrent Artificial Neural Networks
by Cristian Ubal, Gustavo Di-Giorgi, Javier E. Contreras-Reyes and Rodrigo Salas
Mach. Learn. Knowl. Extr. 2023, 5(4), 1340-1358; https://doi.org/10.3390/make5040068 - 02 Oct 2023
Cited by 1 | Viewed by 2195
Abstract
Long-term dependence is an essential feature for the predictability of time series. Estimating the parameter that describes long memory is essential to describing the behavior of time series models. However, most long memory estimation methods assume that this parameter has a constant value throughout the time series, and do not consider that the parameter may change over time. In this work, we propose an automated methodology that combines the estimation methodologies of the fractional differentiation parameter (and/or Hurst parameter) with its application to Recurrent Neural Networks (RNNs) in order for said networks to learn and predict long memory dependencies from information obtained in nonlinear time series. The proposal combines three methods that allow for better approximation in the prediction of the values of the parameters for each one of the windows obtained, using Recurrent Neural Networks as an adaptive method to learn and predict the dependencies of long memory in Time Series. For the RNNs, we have evaluated four different architectures: the Simple RNN, LSTM, the BiLSTM, and the GRU. These models are built from blocks with gates controlling the cell state and memory. We evaluated the proposed approach on both synthetic and real-world datasets, using Whittle's estimates of the Hurst parameter classically obtained in each window. For the synthetic data, we simulated ARFIMA models to generate several time series by varying the fractional differentiation parameter. The real-world IPSA stock option index and Tree Ring time series datasets were also evaluated. All of the results show that the proposed approach can predict the Hurst exponent with good performance by selecting the optimal window size and overlap change.
Full article
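As a sketch of window-based Hurst estimation, the following implements the classical rescaled-range (R/S) estimator, a simpler alternative to the Whittle estimates used in the paper; the window sizes are arbitrary choices for illustration.

```python
import numpy as np

# Rescaled-range (R/S) estimate of the Hurst exponent: regress
# log(R/S) on log(window size).  H ~ 0.5 for a memoryless series;
# H > 0.5 indicates long memory, H < 0.5 anti-persistence.

def rs_hurst(x):
    n = len(x)
    sizes = [s for s in (8, 16, 32, 64, 128) if s <= n // 2]
    log_n, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            w = x[start:start + s]
            z = np.cumsum(w - w.mean())     # cumulative deviations
            r = z.max() - z.min()           # range of the cumulative sum
            sd = w.std()
            if sd > 0:
                rs_vals.append(r / sd)
        log_n.append(np.log(s))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)  # slope estimates H
    return slope

rng = np.random.default_rng(0)
h = rs_hurst(rng.standard_normal(4096))
print(h)   # white noise: estimate should be in the vicinity of 0.5
```

Applying this per sliding window, as the paper does with Whittle's estimator, yields the sequence of time-varying memory parameters that the RNN is then trained to predict.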

15 pages, 8626 KiB  
Article
Optimization and Prediction of Different Building Forms for Thermal Energy Performance in the Hot Climate of Cairo Using Genetic Algorithm and Machine Learning
by Amany Khalil, Anas M. Hosney Lila and Nouran Ashraf
Computation 2023, 11(10), 192; https://doi.org/10.3390/computation11100192 - 02 Oct 2023
Cited by 2 | Viewed by 1520
Abstract
The climate change crisis has resulted in the need to use sustainable methods in architectural design, including building form and orientation decisions that can save a significant amount of energy consumed by a building. Several previous studies have optimized building form and envelope for energy performance, but the isolated effect of varieties of possible architectural forms for a specific climate has not been fully investigated. This paper proposes four novel office building form generation methods (the polygon that varies between pentagon and decagon; the pixels that are complex cubic forms; the letters including H, L, U, T; cross and complex cubic forms; and the round family including circular and oval forms) and evaluates their annual thermal energy use intensity (EUI) for Cairo (hot climate). Results demonstrated the applicability of the proposed methods in enhancing the energy performance of the new forms in comparison to the base case. The results of the optimizations are compared together, and the four families are discussed in reference to their different architectural aspects and performance. Scatterplots are developed for the round family (highest performance) to test the impact of each dynamic parameter on EUI. The round family optimization process takes a noticeably high calculation time in comparison to other families. Therefore, an Artificial Neural Network (ANN) prediction model is developed for the round family after simulating 1726 iterations. Training of 1200 configurations is used to predict annual EUI for the remaining 526 iterations. The ANN predicted values are compared against the trained to determine the time saved and accuracy. Full article

26 pages, 466 KiB  
Article
Iterated Clique Reductions in Vertex Weighted Coloring for Large Sparse Graphs
by Yi Fan, Zaijun Zhang, Quan Yu, Yongxuan Lai, Kaile Su, Yiyuan Wang, Shiwei Pan and Longin Jan Latecki
Entropy 2023, 25(10), 1376; https://doi.org/10.3390/e25101376 - 24 Sep 2023
Viewed by 899
Abstract
The Minimum Vertex Weighted Coloring (MinVWC) problem is an important generalization of the classic Minimum Vertex Coloring (MinVC) problem which is NP-hard. Given a simple undirected graph G=(V,E), the MinVC problem is to find a coloring s.t. any pair of adjacent vertices are assigned different colors and the number of colors used is minimized. The MinVWC problem associates each vertex with a positive weight and defines the weight of a color to be the weight of its heaviest vertices; the goal is then to find a coloring that minimizes the sum of weights over all colors. Among various approaches, reduction is an effective one. It tries to obtain a subgraph whose optimal solutions can conveniently be extended into optimal ones for the whole graph, without costly branching. In this paper, we propose a reduction algorithm based on maximal clique enumeration. More specifically, our algorithm utilizes a certain proportion of maximal cliques and obtains lower bounds in order to perform reductions. It alternates between clique sampling and graph reductions and consists of three successive procedures: promising clique reductions, better bound reductions, and post reductions. Experimental results show that our algorithm returns considerably smaller subgraphs for numerous large benchmark graphs, compared to the most recent method named RedLS. Also, we evaluate individual impacts and some practical properties of our algorithm. Furthermore, we prove a theorem indicating that the reduction effects of our algorithm are equivalent to those of a counterpart which enumerates all maximal cliques in the whole graph if the run time is sufficiently long. Full article
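The MinVWC objective defined above (each color class costs the weight of its heaviest vertex) can be computed directly; the small path graph below is a made-up example, not one of the paper's benchmarks.

```python
# Weight of a vertex coloring in MinVWC: each color class costs the
# weight of its heaviest vertex; the objective sums over color classes.

def coloring_weight(coloring, weights):
    """coloring: dict vertex -> color; weights: dict vertex -> weight."""
    heaviest = {}
    for v, c in coloring.items():
        heaviest[c] = max(heaviest.get(c, 0), weights[v])
    return sum(heaviest.values())

def is_proper(coloring, edges):
    """A coloring is proper iff adjacent vertices get different colors."""
    return all(coloring[u] != coloring[v] for u, v in edges)

# Path a-b-c: two proper colorings with different MinVWC objectives.
edges = [("a", "b"), ("b", "c")]
weights = {"a": 5, "b": 2, "c": 4}
c1 = {"a": 0, "b": 1, "c": 0}   # classes {a, c} and {b}: cost 5 + 2 = 7
c2 = {"a": 0, "b": 1, "c": 2}   # three classes: cost 5 + 2 + 4 = 11
print(is_proper(c1, edges), coloring_weight(c1, weights))
print(is_proper(c2, edges), coloring_weight(c2, weights))
```

Note how grouping the two heavy vertices into one class makes the second coloring strictly worse even though both are proper; this is the structure the clique-based reductions exploit.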

27 pages, 1127 KiB  
Article
Automatic Genre Identification for Robust Enrichment of Massive Text Collections: Investigation of Classification Methods in the Era of Large Language Models
by Taja Kuzman, Igor Mozetič and Nikola Ljubešić
Mach. Learn. Knowl. Extr. 2023, 5(3), 1149-1175; https://doi.org/10.3390/make5030059 - 12 Sep 2023
Viewed by 2118
Abstract
Massive text collections are the backbone of large language models, the main ingredient of the current significant progress in artificial intelligence. However, as these collections are mostly collected using automatic methods, researchers have few insights into what types of texts they consist of. Automatic genre identification is a text classification task that enriches texts with genre labels, such as promotional and legal, providing meaningful insights into the composition of these large text collections. In this paper, we evaluate machine learning approaches for the genre identification task based on their generalizability across different datasets to assess which model is the most suitable for the downstream task of enriching large web corpora with genre information. We train and test multiple fine-tuned BERT-like Transformer-based models and show that merging different genre-annotated datasets yields superior results. Moreover, we explore the zero-shot capabilities of large GPT Transformer models in this task and discuss the advantages and disadvantages of the zero-shot approach. We also publish the best-performing fine-tuned model that enables automatic genre annotation in multiple languages. In addition, to promote further research in this area, we plan to share, upon request, a new benchmark for automatic genre annotation, ensuring the non-exposure of the latest large language models. Full article

18 pages, 6558 KiB  
Article
Artificial Neural Networks for Predicting the Diameter of Electrospun Nanofibers Synthesized from Solutions/Emulsions of Biopolymers and Oils
by Guadalupe Cuahuizo-Huitzil, Octavio Olivares-Xometl, María Eugenia Castro, Paulina Arellanes-Lozada, Francisco J. Meléndez-Bustamante, Ivo Humberto Pineda Torres, Claudia Santacruz-Vázquez and Verónica Santacruz-Vázquez
Materials 2023, 16(16), 5720; https://doi.org/10.3390/ma16165720 - 21 Aug 2023
Viewed by 868
Abstract
In the present work, different configurations of artificial neural networks (ANNs) were analyzed in order to predict the experimental diameter of nanofibers produced by means of the electrospinning process and employing polyvinyl alcohol (PVA), PVA/chitosan (CS) and PVA/aloe vera (Av) solutions. In addition, gelatin type A (GT)/alpha-tocopherol (α-TOC), PVA/olive oil (OO), PVA/orange essential oil (OEO), and PVA/anise oil (AO) emulsions were used. The experimental diameters of the nanofibers electrospun from the different tested systems were obtained using scanning electron microscopy (SEM) and ranged from 93.52 nm to 352.1 nm. Of the three studied ANNs, the one that displayed the best prediction results was the one with three hidden layers with the flow rate, voltage, viscosity, and conductivity variables. The calculation error between the experimental and calculated diameters was 3.79%. Additionally, the correlation coefficient (R2) was identified as a function of the ANN configuration, obtaining values of 0.96, 0.98, and 0.98 for one, two, and three hidden layer(s), respectively. It was found that an ANN configuration having more than three hidden layers did not improve the prediction of the experimental diameter of synthesized nanofibers. Full article
16 pages, 2395 KiB  
Review
Physical and Mathematical Models of Micro-Explosions: Achievements and Directions of Improvement
by Dmitrii V. Antonov, Roman M. Fedorenko, Leonid S. Yanovskiy and Pavel A. Strizhak
Energies 2023, 16(16), 6034; https://doi.org/10.3390/en16166034 - 17 Aug 2023
Cited by 3 | Viewed by 1051
Abstract
The environmental, economic, and energy problems of the modern world motivate the development of alternative fuel technologies. Multifuel technology can help reduce the carbon footprint and waste from the raw materials sector as well as slow down the depletion of energy resources. However, there are limitations to the active use of multifuel mixtures in real power plants and engines because they are difficult to spray in combustion chambers and require secondary atomization. Droplet micro-explosion seems the most promising secondary atomization technology in terms of its integral characteristics. This review paper outlines the most interesting approaches to modeling micro-explosions using in-house computer codes and commercial software packages. A physical model of a droplet micro-explosion based on experimental data was analyzed to highlight the schemes and mathematical expressions describing the critical conditions of parent droplet atomization. Approaches are presented that can predict the number, sizes, velocities, and trajectories of emerging child droplets. We also list the empirical data necessary for developing advanced fragmentation models. Finally, we outline the main growth areas for micro-explosion models catering for the needs of spray technology. Full article
31 pages, 3978 KiB  
Article
Identifying the Regions of a Space with the Self-Parameterized Recursively Assessed Decomposition Algorithm (SPRADA)
by Dylan Molinié, Kurosh Madani, Véronique Amarger and Abdennasser Chebira
Mach. Learn. Knowl. Extr. 2023, 5(3), 979-1009; https://doi.org/10.3390/make5030051 - 04 Aug 2023
Viewed by 1235
Abstract
This paper introduces a non-parametric methodology based on classical unsupervised clustering techniques to automatically identify the main regions of a space, without requiring the target number of clusters, so as to identify the major regular states of unknown industrial systems. Indeed, useful knowledge on real industrial processes entails the identification of their regular states and their historically encountered anomalies. Since both should form compact and salient groups of data, unsupervised clustering generally performs this task fairly accurately; however, it often requires the number of clusters upstream, knowledge which is rarely available. As such, the proposed algorithm performs a first partitioning of the space, estimates the integrity of the resulting clusters, and recursively splits them until every cluster reaches an acceptable integrity; finally, a merging step based on the clusters’ empirical distributions refines the partitioning. Applied to real industrial data obtained in the scope of a European project, this methodology proved able to automatically identify the main regular states of the system. The results show the robustness of the proposed approach in the fully automatic and non-parametric identification of the main regions of a space, knowledge which is useful for industrial anomaly detection and behavioral modeling. Full article
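The split-until-integrity loop can be sketched with a toy one-dimensional example. This is an illustrative stand-in, not SPRADA itself: the two-means step, the spread-based "integrity" score, and the tolerance are simplifications, and the paper's final distribution-based merging step is omitted.

```python
def two_means(points, iters=20):
    """Tiny 1D 2-means: split a set of values into two groups."""
    c = [min(points), max(points)]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            groups[abs(p - c[1]) < abs(p - c[0])].append(p)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return groups

def spread(g):
    """Crude 'integrity' score: the cluster's range (smaller = more compact)."""
    return max(g) - min(g) if g else 0.0

def split_recursively(points, tol=1.0):
    """Split a cluster in two, then recurse on any part whose spread is still
    too large -- a stand-in for SPRADA's integrity-driven recursion."""
    if spread(points) <= tol or len(points) < 2:
        return [points]
    a, b = two_means(points)
    if not a or not b:
        return [points]
    return split_recursively(a, tol) + split_recursively(b, tol)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 9.8, 9.9, 10.0]
clusters = split_recursively(data, tol=0.5)
```

No cluster count is supplied anywhere: the recursion depth is driven entirely by the integrity criterion, which is the property the paper exploits.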
23 pages, 5533 KiB  
Article
Optimization of Circulating Fluidized Bed Boiler Combustion Key Control Parameters Based on Machine Learning
by Lei Han, Lingmei Wang, Hairui Yang, Chengzhen Jia, Enlong Meng, Yushan Liu and Shaoping Yin
Energies 2023, 16(15), 5674; https://doi.org/10.3390/en16155674 - 28 Jul 2023
Cited by 1 | Viewed by 954
Abstract
When a coal-fired circulating fluidized bed unit participates in the peak regulation of the power grid, the thermal automatic control system assists the operator in an adjustment mode that focuses on pollutant control and ignores economy, so the unit’s operating performance retains considerable untapped potential. The high-dimensional and coupling-related data characteristics of circulating fluidized bed boilers place more refined and demanding requirements on combustion optimization analysis and open-loop guidance operation. Therefore, this paper proposes a combustion optimization method that incorporates neighborhood rough set machine learning. This method first reduces the control parameters affecting multi-objective combustion optimization with the neighborhood rough set algorithm, which fully considers the correlation of each variable combination, and then establishes a multi-objective combustion optimization prediction model combined with the online calculation of boiler thermal efficiency. Finally, the NSGA-II algorithm optimizes the setpoints of the control parameters of the boiler combustion system. The results show that this method reduces the number of control commands involved in combustion optimization adjustment from 26 to 11. At the same time, compared with the optimization results obtained using traditional combustion optimization methods under high, medium, and medium-low load conditions, the boiler thermal efficiency increased by 0.07%, decreased by 0.02%, and increased by 0.55%, respectively, and the nitrogen oxide emission concentration decreased by 5.02 mg/Nm3, 7.77 mg/Nm3, and 7.03 mg/Nm3, respectively.
The implementation of this method helps to better balance the economy and pollutant discharge of the boiler combustion system under variable working conditions, guides the operators to adjust the combustion more accurately, and effectively reduces ineffective energy consumption in the adjustment process. The proposal and application of this method lay the foundation for the construction of smart power plants. Full article
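At the core of the NSGA-II step mentioned above is non-dominated sorting: a candidate setpoint vector survives when no other candidate is at least as good in every objective and strictly better in one. A minimal sketch of that filtering with made-up objective values (an efficiency penalty and NOx concentration, both to be minimized; the numbers are illustrative, not boiler data):

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (all objectives minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated candidates (the first NSGA-II front)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# objectives per candidate setpoint: (heat-rate penalty, NOx emission) -- illustrative
candidates = [(1.0, 50.0), (0.8, 60.0), (1.2, 45.0), (1.5, 70.0), (0.9, 55.0)]
front = pareto_front(candidates)
```

The full algorithm additionally ranks later fronts and uses crowding distance to preserve diversity, but this dominance test is the piece that trades thermal efficiency against NOx emissions.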
21 pages, 8324 KiB  
Article
Attention-Focused Machine Learning Method to Provide the Stochastic Load Forecasts Needed by Electric Utilities for the Evolving Electrical Distribution System
by John O’Donnell and Wencong Su
Energies 2023, 16(15), 5661; https://doi.org/10.3390/en16155661 - 27 Jul 2023
Cited by 1 | Viewed by 1558
Abstract
Greater variation in electrical load should be expected in the future due to the increasing penetration of electric vehicles, photovoltaics, storage, and other technologies. The adoption of these technologies will vary by area and time, and if not identified early and managed by electric utilities, these new customer needs could result in power quality, reliability, and protection issues. Furthermore, comprehensively studying the uncertainty and variation in the load on circuit elements over periods of several months has the potential to increase the efficient use of traditional resources, non-wires alternatives, and microgrids to better serve customers. To increase the understanding of electrical load, the authors propose a multistep, attention-focused, and efficient machine learning process to provide probabilistic forecasts of distribution transformer load for several months into the future. The method uses the solar irradiance, temperature, dew point, time of day, and other features to achieve up to an 86% coefficient of determination (R2). Full article
21 pages, 1765 KiB  
Article
Optimal Data-Driven Modelling of a Microbial Fuel Cell
by Mojeed Opeyemi Oyedeji, Abdullah Alharbi, Mujahed Aldhaifallah and Hegazy Rezk
Energies 2023, 16(12), 4740; https://doi.org/10.3390/en16124740 - 15 Jun 2023
Cited by 5 | Viewed by 1282
Abstract
Microbial fuel cells (MFCs) are biocells that use microorganisms as biocatalysts to break down organic matter and convert chemical energy into electrical energy. Presently, the application of MFCs as alternative energy sources is limited by their low power output. Optimization of MFCs is very important to harness optimum energy. In this study, we develop optimal data-driven models for a typical MFC synthesized from polymethylmethacrylate and two graphite plates using machine learning algorithms including support vector regression (SVR), artificial neural networks (ANNs), Gaussian process regression (GPR), and ensemble learners. Power density and output voltage were modeled from two different datasets; the first dataset has current density and anolyte concentration as features, while the second dataset considers current density and chemical oxygen demand as features. Hyperparameter optimization was carried out on each of the considered machine learning-based models using Bayesian optimization, grid search, and random search to arrive at the best possible models for the MFC. Models were derived for power density and output voltage with 99% accuracy in testing set evaluations. Full article
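The grid-search step can be illustrated with a toy regressor: score every candidate hyperparameter on a held-out split and keep the best. This sketch uses a simple nearest-neighbour regressor on synthetic y = x² data rather than the paper's SVR/ANN/GPR models; the data, the candidate list, and the helper names are all assumptions for illustration.

```python
def knn_predict(train, x, k):
    """Average the targets of the k training points nearest to x."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neighbours) / k

def grid_search(train, valid, ks):
    """Exhaustive (grid) search: evaluate every candidate k on a held-out
    split and return the one with the lowest validation MSE."""
    def mse(k):
        return sum((knn_predict(train, x, k) - y) ** 2 for x, y in valid) / len(valid)
    return min(ks, key=mse)

train = [(i / 10, (i / 10) ** 2) for i in range(11)]      # samples of y = x^2
valid = [(0.15, 0.0225), (0.55, 0.3025), (0.85, 0.7225)]  # held-out points
best_k = grid_search(train, valid, ks=[1, 2, 3, 5])
```

Bayesian and random search differ only in how the candidate list is generated; the evaluate-on-validation-data loop is the same.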
37 pages, 2255 KiB  
Systematic Review
Systematic Review of Recommendation Systems for Course Selection
by Shrooq Algarni and Frederick Sheldon
Mach. Learn. Knowl. Extr. 2023, 5(2), 560-596; https://doi.org/10.3390/make5020033 - 06 Jun 2023
Cited by 2 | Viewed by 5997
Abstract
Course recommender systems play an increasingly pivotal role in the educational landscape, driving personalization and informed decision-making for students. However, these systems face significant challenges, including managing a large and dynamic decision space and addressing the cold start problem for new students. This article endeavors to provide a comprehensive review and background to fully understand recent research on course recommender systems and their impact on learning. We present a detailed summary of empirical data supporting the use of these systems in educational strategic planning. We examined case studies conducted over the previous six years (2017–2022), with a focus on 35 key studies selected from 1938 academic papers found using the CADIMA tool. This systematic literature review (SLR) assesses various recommender system methodologies used to suggest course selection tracks, aiming to determine the most effective evidence-based approach. Full article
15 pages, 3112 KiB  
Article
Spare Parts Demand Forecasting Method Based on Intermittent Feature Adaptation
by Lilin Fan, Xia Liu, Wentao Mao, Kai Yang and Zhaoyu Song
Entropy 2023, 25(5), 764; https://doi.org/10.3390/e25050764 - 07 May 2023
Cited by 1 | Viewed by 2279
Abstract
The demand for complex equipment aftermarket parts is mostly sporadic, showing typical intermittent characteristics as a whole, so a single demand series carries insufficient information about its evolution, which restricts the prediction performance of existing methods. To solve this problem, this paper proposes a prediction method based on intermittent feature adaptation from the perspective of transfer learning. Firstly, to extract the intermittent features of the demand series, an intermittent time series domain partitioning algorithm is proposed: the demand occurrence times and demand intervals in the series are mined, metrics are constructed from them, and a hierarchical clustering algorithm divides all the series into different sub-source domains. Secondly, the intermittent and temporal characteristics of the sequence are combined to construct a weight vector, and the learning of common information between domains is accomplished by weighting the distance between the output features of each cycle across domains. Finally, experiments are conducted on the actual after-sales datasets of two complex equipment manufacturing enterprises. Compared with various prediction methods, the proposed method can effectively predict future demand trends, and the stability and accuracy of its predictions are significantly improved. Full article
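The intermittency metrics mined from each series can be sketched with a standard pair from the intermittent-demand literature: the average inter-demand interval (ADI) and the squared coefficient of variation of the non-zero demand sizes (CV²). These are a conventional choice, not necessarily the exact metrics the authors construct.

```python
def intermittency_features(series):
    """ADI (average gap between non-zero demands, in periods) and CV^2
    (squared coefficient of variation of the non-zero demand sizes)."""
    nz = [(i, v) for i, v in enumerate(series) if v > 0]
    sizes = [v for _, v in nz]
    gaps = [j - i for (i, _), (j, _) in zip(nz, nz[1:])]
    adi = sum(gaps) / len(gaps) if gaps else float("inf")
    mean = sum(sizes) / len(sizes)
    var = sum((s - mean) ** 2 for s in sizes) / len(sizes)
    return adi, var / mean ** 2

# a spare-parts demand history: mostly zeros, occasional orders (illustrative)
demand = [0, 0, 4, 0, 0, 0, 5, 0, 0, 3, 0, 0, 0, 0, 6]
adi, cv2 = intermittency_features(demand)
intermittent = adi >= 1.32        # a commonly used cut-off for "intermittent"
```

Feeding such per-series features to hierarchical clustering then groups series with similar intermittency patterns into candidate sub-source domains.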
13 pages, 747 KiB  
Article
A Reinforcement Learning Approach for Scheduling Problems with Improved Generalization through Order Swapping
by Deepak Vivekanandan, Samuel Wirth, Patrick Karlbauer and Noah Klarmann
Mach. Learn. Knowl. Extr. 2023, 5(2), 418-430; https://doi.org/10.3390/make5020025 - 29 Apr 2023
Cited by 2 | Viewed by 2031
Abstract
The scheduling of production resources (such as associating jobs to machines) plays a vital role in the manufacturing industry, not only for saving energy but also for increasing overall efficiency. Among the different job scheduling problems, the Job Shop Scheduling Problem (JSSP) is addressed in this work. The JSSP falls into the category of NP-hard Combinatorial Optimization Problems (COPs), for which solving the problem through exhaustive search becomes unfeasible. Simple heuristics such as First-In, First-Out or Largest Processing Time First and metaheuristics such as tabu search are often adopted to solve the problem by truncating the search space. These methods become inefficient for large problem sizes, as their solutions are either far from the optimum or too time-consuming to obtain. In recent years, research towards using Deep Reinforcement Learning (DRL) to solve COPs has gained interest and has shown promising results in terms of solution quality and computational efficiency. In this work, we provide a novel DRL approach to solving the JSSP, examining the objectives of generalization and solution effectiveness. In particular, we employ the Proximal Policy Optimization (PPO) algorithm, which adopts the policy-gradient paradigm and is found to perform well in the constrained dispatching of jobs. We incorporated a new method called the Order Swapping Mechanism (OSM) in the environment to achieve better generalized learning of the problem. The performance of the presented approach is analyzed in depth using a set of available benchmark instances and comparing our results with the work of other groups. Full article
22 pages, 4053 KiB  
Article
An Optimal Scheduling Method for an Integrated Energy System Based on an Improved k-Means Clustering Algorithm
by Fan Li, Jingxi Su and Bo Sun
Energies 2023, 16(9), 3713; https://doi.org/10.3390/en16093713 - 26 Apr 2023
Cited by 3 | Viewed by 1032
Abstract
This study proposes an optimal scheduling method for complex integrated energy systems. The proposed method employs a heuristic algorithm to maximize the system's energy, economy, and environment indices and optimize the system operation plan. It uses k-means combined with box plots (Imk-means) to improve the convergence speed of the heuristic algorithm by forming its initial conditions, thus enhancing the optimization scheduling speed. First, considering the system source and load factors, the Imk-means is used to find the typical and extreme days in a historical optimization dataset. The output results for these typical and extreme days represent common and abnormal optimization results, respectively; based on this representative historical data, a traditional heuristic algorithm with an initial solution set, such as the genetic algorithm, can be greatly accelerated. Secondly, the initial populations of the genetic algorithm are dispersed around the historical outputs of the typical and extreme days, and many random populations are supplemented simultaneously. Finally, the improved genetic algorithm finds optimal results faster and helps prevent the results from falling into local optima. A case study was conducted to verify the effectiveness of the proposed method. The results show that the proposed method can decrease the running time by up to 89.29%, and by 72.68% on average, compared with the traditional genetic algorithm. Meanwhile, the proposed method yields a slightly increased optimization index, indicating no loss of optimization accuracy during acceleration, and its smaller number of iterations suggests that it does not fall into local optima. Full article
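The box-plot part of Imk-means can be sketched in isolation: days whose load falls outside the usual whisker bounds (Q1 − 1.5·IQR, Q3 + 1.5·IQR) are flagged as "extreme days", and the remainder are candidates for the typical-day clusters. A minimal sketch on made-up daily load totals (the data and function names are illustrative, not the paper's):

```python
def quartiles(values):
    """Q1 and Q3 with linear interpolation between order statistics."""
    s = sorted(values)
    def q(p):
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)
    return q(0.25), q(0.75)

def split_typical_extreme(daily_loads):
    """Box-plot rule: loads outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] become
    'extreme days'; the rest feed the typical-day k-means clusters."""
    q1, q3 = quartiles(daily_loads)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    extreme = [d for d in daily_loads if d < lo or d > hi]
    typical = [d for d in daily_loads if lo <= d <= hi]
    return typical, extreme

loads = [100, 102, 98, 101, 99, 103, 97, 180]   # one abnormal day (illustrative)
typical, extreme = split_typical_extreme(loads)
```

Seeding the genetic algorithm's initial population near both groups is what lets it start close to good solutions while still covering rare operating conditions.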
14 pages, 937 KiB  
Article
Reviving the Dynamics of Attacked Reservoir Computers
by Ruizhi Cao, Chun Guan, Zhongxue Gan and Siyang Leng
Entropy 2023, 25(3), 515; https://doi.org/10.3390/e25030515 - 16 Mar 2023
Cited by 2 | Viewed by 1248
Abstract
Physically implemented neural networks are subject to external perturbations and internal variations. Existing works focus on adversarial attacks but seldom consider attacks on the network structure and the corresponding recovery methods. Inspired by the biological neural compensation mechanism and the neuromodulation technique in clinical practice, we propose a novel framework for reviving attacked reservoir computers, consisting of several strategies directed at different types of attacks on structure, which adjust only a minor fraction of the edges in the reservoir. Numerical experiments demonstrate the efficacy and broad applicability of the framework and reveal inspiring insights into the underlying mechanisms. This work provides a vehicle to improve the robustness of reservoir computers and can be generalized to broader types of neural networks. Full article
15 pages, 1140 KiB  
Article
Implicit Solutions of the Electrical Impedance Tomography Inverse Problem in the Continuous Domain with Deep Neural Networks
by Thilo Strauss and Taufiquar Khan
Entropy 2023, 25(3), 493; https://doi.org/10.3390/e25030493 - 13 Mar 2023
Viewed by 1509
Abstract
Electrical impedance tomography (EIT) is a non-invasive imaging modality used for estimating the conductivity of an object Ω from boundary electrode measurements. In recent years, researchers have achieved substantial progress in analytical and numerical methods for the EIT inverse problem. Despite this success, numerical instability is still a major hurdle due to many factors, including the discretization error of the problem. Furthermore, most algorithms with good performance are relatively time-consuming and do not allow real-time applications. In our approach, the goal is to separate the unknown conductivity into two regions, namely the region of homogeneous background conductivity and the region of non-homogeneous conductivity. Therefore, we pose and solve the problem of shape reconstruction using machine learning. We propose a novel and simple yet intriguing neural network architecture capable of solving the EIT inverse problem. It addresses previous difficulties, including instability, and is easily adaptable to other ill-posed coefficient inverse problems. That is, the proposed model estimates the probability that a point belongs to the background region or to the non-homogeneous region on the continuous space R^d ⊇ Ω with d ∈ {2, 3}. The proposed model does not make assumptions about the forward model and allows for solving the inverse problem in real time. The proposed machine learning approach for shape reconstruction is also used to improve gradient-based methods for estimating the unknown conductivity. In this paper, we propose a piece-wise constant reconstruction method that is novel in the inverse problem setting but inspired by recent approaches from the 3D vision community. We also extend this method into a novel constrained reconstruction method. 
We present extensive numerical experiments to show the performance of the architecture and compare the proposed method with previous analytic algorithms, mainly the monotonicity-based shape reconstruction algorithm and iteratively regularized Gauss–Newton method. Full article
23 pages, 5473 KiB  
Article
Feature Selection Using New Version of V-Shaped Transfer Function for Salp Swarm Algorithm in Sentiment Analysis
by Dinar Ajeng Kristiyanti, Imas Sukaesih Sitanggang, Annisa and Sri Nurdiati
Computation 2023, 11(3), 56; https://doi.org/10.3390/computation11030056 - 08 Mar 2023
Cited by 10 | Viewed by 1918
Abstract
(1) Background: Feature selection is the biggest challenge in feature-rich sentiment analysis: selecting the best (relevant) feature set, offering information about the relationships between features (informative), and keeping high-dimensional datasets noise-free to improve classifier performance. This study proposes a binary version of a metaheuristic optimization algorithm based on Swarm Intelligence, namely the Salp Swarm Algorithm (SSA), for feature selection in sentiment analysis. (2) Methods: Significant feature subsets were selected using the SSA. Transfer functions of various types, of the forms S-TF, V-TF, X-TF, U-TF, and Z-TF, together with a new V-TF type with a simpler mathematical formula, are used as the binary-version approach that enables search agents to move in the search space. The stages of the study include data pre-processing, feature selection using SSA-TF and other conventional feature selection methods, modelling using K-Nearest Neighbor (KNN), Support Vector Machine, and Naïve Bayes, and model evaluation. (3) Results: The results showed an accuracy increase of 31.55%, up to a best accuracy of 80.95%, for the KNN model using the SSA-based new V-TF. (4) Conclusions: We found that SSA-New V3-TF is a feature selection method with the highest accuracy and a shorter runtime compared to the other algorithms tested in sentiment analysis. Full article
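A transfer function's job in a binary metaheuristic is to map a continuous position update to a bit-flip probability so that each salp encodes a feature-selection mask. The sketch below uses a classic V-shaped function, |x/√(1 + x²)|; the paper's "new V-TF" has a simpler formula that is not reproduced here, so treat this as an assumed illustration of the mechanism only.

```python
import math, random

def v_tf(x):
    """A classic V-shaped transfer function: maps a continuous step to a
    flip probability in [0, 1)."""
    return abs(x / math.sqrt(1.0 + x * x))

def binarize(position, velocity, rng):
    """Flip each feature bit with probability given by the transfer function,
    turning a continuous salp update into a 0/1 feature mask."""
    return [1 - b if rng.random() < v_tf(v) else b
            for b, v in zip(position, velocity)]

rng = random.Random(42)
# 5 candidate features; large |velocity| -> likely flip, zero -> never flips
mask = binarize([0, 1, 0, 1, 1], [2.5, -0.1, 0.0, 3.0, -2.0], rng)
```

V-shaped functions flip bits when the step magnitude is large regardless of sign, which is why they tend to preserve good masks better than S-shaped ones.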
28 pages, 7982 KiB  
Article
Remora Optimization Algorithm with Enhanced Randomness for Large-Scale Measurement Field Deployment Technology
by Dongming Yan, Yue Liu, Lijuan Li, Xuezhu Lin and Lili Guo
Entropy 2023, 25(3), 450; https://doi.org/10.3390/e25030450 - 04 Mar 2023
Cited by 2 | Viewed by 1247
Abstract
In the large-scale measurement field, deployment planning usually uses the Monte Carlo method for simulation analysis, which has high algorithmic complexity; at the same time, traditional station planning is inefficient and unable to calculate overall accessibility due to the occlusion of tooling. To solve this problem, in this study, we first introduced a Poisson-like randomness strategy and an enhanced randomness strategy to improve the remora optimization algorithm (ROA), yielding the PROA. Its convergence speed and robustness were verified in different dimensions using the CEC benchmark functions: the PROA converges faster than the ROA on 67.5–74% of the benchmarks and is more robust on 66.67–75% of them. Second, a deployment model was established for the large-scale measurement field to obtain the maximum visible area of the target to be measured. Finally, the PROA was used as the optimizer to solve for the optimal deployment plan, and its performance was verified by simulation analysis. In the case of six stations, the maximum visible area reached with the PROA is 83.02%, which is 18.07% higher than that of the ROA. Compared with the traditional method, this model shortens the deployment time and calculates the overall accessibility, which is of practical significance for improving assembly efficiency in large-scale measurement field environments. Full article
19 pages, 1825 KiB  
Review
Introduction of Materials Genome Technology and Its Applications in the Field of Biomedical Materials
by Yashi Qiu, Zhaoying Wu, Jiali Wang, Chao Zhang and Heye Zhang
Materials 2023, 16(5), 1906; https://doi.org/10.3390/ma16051906 - 25 Feb 2023
Cited by 2 | Viewed by 1676
Abstract
Traditional research and development (R&D) on biomedical materials depends heavily on trial and error, leading to a huge economic and time burden. Most recently, materials genome technology (MGT) has been recognized as an effective approach to addressing this problem. In this paper, the basic concepts involved in MGT are introduced, and the applications of MGT in the R&D of metallic, inorganic non-metallic, polymeric, and composite biomedical materials are summarized; in view of the existing limitations of MGT for the R&D of biomedical materials, potential strategies are proposed for the establishment and management of material databases, the upgrading of high-throughput experimental technology, the construction of data mining prediction platforms, and the training of relevant materials talent. Finally, future trends of MGT for the R&D of biomedical materials are outlined. Full article
23 pages, 3571 KiB  
Article
Parametric Analysis of Thick FGM Plates Based on 3D Thermo-Elasticity Theory: A Proper Generalized Decomposition Approach
by Mohammad-Javad Kazemzadeh-Parsi, Amine Ammar and Francisco Chinesta
Materials 2023, 16(4), 1753; https://doi.org/10.3390/ma16041753 - 20 Feb 2023
Cited by 2 | Viewed by 1365
Abstract
In the present work, the general and well-known model reduction technique PGD (Proper Generalized Decomposition) is used for a parametric analysis of the thermo-elasticity of FGMs (Functionally Graded Materials). FGMs have important applications in space technologies, especially when a part undergoes an extreme thermal environment. In the present work, material gradation is considered in one, two and three directions, and the 3D heat transfer and elasticity equations are solved to obtain an accurate temperature field and to account for all shear deformations. A parametric analysis of FGMs is especially useful in material design and optimization. In the PGD technique, the field variables are separated into a set of univariate functions, and the high-dimensional governing equations reduce to a set of one-dimensional problems. Owing to the curse of dimensionality, solving a high-dimensional parametric problem directly is considerably more computationally intensive than solving a set of one-dimensional problems; the PGD therefore makes it possible to handle high-dimensional problems efficiently. In the present work, some sample examples in 4D and 5D computational spaces are solved, and the results are presented. Full article
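The idea of separating a field into univariate functions can be illustrated with a rank-one alternating solve on a discrete, exactly separable "field". This is only a sketch of the enrichment idea behind PGD, with an assumed toy matrix standing in for the thermo-elastic fields; a real PGD solver works on the weak form of the governing equations and keeps adding modes until convergence.

```python
def rank_one_mode(M, iters=50):
    """One PGD-style enrichment step: find vectors X, Y with M ~ outer(X, Y)
    by alternating one-dimensional least-squares solves."""
    rows, cols = len(M), len(M[0])
    Y = [1.0] * cols
    X = [0.0] * rows
    for _ in range(iters):
        yy = sum(y * y for y in Y)
        X = [sum(M[i][j] * Y[j] for j in range(cols)) / yy for i in range(rows)]
        xx = sum(x * x for x in X)
        Y = [sum(M[i][j] * X[i] for i in range(rows)) / xx for j in range(cols)]
    return X, Y

# A separable "field": M[i][j] = f(i) * g(j), recovered exactly by one mode
M = [[(i + 1) * (j + 2) for j in range(4)] for i in range(3)]
X, Y = rank_one_mode(M)
err = max(abs(M[i][j] - X[i] * Y[j]) for i in range(3) for j in range(4))
```

Because each update solves only one-dimensional problems, adding parametric dimensions (material gradation parameters, here the 4D and 5D cases) grows the cost linearly rather than exponentially.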
10 pages, 545 KiB  
Article
Quick Estimate of Information Decomposition for Text Style Transfer
by Viacheslav Shibaev, Eckehard Olbrich, Jürgen Jost and Ivan P. Yamshchikov
Entropy 2023, 25(2), 322; https://doi.org/10.3390/e25020322 - 10 Feb 2023
Cited by 1 | Viewed by 1311
Abstract
A growing number of papers on style transfer for texts rely on information decomposition. The performance of the resulting systems is usually assessed empirically in terms of the output quality or requires laborious experiments. This paper suggests a straightforward information theoretical framework to assess the quality of information decomposition for latent representations in the context of style transfer. Experimenting with several state-of-the-art models, we demonstrate that such estimates could be used as a fast and straightforward health check for the models instead of more laborious empirical experiments. Full article
20 pages, 1713 KiB  
Review
A Survey on the Application of Machine Learning in Turbulent Flow Simulations
by Maciej Majchrzak, Katarzyna Marciniak-Lukasiak and Piotr Lukasiak
Energies 2023, 16(4), 1755; https://doi.org/10.3390/en16041755 - 09 Feb 2023
Cited by 3 | Viewed by 2037
Abstract
As early as the end of the 19th century, shortly after the mathematical rules describing fluid flow, such as the Navier–Stokes equations, were developed, the idea of using them for flow simulations emerged. However, it was soon discovered that the computational requirements of problems such as atmospheric phenomena and engineering calculations made hand computation impractical. The dawn of the computer age marked the beginning of computational fluid mechanics, and its subsequent popularization made computational fluid dynamics one of the common tools of science and engineering. From the beginning, however, the method has faced a trade-off between accuracy and computational requirements. The purpose of this work is to examine how recent advances in machine learning can be applied to further develop this seemingly plateaued method. The paper reviews examples of applying machine learning to improve various types of computational flow simulations, both by increasing the accuracy of the results and by reducing calculation times, as well as the effectiveness of the methods presented, their chances of acceptance by industry, possible obstacles, and potential directions for their development. One can observe an evolution of solutions: from the simple determination of closure coefficients, through more advanced attempts to use machine learning as an alternative to the classical methods of solving the differential equations on which computational fluid dynamics is based, up to turbulence models built solely from neural networks. A continuation of these three trends may lead to at least a partial replacement of Navier–Stokes-based computational fluid dynamics by machine-learning-based solutions. Full article
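The first trend mentioned, data-driven determination of closure coefficients, can be sketched in a few lines. Here a synthetic Smagorinsky-like relation nu_t = (C * delta)**2 * |S| is assumed, and the coefficient C is recovered from noisy samples by least squares; all data, the filter width, and the "true" coefficient are illustrative assumptions, not values from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.01                               # filter width (assumed)
S = rng.uniform(10.0, 100.0, 200)          # strain-rate magnitudes (synthetic)
C_true = 0.17
nu_t = (C_true * delta) ** 2 * S + rng.normal(0, 1e-6, 200)  # noisy "data"

# The model is linear in a = (C * delta)**2, so fit the slope and recover C.
a = (S @ nu_t) / (S @ S)
C_fit = np.sqrt(a) / delta
print(round(C_fit, 2))  # recovers ~0.17
```

The more advanced approaches surveyed replace this scalar fit with neural networks mapping local flow features to closure terms, but the calibration-from-data principle is the same.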

14 pages, 3393 KiB  
Article
Predicting Terrestrial Heat Flow in North China Using Multiple Geological and Geophysical Datasets Based on Machine Learning Method
by Shan Xu, Chang Ni and Xiangyun Hu
Energies 2023, 16(4), 1620; https://doi.org/10.3390/en16041620 - 06 Feb 2023
Cited by 2 | Viewed by 1203
Abstract
Geothermal heat flow is an essential parameter for the exploration of geothermal energy. The cost is often prohibitive if dense heat flow measurements are arranged in the study area; nevertheless, the limited and sparse heat flow observation points must be supplemented to study the regional geothermal setting. This research is significant in that it provides a new, reliable map of terrestrial heat flow for the subsequent development of geothermal resources. The Gradient Boosted Regression Tree (GBRT) prediction model used in this paper addresses the insufficient number of heat flow observations in North China. It incorporates the geological and geophysical information in the region by training on sample data with 12 kinds of geological and geophysical features, yielding a robust GBRT prediction model. The performance of the GBRT method was evaluated by comparing it with kriging interpolation, minimum curvature interpolation, and a 3D interpolation algorithm through prediction performance analysis. Based on the GBRT prediction model, a new heat flow map with a resolution of 0.25°×0.25° was proposed, which depicts the terrestrial heat flow distribution in the study area in a more detailed and reasonable way than the interpolation results. The high heat flow values were mostly concentrated along the northeastern boundary of the Tibet Plateau, with a few scattered, small-scale high heat flow areas in the southeastern part of the North China Craton (NCC) adjacent to the Pacific Ocean. The low heat flow values were mainly resolved in the northern part of the Trans-North China Orogenic belt (TNCO) and the southernmost part of the NCC. By comparing the predicted heat flow map with the plate tectonics, the olivine-Mg#, and the hot spring distribution in North China, we found that the GBRT could obtain reliable results under the constraint of geological and geophysical information in regions with scarce and unevenly distributed heat flow observations. Full article
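A minimal gradient-boosted regression sketch (depth-1 trees, squared loss) illustrates the GBRT idea on entirely synthetic stand-ins for the geophysical features and heat flow values; a real study would use a full library implementation with many features.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-feature threshold split minimizing squared error."""
    best = (np.inf, None)
    for j in range(x.shape[1]):
        for t in np.quantile(x[:, j], np.linspace(0.1, 0.9, 9)):
            left = x[:, j] <= t
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best[0]:
                best = (err, (j, t, lv, rv))
    return best[1]

def predict_stump(stump, x):
    j, t, lv, rv = stump
    return np.where(x[:, j] <= t, lv, rv)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (300, 3))            # stand-in "geophysical features"
y = 60 + 20 * X[:, 0] + 5 * X[:, 1] ** 2    # stand-in heat flow (mW/m^2)

pred, lr = np.full(300, y.mean()), 0.3
for _ in range(80):
    stump = fit_stump(X, y - pred)          # fit a stump to the residuals
    pred += lr * predict_stump(stump, X)    # shrunken additive update
print(np.abs(y - pred).mean())              # small training error
```

Each round fits a weak learner to the current residuals, which is exactly the boosting mechanism GBRT scales up with deeper trees and regularization.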

15 pages, 3413 KiB  
Article
Mobile Application for Tomato Plant Leaf Disease Detection Using a Dense Convolutional Network Architecture
by Intan Nurma Yulita, Naufal Ariful Amri and Akik Hidayat
Computation 2023, 11(2), 20; https://doi.org/10.3390/computation11020020 - 31 Jan 2023
Cited by 7 | Viewed by 3571
Abstract
In Indonesia, the tomato is one of the horticultural products with the highest economic value. To maintain enhanced tomato plant production, it is necessary to monitor the growth of tomato plants, particularly the leaves. The quality and quantity of tomato plant production can be preserved with the aid of computer technology, which can identify diseases in tomato plant leaves. In this study, a deep learning algorithm with a DenseNet architecture was implemented. Multiple hyperparameter tests were conducted to determine the optimal model. The optimal model was constructed using two hidden layers, a DenseNet trainable layer on dense block 5, and a dropout rate of 0.4. The 10-fold cross-validation evaluation of the model yielded an accuracy of 95.7 percent and an F1-score of 95.4 percent. The model with the best assessment results was then implemented in a mobile application to recognize tomato plant leaves. Full article
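A shape-level sketch of DenseNet's core idea, which the mobile model relies on: each layer in a dense block receives the concatenation of all preceding feature maps, so the channel count grows by a fixed growth rate per layer. The weights here are random and the "convolution" is a 1×1-style channel mix, purely for illustration.

```python
import numpy as np

def dense_block(x, n_layers=4, growth=12, rng=np.random.default_rng(3)):
    for _ in range(n_layers):
        w = rng.normal(size=(x.shape[-1], growth)) / np.sqrt(x.shape[-1])
        new = np.maximum(x @ w, 0.0)           # 1x1-conv-like layer + ReLU
        x = np.concatenate([x, new], axis=-1)  # dense connectivity
    return x

x = np.zeros((8, 8, 16))                       # 16 input channels
out = dense_block(x)
print(out.shape[-1])                           # 16 + 4 * 12 = 64 channels
```

This channel growth is why dense blocks reuse features efficiently, and why a transition layer usually follows each block to compress channels again.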

0 pages, 799 KiB  
Article
Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace
by Xi Zhang, Guiyun Zhang, Dong Zhang and Liping Zhang
Materials 2023, 16(3), 1164; https://doi.org/10.3390/ma16031164 - 30 Jan 2023
Cited by 4 | Viewed by 1325 | Correction
Abstract
With its special porous structure and long-lasting carbon sequestration characteristic, biochar has shown potential for improving soil fertility, reducing carbon emissions and increasing soil carbon sequestration. However, biochar technology has not been applied on a large scale due to its complex structure, the long transportation distances of raw materials, and high cost. To overcome these issues, the brazier-type gasification and carbonization furnace is designed to carry out dry distillation and anaerobic carbonization, and to achieve a high carbonization rate under high-temperature conditions. To improve operation and maintenance efficiency, we formulate the operation of the brazier-type gasification and carbonization furnace as a dynamic multi-objective optimization problem (DMOP). First, we analyze the dynamic factors in the working process of the furnace, such as the equipment capacity, the operating conditions, and the biomass treated by the furnace. Afterward, we select the biochar yield and carbon monoxide emission as the dynamic objectives and model the DMOP. Finally, we apply three dynamic multi-objective evolutionary algorithms to the optimization problem to verify the effectiveness of the dynamic optimization approach for the gasification and carbonization furnace. Full article
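At the heart of any multi-objective optimizer is the Pareto-dominance test. A minimal sketch with the two objectives named in the abstract, maximizing biochar yield while minimizing carbon monoxide emission (the numbers are synthetic):

```python
import numpy as np

def non_dominated(points):
    """points: rows of (yield to maximize, emission to minimize)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] >= p[0] and q[1] <= p[1] and (q[0] > p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

pts = np.array([[0.8, 3.0], [0.6, 1.0], [0.7, 2.0], [0.5, 2.5]])
print(non_dominated(pts))  # [0, 1, 2]: point 3 is dominated by point 1
```

In the dynamic setting the objective values change over time, so the non-dominated set must be re-identified (and the population adapted) whenever the environment shifts.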

17 pages, 593 KiB  
Article
Optimizing Automated Trading Systems with Deep Reinforcement Learning
by Minh Tran, Duc Pham-Hi and Marc Bui
Algorithms 2023, 16(1), 23; https://doi.org/10.3390/a16010023 - 01 Jan 2023
Cited by 7 | Viewed by 5569
Abstract
In this paper, we propose a novel approach to optimizing the parameters of strategies in automated trading systems. Based on the reinforcement learning framework, our work includes the development of a learning environment, state representation, reward function, and learning algorithm for the cryptocurrency market. Considering two simple objective functions, cumulative return and Sharpe ratio, the results show that both the Deep Reinforcement Learning approach with a Double Deep Q-Network setting and the Bayesian Optimization approach can provide positive average returns. Among the settings studied, the Double Deep Q-Network with the Sharpe ratio as the reward function is the best Q-learning trading system. With a daily trading goal, it outperforms the Bayesian Optimization approach in terms of cumulative return, volatility and execution time, helping traders make quick and efficient decisions with the latest market information. In long-term trading, Bayesian Optimization is the parameter optimization method that brings higher profits. In upcoming studies, such as optimizing portfolios with multiple assets and diverse trading strategies, Deep Reinforcement Learning may provide solutions to the high-dimensional problems that Bayesian Optimization faces. Full article
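The Double Deep Q-Network target referred to above can be sketched in a few lines: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of plain Q-learning. The action set and Q-values below are illustrative assumptions.

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    a_star = int(np.argmax(q_online_next))   # action chosen by the online net
    return reward + gamma * q_target_next[a_star]  # evaluated by the target net

q_online_next = np.array([1.0, 2.5, 0.3])    # e.g. Q(s', [hold, buy, sell])
q_target_next = np.array([1.1, 2.0, 0.2])
target = double_dqn_target(0.5, q_online_next, q_target_next)
print(target)  # 0.5 + 0.99 * 2.0 = 2.48
```

A single network that both selects and evaluates (plain DQN) would have used max(q_target_next) = 2.0 here too, but in general the decoupling prevents systematically optimistic targets.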

22 pages, 1687 KiB  
Article
Improved Anomaly Detection by Using the Attention-Based Isolation Forest
by Lev Utkin, Andrey Ageev, Andrei Konstantinov and Vladimir Muliukha
Algorithms 2023, 16(1), 19; https://doi.org/10.3390/a16010019 - 28 Dec 2022
Cited by 1 | Viewed by 2881
Abstract
A new modification of the isolation forest, called the attention-based isolation forest (ABIForest), is proposed for solving the anomaly detection problem. It incorporates an attention mechanism, in the form of Nadaraya–Watson regression, into the isolation forest to improve the solution of the anomaly detection problem. The main idea underlying the modification is the assignment of attention weights, with learnable parameters, to each tree path, depending on the instances and the trees themselves. Huber's contamination model is used to define the attention weights and their parameters. As a result, the attention weights depend linearly on learnable attention parameters that are trained by solving a standard linear or quadratic optimization problem. ABIForest can be viewed as the first modification of the isolation forest to incorporate an attention mechanism in a simple way, without applying gradient-based algorithms. Numerical experiments with synthetic and real datasets illustrate that ABIForest outperforms other methods. The code of the proposed algorithms has been made available. Full article
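A simplified sketch of the attention idea (not the paper's trained model): Nadaraya-Watson kernel weights over per-tree quantities turn the anomaly score into an attention-weighted mean of tree path lengths. The per-tree "keys" and the query are hypothetical one-dimensional embeddings chosen for illustration.

```python
import numpy as np

def attention_score(path_lengths, keys, query, tau=1.0):
    """Nadaraya-Watson weights w_t ∝ exp(-(query - key_t)^2 / tau)."""
    w = np.exp(-((keys - query) ** 2) / tau)
    w = w / w.sum()
    return float(w @ path_lengths)     # short weighted depth => more anomalous

depths = np.array([3.0, 8.0, 9.0])     # path lengths of one instance per tree
keys = np.array([0.2, 0.9, 1.0])       # per-tree embeddings (assumed)
score = attention_score(depths, keys, query=0.25)
print(score)  # below the unweighted mean: weight is pulled toward the 3.0 tree
```

In ABIForest the weights additionally carry learnable parameters fitted via a linear or quadratic program; the fixed kernel above only shows how attention reweights trees relative to the plain isolation-forest average.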

13 pages, 432 KiB  
Article
Forecasting for Chaotic Time Series Based on GRP-lstmGAN Model: Application to Temperature Series of Rotary Kiln
by Wenyu Hu and Zhizhong Mao
Entropy 2023, 25(1), 52; https://doi.org/10.3390/e25010052 - 27 Dec 2022
Cited by 4 | Viewed by 1149
Abstract
Rotary kiln temperature forecasting plays a significant part in the automatic control of the sintering process. However, accurate forecasts are difficult owing to the complex nonlinear characteristics of rotary kiln temperature time series. With the development of chaos theory, prediction accuracy can be improved by analyzing the essential characteristics of a time series. However, existing prediction methods for chaotic time series cannot fully consider the local and global characteristics of a time series at the same time. Therefore, in this study, a combination of a global recurrence plot (GRP)-based generative adversarial network (GAN) and long short-term memory (LSTM), named GRP-lstmGAN, is proposed, which can effectively capture important information across time scales. First, the data are subjected to a series of pre-processing operations, including data smoothing. Then, GRP transforms the one-dimensional time series into two-dimensional images, making full use of the global and local information of the time series. Finally, the combination of LSTM and an improved GAN model is used for temperature time series prediction. The experimental results show that our model outperforms the comparison models. Full article
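The recurrence-plot transform at the heart of GRP can be sketched directly: a 1-D series becomes a 2-D binary image whose pixel (i, j) is set when states i and j are close, exposing global structure that an image model can then exploit. The series and threshold below are illustrative.

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances between states
    return (d < eps).astype(np.uint8)     # binary recurrence image

t = np.linspace(0, 4 * np.pi, 64)
R = recurrence_plot(np.sin(t), eps=0.2)
print(R.shape, bool(np.all(np.diag(R) == 1)))  # (64, 64) True (diagonal is trivially recurrent)
```

Periodic series produce diagonal stripes in R, while chaotic series produce more intricate textures; this is the 2-D structure the GAN/LSTM combination learns from.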

15 pages, 1588 KiB  
Article
Cluster-Based Structural Redundancy Identification for Neural Network Compression
by Tingting Wu, Chunhe Song, Peng Zeng and Changqing Xia
Entropy 2023, 25(1), 9; https://doi.org/10.3390/e25010009 - 21 Dec 2022
Cited by 1 | Viewed by 1495
Abstract
The increasingly large structure of neural networks makes them difficult to deploy on edge devices with limited computing resources. Network pruning has become one of the most successful model compression methods in recent years. Existing works typically compress models based on importance, removing unimportant filters. This paper reconsiders model pruning from the perspective of structural redundancy, claiming that identifying functionally similar filters plays a more important role, and proposes a model pruning framework based on clustering-based redundancy identification. First, we perform cluster analysis on the filters of each layer to generate similar sets with different functions. We then propose a criterion for identifying redundant filters within similar sets. Finally, we propose a pruning scheme that automatically determines the pruning rate of each layer. Extensive experiments on various benchmark network architectures and datasets demonstrate the effectiveness of the proposed framework. Full article
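A simplified stand-in for the redundancy-identification step (not the paper's clustering criterion): flatten each filter, group filters whose pairwise cosine similarity exceeds a threshold, and keep a single representative per group. The filters are random tensors for illustration.

```python
import numpy as np

def redundant_filters(filters, thresh=0.95):
    f = filters.reshape(len(filters), -1)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)   # unit vectors
    sim = f @ f.T                                      # cosine similarities
    keep, drop = [], []
    for i in range(len(f)):
        if any(sim[i, k] > thresh for k in keep):
            drop.append(i)          # functionally similar to a kept filter
        else:
            keep.append(i)
    return keep, drop

rng = np.random.default_rng(4)
base = rng.normal(size=(3, 3, 3))
filters = np.stack([base, 1.01 * base, rng.normal(size=(3, 3, 3))])
keep, drop = redundant_filters(filters)
print(keep, drop)  # filter 1 duplicates filter 0 up to scale, so it is dropped
```

Cosine similarity deliberately ignores scale, matching the intuition that a rescaled copy of a filter computes essentially the same feature.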

19 pages, 1030 KiB  
Article
A Dual-Population-Based NSGA-III for Constrained Many-Objective Optimization
by Huantong Geng, Zhengli Zhou, Junye Shen and Feifei Song
Entropy 2023, 25(1), 13; https://doi.org/10.3390/e25010013 - 21 Dec 2022
Cited by 2 | Viewed by 1374
Abstract
The main challenge of constrained many-objective optimization problems (CMaOPs) is how to achieve a balance between feasible and infeasible solutions. Most existing constrained many-objective evolutionary algorithms (CMaOEAs) are feasibility-driven, neglecting the maintenance of population convergence and diversity when dealing with conflicting objectives and constraints, which might leave the population stuck in locally optimal or locally feasible regions. To alleviate these challenges, we propose a dual-population-based NSGA-III, named DP-NSGA-III, in which the two populations exchange information through their offspring. The main population, based on NSGA-III, solves the CMaOP, while the auxiliary population, with a different environmental selection, ignores the constraints. In addition, we design an ε-constraint handling method in combination with NSGA-III, aiming to exploit the excellent infeasible solutions in the main population. The proposed DP-NSGA-III is compared with four state-of-the-art CMaOEAs on a series of benchmark problems. The experimental results show that the proposed evolutionary algorithm is highly competitive in solving CMaOPs. Full article
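The ε-constraint handling idea can be sketched in one function: a solution whose total constraint violation stays below a tolerance ε is treated as feasible, so slightly infeasible but well-converged solutions can still guide the search. The violation values are illustrative.

```python
import numpy as np

def eps_feasible(violations, eps):
    """Treat violations up to eps as feasible (ε-relaxed feasibility)."""
    return np.asarray(violations) <= eps

cv = [0.0, 0.03, 0.4]                        # total constraint violations
print(eps_feasible(cv, eps=0.05).tolist())   # [True, True, False]
```

In practice ε is typically shrunk over the generations, so the relaxation tightens toward true feasibility as the population converges.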

16 pages, 386 KiB  
Article
Initial Solution Generation and Diversified Variable Picking in Local Search for (Weighted) Partial MaxSAT
by Zaijun Zhang, Jincheng Zhou, Xiaoxia Wang, Heng Yang and Yi Fan
Entropy 2022, 24(12), 1846; https://doi.org/10.3390/e24121846 - 18 Dec 2022
Cited by 1 | Viewed by 1317
Abstract
The (weighted) partial maximum satisfiability ((W)PMS) problem is an important generalization of the classic propositional (Boolean) satisfiability problem, with a wide range of real-world applications. In this paper, we propose an initialization and a diversification strategy to improve local search for the (W)PMS problem. Our initialization strategy is based on a novel definition of variables' structural entropy, and it aims to generate a solution that is close to a high-quality feasible one. Our diversification strategy then picks a variable in one of two ways, depending on a parameter: continuing to pick variables with the best benefits, or focusing on the clause with the greatest penalty and selecting its variables probabilistically. Based on these strategies, we developed a local search solver dubbed ImSATLike, as well as a hybrid solver, ImSATLike-TT. Experimental results on (weighted) partial MaxSAT instances from recent MaxSAT Evaluations show that they generally outperform, or perform nearly as well as, state-of-the-art local search and hybrid competitors, respectively. Furthermore, we carried out experiments to confirm the individual impact of each proposed strategy. Full article
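The "benefit" of flipping a variable, which the picking strategies above rank, is conventionally the number of clauses the flip newly satisfies (make) minus those it newly falsifies (break). A minimal sketch on a tiny CNF, with clause literals encoded as signed 1-based variable indices:

```python
def flip_score(clauses, assign, v):
    """make(v) - break(v) for flipping variable v under assignment assign."""
    make = brk = 0
    for clause in clauses:
        sat_before = any(assign[abs(l)] == (l > 0) for l in clause)
        flipped = dict(assign)
        flipped[v] = not assign[v]
        sat_after = any(flipped[abs(l)] == (l > 0) for l in clause)
        make += (not sat_before) and sat_after
        brk += sat_before and (not sat_after)
    return make - brk

clauses = [[1, 2], [-2, 3], [1, -3]]         # tiny illustrative formula
assign = {1: False, 2: False, 3: False}      # clause [1, 2] is unsatisfied
print(flip_score(clauses, assign, 1))        # flipping x1 satisfies it: score 1
```

In the weighted partial setting, each clause contributes its weight (with hard clauses effectively infinite) instead of 1, but the make/break bookkeeping is the same.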
26 pages, 6301 KiB  
Article
Advanced Spatial and Technological Aggregation Scheme for Energy System Models
by Shruthi Patil, Leander Kotzur and Detlef Stolten
Energies 2022, 15(24), 9517; https://doi.org/10.3390/en15249517 - 15 Dec 2022
Cited by 3 | Viewed by 1385
Abstract
Energy system models that consider variable renewable energy sources (VRESs) are computationally complex. The greater spatial scope and level of detail entailed in the models exacerbate this complexity. As a complexity-reduction approach, this paper considers the simultaneous spatial and technological aggregation of energy system models. To that end, a novel two-step aggregation scheme is introduced. First, model regions are spatially aggregated to obtain a reduced region set. The aggregation is based on model parameters such as VRES time series, capacities, etc. In addition, the spatial contiguity of regions is considered. Next, technological aggregation is performed on each VRES, in each region, based on their time series. The aggregations' impact on the accuracy and complexity of a cost-optimal European energy system model is analyzed. The model is aggregated to obtain different combinations of numbers of regions and VRES types. Results are benchmarked against an initial resolution of 96 regions, with 68 VRES types in each. System cost deviates significantly when low numbers of regions and/or VRES types are considered. As the spatial and technological resolutions increase, the cost fluctuates initially and eventually stabilizes, approaching the benchmark. The optimal combination is determined based on an acceptable cost deviation of <5% and the point of stabilization. A total of 33 regions with 38 VRES types in each is deemed optimal. Here, the cost is underestimated by 4.42%, but the run time is reduced by 92.95%. Full article

18 pages, 2772 KiB  
Article
Curriculum Reinforcement Learning Based on K-Fold Cross Validation
by Zeyang Lin, Jun Lai, Xiliang Chen, Lei Cao and Jun Wang
Entropy 2022, 24(12), 1787; https://doi.org/10.3390/e24121787 - 06 Dec 2022
Cited by 9 | Viewed by 1941
Abstract
With the continuous development of deep reinforcement learning in intelligent control, combining automatic curriculum learning with deep reinforcement learning can improve training performance and efficiency by progressing from easy to difficult. Most existing automatic curriculum learning algorithms perform curriculum ranking through expert experience and a single network, which leads to difficult curriculum task ranking and slow convergence. In this paper, we propose a curriculum reinforcement learning method based on K-fold cross validation that can estimate the relative difficulty score of each curriculum task. Drawing on the human notion of learning from easy to difficult, this method divides automatic curriculum learning into a curriculum difficulty assessment stage and a curriculum sorting stage. Through parallel training of the teacher model and cross-evaluation of task sample difficulty, the method can better sequence curriculum learning tasks. Finally, simulation comparison experiments were carried out in two types of multi-agent experimental environments. The experimental results show that the automatic curriculum learning method based on K-fold cross-validation can improve the training speed of the MADDPG algorithm, and at the same time has a certain generality for multi-agent deep reinforcement learning algorithms based on the replay buffer mechanism. Full article
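A simplified sketch of the K-fold difficulty-scoring idea (not the paper's teacher-model pipeline): split a task's evaluation samples into K folds, average the held-out success rates, and turn low success into a high relative-difficulty score used for curriculum ordering. The per-sample success values are synthetic.

```python
import numpy as np

def kfold_difficulty(success_per_sample, k=5):
    """Average held-out success across K folds; low success = hard task."""
    folds = np.array_split(success_per_sample, k)
    return 1.0 - float(np.mean([f.mean() for f in folds]))

rng = np.random.default_rng(5)
easy_task = (rng.random(100) < 0.9).astype(float)   # ~90% success (synthetic)
hard_task = (rng.random(100) < 0.3).astype(float)   # ~30% success (synthetic)
d_easy, d_hard = kfold_difficulty(easy_task), kfold_difficulty(hard_task)
print(d_easy < d_hard)  # the easy task ranks earlier in the curriculum
```

Cross-evaluating on held-out folds, rather than on the training folds themselves, is what keeps the difficulty estimate from being biased by memorization.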

14 pages, 1271 KiB  
Article
Applications of Virtual Machine Using Multi-Objective Optimization Scheduling Algorithm for Improving CPU Utilization and Energy Efficiency in Cloud Computing
by Rajkumar Choudhary and Suresh Perinpanayagam
Energies 2022, 15(23), 9164; https://doi.org/10.3390/en15239164 - 02 Dec 2022
Cited by 6 | Viewed by 1476
Abstract
Financial costs and energy savings are especially critical for computationally intensive workflows, as such workflows generally require extended execution times and thus demand efficient energy consumption while entailing a high financial cost. Through the effective utilization of scheduling gaps, the total execution time of a workflow can be decreased by placing uncompleted tasks in the gaps through approximate computations. In the current research, a novel approach based on multi-objective optimization is utilized, with CloudSim as the underlying simulator, to evaluate VM (virtual machine) allocation performance. In this study, we determine the energy consumption, CPU utilization, and number of executed instructions in each scheduling interval for complex VM scheduling solutions in order to improve energy efficiency and reduce execution time. Finally, based on the simulation results and analyses, all of the tested parameters are simulated and evaluated with proper validation in CloudSim. Based on the results, multi-objective PSO (particle swarm optimization) achieves better and more efficient results for the different parameters than multi-objective GA (genetic algorithm) optimization. Full article
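A minimal PSO sketch shows the optimizer behind such experiments; here the two objectives (stand-ins for energy use and execution time of a placement, both synthetic convex surrogates) are combined by a 50/50 weighted sum rather than a full Pareto treatment.

```python
import numpy as np

rng = np.random.default_rng(6)

def cost(x):
    """Weighted-sum scalarization of two synthetic objectives."""
    energy, time = (x - 1.0) ** 2, (x + 1.0) ** 2
    return 0.5 * energy + 0.5 * time           # minimum at x = 0

pos = rng.uniform(-5, 5, 20)                   # 20 particles, 1-D for clarity
vel = np.zeros(20)
pbest = pos.copy()
gbest = pos[np.argmin(cost(pos))]
for _ in range(100):
    r1, r2 = rng.random(20), rng.random(20)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    better = cost(pos) < cost(pbest)           # update personal bests
    pbest = np.where(better, pos, pbest)
    if cost(pos).min() < cost(gbest):          # update global best
        gbest = pos[np.argmin(cost(pos))]
print(abs(gbest) < 0.1)  # swarm converges near the scalarized optimum x = 0
```

A real VM-scheduling study would use a multi-dimensional placement encoding and Pareto-based selection instead of a fixed weighted sum, but the velocity/position update is the same.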

22 pages, 2191 KiB  
Article
Improved Black Widow Spider Optimization Algorithm Integrating Multiple Strategies
by Chenxin Wan, Bitao He, Yuancheng Fan, Wei Tan, Tao Qin and Jing Yang
Entropy 2022, 24(11), 1640; https://doi.org/10.3390/e24111640 - 11 Nov 2022
Cited by 12 | Viewed by 1580
Abstract
The black widow spider optimization algorithm (BWOA) suffers from slow convergence and a tendency to fall into local optima. To address these problems, this paper proposes a multi-strategy improved black widow spider optimization algorithm (IBWOA). First, Gauss chaotic mapping is introduced to initialize the population, ensuring the diversity of the algorithm at the initial stage. Then, a sine-cosine strategy is introduced to perturb individuals during iteration, improving the global search ability of the algorithm. In addition, an elite opposition-based learning strategy is introduced to improve the algorithm's convergence speed. Finally, the mutation method of the differential evolution algorithm is integrated to reorganize individuals with poor fitness values. Analysis of the optimization results on 13 benchmark test functions and part of the CEC2017 test functions verifies the effectiveness and rationality of each improvement strategy, and shows that the proposed algorithm significantly improves solution accuracy, performance and convergence speed compared with other algorithms. Furthermore, the IBWOA is used to solve six practical constrained engineering problems. The results show that the IBWOA has excellent optimization ability and scalability. Full article
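The chaotic-initialization step can be sketched with the Gauss (mouse) map commonly used in such metaheuristics, x_{n+1} = frac(1 / x_n), which scatters successive values over (0, 1); the seed value and population size are illustrative assumptions.

```python
import numpy as np

def gauss_map_population(n, x0=0.7):
    """Generate n values in [0, 1) by iterating the Gauss chaotic map."""
    x, pop = x0, []
    for _ in range(n):
        x = (1.0 / x) % 1.0 if x != 0 else 0.0   # fractional part of 1/x
        pop.append(x)
    return np.array(pop)

pop = gauss_map_population(30)
print(bool(np.all((pop >= 0) & (pop < 1))))      # all values lie in [0, 1)
```

Each value would then be rescaled to the decision-variable bounds; the appeal over a plain uniform draw is the map's low correlation between successive samples and good coverage of the unit interval.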

12 pages, 3383 KiB  
Article
An HGA-LSTM-Based Intelligent Model for Ore Pulp Density in the Hydrometallurgical Process
by Guobin Zou, Junwu Zhou, Kang Li and Hongliang Zhao
Materials 2022, 15(21), 7586; https://doi.org/10.3390/ma15217586 - 28 Oct 2022
Cited by 2 | Viewed by 1055
Abstract
This study focused on an intelligent model for ore pulp density in the hydrometallurgical process. Owing to the limitations of existing instruments and devices, the feed ore pulp density of the thickener, a key piece of hydrometallurgical equipment, cannot be accurately measured online. Therefore, aiming at the problem of accurately measuring the feed ore pulp density, we propose a new intelligent model based on long short-term memory (LSTM) and a hybrid genetic algorithm (HGA). Specifically, the HGA is a novel optimization search algorithm that can optimize the hyperparameters and improve the modeling performance of the LSTM. Finally, the proposed intelligent model was successfully applied to an actual thickener case in China. The prediction results demonstrate that the hybrid model outperforms other models and satisfies the factory's measurement accuracy requirements. Full article

17 pages, 7018 KiB  
Article
Research on Joint Resource Allocation for Multibeam Satellite Based on Metaheuristic Algorithms
by Wei Gao, Lei Wang and Lianzheng Qu
Entropy 2022, 24(11), 1536; https://doi.org/10.3390/e24111536 - 26 Oct 2022
Viewed by 1260
Abstract
With the rapid growth of satellite communication demand and the continuous development of high-throughput satellite systems, the satellite resource allocation problem—also called the dynamic resources management (DRM) problem—has become increasingly complex in recent years. The use of metaheuristic algorithms to obtain acceptable optimal solutions has become a hot topic in research and has the potential to be explored further. In particular, the treatment of invalid solutions is the key to algorithm performance. At present, the unused bandwidth allocation (UBA) method is commonly used to address the bandwidth constraint in the DRM problem. However, this method reduces the algorithm’s flexibility in the solution space, diminishes the quality of the optimized solution, and increases the computational complexity. In this paper, we propose a bandwidth constraint handling approach based on the non-dominated beam coding (NDBC) method, which can eliminate the bandwidth overlap constraint in the algorithm’s population evolution and achieve complete bandwidth flexibility in order to increase the quality of the optimal solution while decreasing the computational complexity. We develop a generic application architecture for metaheuristic algorithms using the NDBC method and successfully apply it to four typical algorithms. The results indicate that NDBC can enhance the quality of the optimized solution by 9–33% while simultaneously reducing computational complexity by 9–21%. Full article

19 pages, 3927 KiB  
Article
Model NOx, SO2 Emissions Concentration and Thermal Efficiency of CFBB Based on a Hyper-Parameter Self-Optimized Broad Learning System
by Yunpeng Ma, Chenheng Xu, Hua Wang, Ran Wang, Shilin Liu and Xiaoying Gu
Energies 2022, 15(20), 7700; https://doi.org/10.3390/en15207700 - 18 Oct 2022
Cited by 3 | Viewed by 1337
Abstract
At present, establishing a multidimensional characteristic model of a boiler combustion system plays an important role in realizing its dynamic optimization and real-time control, thereby reducing environmental pollution and saving coal resources. However, the complexity of the boiler combustion process makes it difficult to model with traditional mathematical methods. In this paper, a broad learning system whose hyper-parameters are self-optimized by a sparrow search algorithm is proposed to model the NOx and SO2 emission concentrations and the thermal efficiency of a circulating fluidized bed boiler (CFBB). The broad learning system (BLS) is a novel neural network algorithm that shows good performance in multidimensional feature learning. However, the BLS has several hyper-parameters to be set over wide ranges, so the optimal hyper-parameter combination is difficult to determine. This paper uses a sparrow search algorithm (SSA) to select the optimal hyper-parameter combination of the broad learning system, termed SSA-BLS. To verify the effectiveness of SSA-BLS, ten benchmark regression datasets are applied. Experimental results show that SSA-BLS achieves good regression accuracy and model stability. Additionally, the proposed SSA-BLS is applied to model the combustion process parameters of a 330 MW circulating fluidized bed boiler. Experimental results reveal that SSA-BLS can establish accurate prediction models for thermal efficiency, NOx emission concentration, and SO2 emission concentration separately. Altogether, SSA-BLS is an effective modelling method.
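The SSA-BLS pairing can be pictured as a population-based search over the BLS's hyper-parameters. The sketch below is a heavily simplified, hypothetical illustration: the objective function is a stand-in for the cross-validated BLS regression error, the hyper-parameter names and ranges are assumptions, and the update rule only loosely mimics the producer/scrounger moves of the sparrow search algorithm.

```python
import random

def validation_error(n_feature_nodes, n_enhance_nodes, reg_lambda):
    # Stand-in objective: in the paper this would be the BLS's cross-validated
    # regression error for one hyper-parameter combination.
    return (n_feature_nodes - 20) ** 2 + (n_enhance_nodes - 150) ** 2 / 100 + reg_lambda

def sparrow_style_search(pop_size=20, iters=30, seed=0):
    rng = random.Random(seed)
    # Each individual is one hyper-parameter combination (assumed ranges).
    pop = [(rng.randint(5, 50), rng.randint(50, 500), rng.uniform(1e-6, 1e-1))
           for _ in range(pop_size)]
    best = min(pop, key=lambda p: validation_error(*p))
    for _ in range(iters):
        new_pop = []
        for f, e, lam in pop:
            # Individuals drift toward the current best with random perturbation,
            # a much-simplified version of the SSA update rules.
            bf, be, blam = best
            nf = max(5, min(50, f + rng.choice([-1, 1]) * rng.randint(0, abs(bf - f) + 1)))
            ne = max(50, min(500, e + rng.choice([-1, 1]) * rng.randint(0, abs(be - e) + 1)))
            nlam = min(1e-1, max(1e-6, lam + rng.gauss(0, 0.01)))
            new_pop.append((nf, ne, nlam))
        pop = new_pop
        cand = min(pop, key=lambda p: validation_error(*p))
        if validation_error(*cand) < validation_error(*best):
            best = cand  # elitism: keep the best combination seen so far
    return best
```

In the actual method, each `validation_error` call requires training a BLS, so keeping the population and iteration counts modest matters.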
13 pages, 2377 KiB  
Article
A Pattern-Recognizer Artificial Neural Network for the Prediction of New Crescent Visibility in Iraq
by Ziyad T. Allawi
Computation 2022, 10(10), 186; https://doi.org/10.3390/computation10100186 - 13 Oct 2022
Cited by 5 | Viewed by 1860
Abstract
Various theories have been proposed since the last century to predict the first sighting of a new crescent moon. None of them uses the concepts of machine and deep learning to process, interpret, and simulate patterns hidden in databases; many instead use interpolation and extrapolation techniques to identify sighting regions from such data. In this study, a pattern-recognizer artificial neural network was trained to distinguish between visibility regions. Essential parameters of crescent moon sighting were collected from moon-sighting datasets and used to build an intelligent pattern recognition system that predicts crescent sighting conditions. The proposed ANN learned the datasets with an accuracy of more than 72% compared to the actual observational results. The ANN simulation gives clear insight into three crescent moon visibility regions: invisible (I), probably visible (P), and certainly visible (V). The proposed ANN is suitable for building lunar calendars, and it was used to build a four-year calendar on the horizon of Baghdad, which was then compared with the official Hijri calendar in Iraq.
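The three-region prediction task can be illustrated, without the paper's ANN, by the simplest possible pattern recogniser over the same I/P/V classes: a nearest-centroid classifier. The two features and all sample values below are hypothetical stand-ins for the sighting parameters the paper collects.

```python
def fit_centroids(samples, labels):
    # Group training samples by visibility class and average each feature.
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in groups.items()}

def classify(centroids, x):
    # Assign the class whose centroid is nearest in feature space.
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))

# Hypothetical two-feature samples (e.g. arc of vision, crescent width),
# two per visibility class.
samples = [(2.0, 0.1), (3.0, 0.2), (8.0, 0.5), (9.0, 0.6), (14.0, 0.9), (15.0, 1.0)]
labels = ["I", "I", "P", "P", "V", "V"]
centroids = fit_centroids(samples, labels)
```

A trained ANN replaces the centroid rule with a learned, non-linear decision boundary, which is what lets it exceed the 72% accuracy reported against observations.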
22 pages, 4366 KiB  
Article
Shear Strength Prediction Model for RC Exterior Joints Using Gene Expression Programming
by Moiz Tariq, Azam Khan and Asad Ullah
Materials 2022, 15(20), 7076; https://doi.org/10.3390/ma15207076 - 12 Oct 2022
Viewed by 1423
Abstract
Predictive models were developed to effectively estimate the shear strength of RC exterior joints using gene expression programming (GEP). Two separate models are proposed for the exterior joints: the first with shear reinforcement and the second without. Experimental results for the relevant input parameters from 253 tests were extracted from the literature to provide the knowledge base for GEP. The database was further divided into two portions: 152 exterior joint experiments with joint transverse reinforcement and 101 unreinforced joint specimens. Moreover, the effects of different material and geometric factors (usually ignored in the available models) were incorporated into the proposed models. These factors are beam and column geometries, concrete and steel material properties, longitudinal and shear reinforcement, and column axial loads. Statistical analysis and comparisons with previously proposed analytical and empirical models indicate a high degree of accuracy of the proposed models, rendering them ideal for practical application.
11 pages, 2520 KiB  
Article
Analysis of Vulnerability on Weighted Power Networks under Line Breakdowns
by Lixin Yang, Ziyu Gu, Yuanchen Dang and Peiyan He
Entropy 2022, 24(10), 1449; https://doi.org/10.3390/e24101449 - 11 Oct 2022
Cited by 3 | Viewed by 1183
Abstract
Vulnerability is a major concern for power networks, as malicious attacks can trigger cascading failures and large blackouts. The robustness of power networks against line failure has attracted interest over the past several years, but unweighted models cannot cover the weighted situations found in the real world. This paper investigates the vulnerability of weighted power networks. Firstly, we propose a more practical capacity model to investigate the cascading failure of weighted power networks under different attack strategies. Results show that a smaller threshold of the capacity parameter increases the vulnerability of weighted power networks. Furthermore, a weighted electrical cyber-physical interdependent network is developed to study the vulnerability and failure dynamics of the entire power network. We perform simulations on the IEEE 118-bus case to evaluate the vulnerability under various coupling schemes and different attack strategies. Simulation results show that heavier loads increase the likelihood of blackouts and that different coupling strategies play a crucial role in cascading failure performance.
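The paper's "more practical capacity model" is not reproduced here, but the generic load-capacity cascade mechanism such studies build on can be sketched in a few lines. In this simplified, hypothetical version, each line's capacity is (1 + alpha) times its initial load (a Motter-Lai-style rule, with alpha playing the role of the capacity parameter), and a failed line sheds its load evenly onto surviving adjacent lines, a deliberate simplification of real power-flow redistribution.

```python
def cascade(loads, alpha, initial_failure, adjacency):
    # Capacity rule: each line tolerates (1 + alpha) times its initial load.
    capacity = {e: (1 + alpha) * load for e, load in loads.items()}
    loads = dict(loads)          # work on a copy
    failed = {initial_failure}
    frontier = [initial_failure]
    while frontier:
        next_frontier = []
        for e in frontier:
            shed, loads[e] = loads[e], 0.0
            alive = [n for n in adjacency[e] if n not in failed]
            for n in alive:      # even redistribution -- a simplification
                loads[n] += shed / len(alive)
        for e, load in loads.items():
            if e not in failed and load > capacity[e]:
                failed.add(e)
                next_frontier.append(e)
        frontier = next_frontier
    return failed

# Three lines in a chain: failing "a" overloads "b", which overloads "c"
# when the capacity margin alpha is small.
loads = {"a": 1.0, "b": 1.0, "c": 1.0}
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

The smaller alpha is, the further the cascade spreads, which is the qualitative behaviour behind the abstract's finding that a smaller capacity-parameter threshold increases vulnerability.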
18 pages, 1788 KiB  
Article
An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning
by Zhiyu Chen, Jianyu Ding, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, Shangdong Liu and Yimu Ji
Entropy 2022, 24(10), 1377; https://doi.org/10.3390/e24101377 - 27 Sep 2022
Cited by 2 | Viewed by 1401
Abstract
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic, given the hidden internal nature of deep neural networks, and have become a critical academic emphasis in the current security field. However, current black-box attack methods still have shortcomings that leave query information incompletely utilized. Our research, based on the recently proposed Simulator Attack, proves the correctness and usability of feature-layer information in a simulator model obtained by meta-learning for the first time. Then, we propose an optimized Simulator Attack+ based on this discovery. The optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module that provides a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed, improving query efficiency while maintaining attack performance.
15 pages, 2387 KiB  
Article
Dynamic Programming BN Structure Learning Algorithm Integrating Double Constraints under Small Sample Condition
by Zhigang Lv, Yiwei Chen, Ruohai Di, Hongxi Wang, Xiaojing Sun, Chuchao He and Xiaoyan Li
Entropy 2022, 24(10), 1354; https://doi.org/10.3390/e24101354 - 24 Sep 2022
Viewed by 1182
Abstract
The Bayesian network (BN) structure learning algorithm based on dynamic programming can obtain globally optimal solutions. However, when the sample cannot fully contain the information of the real structure, especially when the sample size is small, the obtained structure is inaccurate. Therefore, this paper studies the planning mode and connotation of dynamic programming, restricts its process with edge and path constraints, and proposes a dynamic programming BN structure learning algorithm with double constraints for small-sample conditions. The algorithm uses the double constraints to limit the planning process of dynamic programming, reducing the planning space, and to limit the selection of the optimal parent node, ensuring that the optimal structure conforms to prior knowledge. Finally, methods with and without integrated prior knowledge are simulated and compared. The simulation results verify the effectiveness of the proposed method and show that integrating prior knowledge can significantly improve the efficiency and accuracy of BN structure learning.
26 pages, 15176 KiB  
Review
Optimization-Based High-Frequency Circuit Miniaturization through Implicit and Explicit Constraint Handling: Recent Advances
by Anna Pietrenko-Dabrowska, Slawomir Koziel and Marzieh Mahrokh
Energies 2022, 15(19), 6955; https://doi.org/10.3390/en15196955 - 22 Sep 2022
Cited by 2 | Viewed by 1109
Abstract
Miniaturization trends in high-frequency electronics have led to accommodation challenges in the integration of the corresponding components. Size reduction thereof has become a practical necessity. At the same time, the increasing performance demands imposed on electronic systems remain in conflict with component miniaturization. On the practical side, the challenges related to handling design constraints are aggravated by the high cost of system evaluation, normally requiring full-wave electromagnetic (EM) analysis. Some of these issues can be alleviated by implicit constraint handling using the penalty function approach. Yet, its performance depends on the arrangement of the penalty factors, necessitating a costly trial-and-error procedure to identify their optimum setup. A workaround is offered by the recently proposed algorithms with automatic adaptation of the penalty factors using different adjustment schemes. However, these intricate strategies require a continuous problem-dependent adaptation of the penalty function throughout the entire optimization process. Alternative methodologies have been proposed by taking an explicit approach to handling the inequality constraints, along with correction-based control over equality conditions, the combination of which proves demonstrably competitive for some miniaturization tasks. Nevertheless, optimization-based miniaturization, whether using implicit or explicit constraint handling, remains a computationally expensive task. A reliable way of reducing the aforementioned costs is the incorporation of multi-resolution EM fidelity models into the miniaturization procedure. Therein, the principal operation is based on the simultaneous monitoring of factors such as the quality of constraint satisfaction and the algorithm's convergence status. This paper provides an overview of the abovementioned size-reduction algorithms, in which theoretical considerations are illustrated using a number of antenna and microwave circuit case studies.
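The penalty function approach the review discusses can be summarized in one formula: minimize circuit size plus weighted penalties for violated constraints. The sketch below is a minimal illustration of that idea; the "size" and constraint functions are toy stand-ins (not any real EM model), and choosing the penalty factors is exactly the part the adaptive schemes surveyed above automate.

```python
def penalized_objective(x, size, constraints, betas):
    # Implicit constraint handling: size(x) plus a quadratic penalty for each
    # violated constraint g_k(x) <= 0, weighted by penalty factor beta_k.
    value = size(x)
    for g, beta in zip(constraints, betas):
        value += beta * max(0.0, g(x)) ** 2
    return value

# Toy stand-ins: "size" is the footprint of a two-parameter layout, and the
# constraint demands x[0] + x[1] >= 1.5 (e.g. a hypothetical performance floor).
size = lambda x: x[0] * x[1]
g1 = lambda x: 1.5 - x[0] - x[1]
```

A feasible design pays no penalty and is scored purely on size; an infeasible one is pushed back toward the feasible region with a strength set by beta, which is why a poorly tuned beta either stalls miniaturization (too large) or yields constraint-violating designs (too small).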
11 pages, 1459 KiB  
Article
Sensor Fusion for Occupancy Estimation: A Study Using Multiple Lecture Rooms in a Complex Building
by Cédric Roussel, Klaus Böhm and Pascal Neis
Mach. Learn. Knowl. Extr. 2022, 4(3), 803-813; https://doi.org/10.3390/make4030039 - 16 Sep 2022
Cited by 2 | Viewed by 2008
Abstract
This paper applies various machine learning methods to explore whether combining multiple sensors improves the quality of room occupancy estimation. Reliable occupancy estimation can help in many different applications; for the containment of the SARS-CoV-2 virus in particular, room occupancy is a major factor. The estimation can benefit visitor management systems in real time but can also inform predictive room reservation strategies. Using different terminal and non-terminal sensors in premises of varying sizes, this paper estimates room occupancy. In the process, the proposed models are trained with different combinations of rooms in the training and testing datasets to examine distinctions in the infrastructure of the considered building. The results indicate that the estimation benefits from a combination of different sensors. Additionally, it is found that a model should be trained with data from every room in a building, as it cannot be transferred to other rooms.
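At its simplest, sensor fusion of this kind means combining per-sensor occupancy estimates into one number. The paper trains ML models on combined sensor features, which subsumes this; the sketch below is only the baseline weighted-average fusion, and the sensor names and weights are purely illustrative.

```python
def fused_estimate(estimates, weights):
    # Weighted average of per-sensor occupancy estimates. Only sensors present
    # in `estimates` contribute, so a dropped-out sensor degrades gracefully.
    total = sum(weights[s] for s in estimates)
    return sum(weights[s] * e for s, e in estimates.items()) / total

# Hypothetical readings: a CO2-based estimate and a Wi-Fi-device-count estimate,
# with the Wi-Fi sensor trusted three times as much.
fused = fused_estimate({"co2": 10.0, "wifi": 20.0}, {"co2": 1.0, "wifi": 3.0})
```

A learned model replaces the fixed weights with room- and sensor-dependent behaviour, which is precisely why the paper finds models do not transfer between rooms.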
22 pages, 4992 KiB  
Article
A Period-Based Neural Network Algorithm for Predicting Building Energy Consumption of District Heating
by Zhengchao Xie, Xiao Wang, Lijun Zheng, Hao Chang and Fei Wang
Energies 2022, 15(17), 6338; https://doi.org/10.3390/en15176338 - 30 Aug 2022
Viewed by 1284
Abstract
Northern China is vigorously promoting cogeneration and clean heating technologies. The accurate prediction of building energy consumption is the basis for heating regulation. In this paper, the daily, weekly, and annual periods of building energy consumption are determined by Fourier transformation. Accordingly, a period-based neural network (PBNN) is proposed to predict building energy consumption. The main innovation of PBNN is the introduction of a new data structure, which is a time-discontinuous sliding window. The sliding window consists of the past 24 h, 24 h for the same period last week, and 24 h for the same period the previous year. When predicting the building energy consumption for the next 1 h, 12 h, and 24 h, the prediction errors of the PBNN are 2.30%, 3.47%, and 3.66% lower than those of the traditional sliding window PBNN (TSW-PBNN), respectively. The training time of PBNN is approximately half that of TSW-PBNN. The time-discontinuous sliding window reduces the energy consumption prediction error and neural network model training time.
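The time-discontinuous sliding window can be made concrete as an index set over an hourly series. The sketch below is one plausible reading of "the same period": the 24 hours ending one week, and one year, before the prediction hour. The exact alignment and the year-length handling in the paper may differ, and leap years are ignored here.

```python
HOURS_PER_WEEK = 24 * 7
HOURS_PER_YEAR = 24 * 365   # leap years ignored in this simplified sketch

def window_indices(t):
    # Time-discontinuous sliding window for predicting hour t: the past 24 h,
    # plus the corresponding 24 h one week earlier and one year earlier.
    past_day = list(range(t - 24, t))
    same_last_week = list(range(t - HOURS_PER_WEEK - 24, t - HOURS_PER_WEEK))
    same_last_year = list(range(t - HOURS_PER_YEAR - 24, t - HOURS_PER_YEAR))
    return same_last_year + same_last_week + past_day
```

The window stays at 72 input hours regardless of how far the periodicities reach back, which is why it can carry daily, weekly, and annual structure without the long contiguous history (and the training cost) a traditional sliding window would need.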
18 pages, 1633 KiB  
Article
Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization
by Shengxiang Hu, Bofeng Zhang, Hehe Lv, Furong Chang, Chenyang Zhou, Liangrui Wu and Guobing Zou
Entropy 2022, 24(9), 1213; https://doi.org/10.3390/e24091213 - 30 Aug 2022
Viewed by 1283
Abstract
Network data analysis is a crucial method for mining complicated object interactions. In recent years, random walk and neural-language-model-based network representation learning (NRL) approaches have been widely used for network data analysis. However, these NRL approaches suffer from the following deficiencies: firstly, because the random walk procedure is based on symmetric node similarity and fixed probability distribution, the sampled vertices’ sequences may lose local community structure information; secondly, because the feature extraction capacity of the shallow neural language model is limited, they can only extract the local structural features of networks; and thirdly, these approaches require specially designed mechanisms for different downstream tasks to integrate vertex attributes of various types. We conducted an in-depth investigation to address the aforementioned issues and propose a novel general NRL framework called dynamic structure and vertex attribute fusion network embedding, which firstly defines an asymmetric similarity and h-hop dynamic random walk strategy to guide the random walk process to preserve the network’s local community structure in walked vertex sequences. Next, we train a self-attention-based sequence prediction model on the walked vertex sequences to simultaneously learn the vertices’ local and global structural features. Finally, we introduce an attributes-driven Laplacian space optimization to converge the process of structural feature extraction and attribute feature extraction. The proposed approach is exhaustively evaluated by means of node visualization and classification on multiple benchmark datasets, and achieves superior results compared to baseline approaches.
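The asymmetric-similarity-guided walk can be sketched generically. The similarity below (shared neighbours normalised by the source vertex's degree only, so sim(u, v) ≠ sim(v, u)) and the step rule are illustrative assumptions, not the paper's exact h-hop definition; the point is that biasing each step by an asymmetric similarity keeps walks inside local communities.

```python
import random

def similarity(graph, u, v):
    # Asymmetric: normalised by the degree of the *source* vertex only.
    return len(graph[u] & graph[v]) / len(graph[u])

def dynamic_walk(graph, start, length, rng):
    # Each step favours neighbours that share more of the current vertex's
    # neighbourhood, so the walk tends to stay within its local community.
    walk = [start]
    while len(walk) < length:
        u = walk[-1]
        nbrs = sorted(graph[u])
        weights = [1.0 + similarity(graph, u, v) for v in nbrs]
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

# Tiny undirected graph as adjacency sets: a triangle {0,1,2} plus pendant 3.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
walk = dynamic_walk(graph, 0, 10, random.Random(7))
```

The walked sequences then play the role of "sentences" for the downstream sequence model (a self-attention predictor in the paper, rather than a shallow neural language model).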