Evolutionary Computation 2022

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 29601


Special Issue Information

Dear Colleagues,

Evolutionary computation (EC) is a family of population-based, trial-and-error algorithms for global optimization, inspired by biological evolution and metaheuristic or stochastic in character. In EC, each individual has only a simple structure and function; nevertheless, a system composed of many such individuals exhibits emergence and can address difficult real-world problems that no single individual could solve alone. Over recent decades, EC methods have been successfully applied to complex and time-consuming problems, and EC remains a topic of interest among researchers in various fields of science and engineering. The most popular EC paradigms are the genetic algorithm, ant colony optimization, and particle swarm optimization. In general, EC has been theoretically and experimentally shown to possess numerous significant properties, e.g., reasoning with vague and/or ambiguous data, adaptation to dynamic and uncertain environments, and learning from noisy and/or incomplete information.
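As a point of reference for the paradigms mentioned above, the following minimal sketch illustrates the population-based trial-and-error loop that most EC methods share: random variation followed by selection against an objective. The sphere objective, population size, and Gaussian mutation are arbitrary illustrative choices and do not correspond to any specific algorithm in this Issue.

```python
import numpy as np

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return float(np.sum(x ** 2))

def simple_ec(objective, dim=10, pop_size=30, generations=200, sigma=0.3, seed=0):
    """Generic (mu + lambda)-style evolutionary loop: mutate, evaluate, keep the best."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))             # random initial population
    fitness = np.array([objective(ind) for ind in pop])
    for _ in range(generations):
        offspring = pop + sigma * rng.standard_normal(pop.shape)   # Gaussian mutation (the "trial")
        off_fit = np.array([objective(ind) for ind in offspring])
        merged = np.vstack([pop, offspring])                        # (mu + lambda) pool
        merged_fit = np.concatenate([fitness, off_fit])
        survivors = np.argsort(merged_fit)[:pop_size]               # selection (the "error" feedback)
        pop, fitness = merged[survivors], merged_fit[survivors]
    return pop[0], fitness[0]

if __name__ == "__main__":
    best_x, best_f = simple_ec(sphere)
    print("best objective value:", best_f)
```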

The aim of this Special Issue is to compile the latest theory and applications in the field of EC. Submissions should be original and unpublished, and present novel in-depth fundamental research contributions, either from a methodological perspective or from an application point of view. In general, we are soliciting contributions on (but not limited to) the following topics:

  • Improvements of traditional EC methods (e.g., genetic algorithm, differential evolution, ant colony optimization and particle swarm optimization)
  • Recent development of EC methods (e.g., biogeography-based optimization, krill herd (KH) algorithm, monarch butterfly optimization (MBO), earthworm optimization algorithm (EWA), elephant herding optimization (EHO), moth search (MS) algorithm, rhino herd (RH) algorithm)
  • Theoretical studies of EC algorithms using various techniques (e.g., Markov chains, dynamical systems, complex systems/networks, and martingales)
  • Application of EC methods (e.g., scheduling, data mining, machine learning, reliability, planning, task assignment problem, IIR filter design, traveling salesman problem, optimization under dynamic and uncertain environments).

Prof. Dr. Gaige Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (20 papers)


Research

27 pages, 11437 KiB  
Article
Data-Driven GWO-BRNN-Based SOH Estimation of Lithium-Ion Batteries in EVs for Their Prognostics and Health Management
by Muhammad Waseem, Jingyuan Huang, Chak-Nam Wong and C. K. M. Lee
Mathematics 2023, 11(20), 4263; https://doi.org/10.3390/math11204263 - 12 Oct 2023
Cited by 2 | Viewed by 1033
Abstract
Due to the complexity of the aging process, maintaining the state of health (SOH) of lithium-ion batteries is a significant challenge that must be overcome. This study presents a new SOH estimation approach based on hybrid Grey Wolf Optimization (GWO) with Bayesian Regularized Neural Networks (BRNN). The approach utilizes health features (HFs) extracted from the battery charging-discharging process. Selected external voltage and current characteristics from the charging-discharging process serve as HFs to explain the aging mechanism of the batteries. The Pearson correlation coefficient, the Kendall rank correlation coefficient, and the Spearman rank correlation coefficient are then employed to select HFs that have a high degree of association with battery capacity. In this paper, GWO is introduced as a method for optimizing and selecting appropriate hyperparameters for BRNN. GWO-BRNN updates the population through mutation, crossover, and screening operations to obtain the globally optimal solution and improve the ability to conduct global searches. The validity of the proposed technique was assessed by examining the NASA battery dataset. Based on the simulation results, the presented approach demonstrates a higher level of accuracy. The proposed GWO-BRNN-based SOH estimation achieves estimate assessment indicators of less than 1%, significantly lower than the estimated results obtained by existing approaches. The proposed framework helps develop electric vehicle battery prognostics and health management for the widespread use of eco-friendly and reliable electric transportation. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
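For readers unfamiliar with the grey wolf optimizer used in this work, the sketch below shows the standard GWO position-update loop applied to a generic black-box loss. The surrogate objective, bounds, and two-dimensional "hyperparameter" vector are placeholder assumptions for illustration; the sketch does not reproduce the paper's BRNN training objective or the NASA dataset pipeline.

```python
import numpy as np

def gwo(loss, bounds, n_wolves=20, n_iter=100, seed=0):
    """Standard grey wolf optimizer: wolves move toward the alpha/beta/delta leaders."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.array([loss(w) for w in wolves])
    for t in range(n_iter):
        leaders = wolves[np.argsort(fitness)[:3]]        # alpha, beta, delta
        a = 2.0 * (1.0 - t / n_iter)                     # exploration factor decaying towards 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0        # average pull toward the three leaders
            wolves[i] = np.clip(new_pos, lo, hi)
            fitness[i] = loss(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Hypothetical usage: minimize a surrogate validation error over two hyperparameters.
if __name__ == "__main__":
    surrogate = lambda h: (h[0] - 0.01) ** 2 + (h[1] - 50.0) ** 2 / 2500.0
    best_h, best_err = gwo(surrogate, bounds=[(1e-4, 0.1), (5.0, 200.0)])
    print(best_h, best_err)
```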

20 pages, 708 KiB  
Article
Deep Reinforcement Learning for the Agile Earth Observation Satellite Scheduling Problem
by Jie Chun, Wenyuan Yang, Xiaolu Liu, Guohua Wu, Lei He and Lining Xing
Mathematics 2023, 11(19), 4059; https://doi.org/10.3390/math11194059 - 25 Sep 2023
Cited by 2 | Viewed by 944
Abstract
The agile earth observation satellite scheduling problem (AEOSSP) is a combinatorial optimization problem with time-dependent constraints. Recently, many construction heuristics and meta-heuristics have been proposed; however, existing methods cannot balance the requirements of efficiency and timeliness. In this paper, we propose a graph attention network-based decision neural network (GDNN) to solve the AEOSSP. Specifically, we first represent the task and time-dependent attitude transition constraints by a graph. We then describe the problem as a Markov decision process and perform feature engineering. On this basis, we design a GDNN to guide the construction of the solution sequence and train it with proximal policy optimization (PPO). Experimental results show that the proposed method outperforms construction heuristics at scheduling profit by at least 45%. The proposed method can also calculate the approximate profits of the state-of-the-art method with an error of less than 7% and reduce scheduling time markedly. Finally, we demonstrate the scalability of the proposed method. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

16 pages, 429 KiB  
Article
Multi-Objective Gray Wolf Optimizer with Cost-Sensitive Feature Selection for Predicting Students’ Academic Performance in College English
by Liya Yue, Pei Hu, Shu-Chuan Chu and Jeng-Shyang Pan
Mathematics 2023, 11(15), 3396; https://doi.org/10.3390/math11153396 - 03 Aug 2023
Cited by 2 | Viewed by 660
Abstract
Feature selection is a widely utilized technique in educational data mining that aims to simplify and reduce the computational burden associated with data analysis. However, previous studies have overlooked the high costs involved in acquiring certain types of educational data. In this study, we investigate the application of a multi-objective gray wolf optimizer (GWO) with cost-sensitive feature selection to predict students’ academic performance in college English, while minimizing both prediction error and feature cost. To improve the performance of the multi-objective binary GWO, a novel position update method and a selection mechanism for α, β, and δ are proposed. Additionally, the adaptive mutation of Pareto optimal solutions improves convergence and avoids falling into local traps. The repairing technique of duplicate solutions expands population diversity and reduces feature cost. Experiments using UCI datasets demonstrate that the proposed algorithm outperforms existing state-of-the-art algorithms in hypervolume (HV), inverted generational distance (IGD), and Pareto optimal solutions. Finally, when predicting the academic performance of students in college English, the superiority of the proposed algorithm is again confirmed, as well as its acquisition of key features that impact cost-sensitive feature selection. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
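As a rough illustration of the cost-sensitive, bi-objective evaluation described above, each candidate feature subset can be scored by both its prediction error and its total feature-acquisition cost. The dataset, classifier, and randomly drawn per-feature costs below are stand-ins, not the paper's college-English data or its multi-objective GWO.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate_mask(mask, X, y, feature_costs):
    """Return (prediction error, total feature cost) for a binary feature-selection mask."""
    if not mask.any():                                   # an empty selection is invalid
        return 1.0, 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    return 1.0 - acc, float(feature_costs[mask].sum())

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(1)
costs = rng.uniform(1.0, 10.0, size=X.shape[1])          # hypothetical per-feature acquisition costs
mask = rng.random(X.shape[1]) < 0.3                      # one candidate solution from a population
print(evaluate_mask(mask, X, y, costs))
```

A Pareto-based optimizer such as the paper's multi-objective binary GWO would keep the non-dominated (error, cost) pairs rather than collapsing the two objectives into one score.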

30 pages, 1240 KiB  
Article
Differential Evolution with Group-Based Competitive Control Parameter Setting for Numerical Optimization
by Mengnan Tian, Yanghan Gao, Xingshi He, Qingqing Zhang and Yanhui Meng
Mathematics 2023, 11(15), 3355; https://doi.org/10.3390/math11153355 - 31 Jul 2023
Cited by 2 | Viewed by 720
Abstract
Differential evolution (DE) is one of the most popular and widely used optimizers among the community of evolutionary computation. Despite numerous works having been conducted on the improvement of DE performance, there are still some defects, such as premature convergence and stagnation. In order to alleviate them, this paper presents a novel DE variant by designing a new mutation operator (named “DE/current-to-pbest_id/1”) and a new control parameter setting. In the new operator, the fitness value of the individual is adopted to determine the chosen scope of its guider among the population. Meanwhile, a group-based competitive control parameter setting is presented to ensure the various search potentials of the population and the adaptivity of the algorithm. In this setting, the whole population is randomly divided into multiple equivalent groups, the control parameters for each group are independently generated based on its location information, and the worst location information among all groups is competitively updated with the current successful parameters. Moreover, a piecewise population size reduction mechanism is further devised to enhance the exploration and exploitation of the algorithm at the early and later evolution stages, respectively. Differing from previous DE versions, the proposed method adaptively adjusts the search capability of each individual, simultaneously utilizes multiple pieces of successful parameter information to generate the control parameters, and has different speeds to reduce the population size at different search stages. It can thus achieve a good trade-off between exploration and exploitation. Finally, the performance of the proposed algorithm is evaluated by comparison with five well-known DE variants and five typical non-DE algorithms on the IEEE CEC 2017 test suite. Numerical results show that the proposed method is a more promising optimizer. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
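The paper's operator builds on the classical DE/current-to-pbest/1 mutation, with the added twist that an individual's fitness determines the pool from which its guider is drawn. For context, the sketch below shows only the classical mutation; F, p, and the random population are arbitrary illustrative values, not the paper's group-based parameter setting.

```python
import numpy as np

def de_current_to_pbest_1(pop, fitness, F=0.5, p=0.1, rng=None):
    """Classical DE/current-to-pbest/1: v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop.shape
    n_pbest = max(1, int(p * n))
    pbest_idx = np.argsort(fitness)[:n_pbest]            # indices of the best p-fraction
    mutants = np.empty_like(pop)
    for i in range(n):
        pbest = pop[rng.choice(pbest_idx)]
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        mutants[i] = pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])
    return mutants

# Hypothetical usage on a random population of 10 five-dimensional vectors.
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(10, 5))
fit = (pop ** 2).sum(axis=1)
print(de_current_to_pbest_1(pop, fit, rng=rng).shape)
```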

36 pages, 2863 KiB  
Article
Optimizing a Multi-Layer Perceptron Based on an Improved Gray Wolf Algorithm to Identify Plant Diseases
by Chunguang Bi, Qiaoyun Tian, He Chen, Xianqiu Meng, Huan Wang, Wei Liu and Jianhua Jiang
Mathematics 2023, 11(15), 3312; https://doi.org/10.3390/math11153312 - 27 Jul 2023
Cited by 4 | Viewed by 1370
Abstract
Metaheuristic optimization algorithms play a crucial role in optimization problems. However, the traditional identification methods have the following problems: (1) difficulties in nonlinear data processing; (2) high error rates caused by local stagnation; and (3) low classification rates resulting from premature convergence. This paper proposes a variant of the gray wolf optimization algorithm (GWO) with chaotic disturbance, candidate migration, and attacking mechanisms, named the enhanced gray wolf optimizer (EGWO), to address premature convergence and local stagnation. The performance of the EGWO was tested on IEEE CEC 2014 benchmark functions, and the results of the EGWO were compared with the performance of three GWO variants, five traditional and popular algorithms, and six recent algorithms. In addition, the EGWO was used to optimize the weights and biases of a multi-layer perceptron (MLP), yielding an EGWO-MLP disease identification model, which was verified on UCI datasets including the Tic-Tac-Toe, Heart, XOR, and Balloon datasets. The experimental results demonstrate that the proposed EGWO-MLP model can effectively avoid local optima and premature convergence and provide a quasi-optimal solution for the optimization problem. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

18 pages, 1406 KiB  
Article
Performance of an Adaptive Optimization Paradigm for Optimal Operation of a Mono-Switch Class E Induction Heating Application
by Saddam Aziz, Cheung-Ming Lai and Ka Hong Loo
Mathematics 2023, 11(13), 3020; https://doi.org/10.3390/math11133020 - 07 Jul 2023
Viewed by 761
Abstract
The progress of technology involves the continuous improvement of current machines to attain higher levels of energy efficiency, operational dependability, and effectiveness. Induction heating is a thermal process that involves the heating of materials that possess electrical conductivity, such as metals. This technique finds diverse applications, including induction welding and induction cooking pots. The optimization of the operating point of the inverter discussed in this study necessitated the resolution of a pair of non-convex mathematical models to enhance the energy efficiency of the inverters and mitigate switching losses. In order to determine the most advantageous operational location, a sophisticated surface optimization was conducted, requiring the implementation of a sophisticated optimization methodology, such as the adaptive black widow optimization algorithm. The methodology draws inspiration from the resourceful behavior of female black widow spiders in their quest for nourishment. Its straightforward control variable design and limited computational complexity make it a feasible option for addressing multi-dimensional engineering problems within confined constraints. The primary objective of utilizing the adaptive black widow optimization algorithm in the context of induction heating is to optimize the pertinent process parameters, including power level, frequency, coil design, and material properties, with the ultimate goal of efficiently achieving the desired heating outcomes. The utilization of the adaptive black widow optimization algorithm presents a versatile and robust methodology for addressing optimization problems in the field of induction heating. This is due to its capacity to effectively manage intricate, non-linear, and multi-faceted optimization predicaments. The adaptive black widow optimization algorithm has been modified in order to enhance the optimization process and guarantee the identification of the global optimum. The empirical findings derived from an authentic inverter setup were compared with the hypothetical results. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

18 pages, 697 KiB  
Article
Evolutionary Optimization of Energy Consumption and Makespan of Workflow Execution in Clouds
by Lining Xing, Jun Li, Zhaoquan Cai and Feng Hou
Mathematics 2023, 11(9), 2126; https://doi.org/10.3390/math11092126 - 30 Apr 2023
Cited by 1 | Viewed by 1112
Abstract
Making sound trade-offs between the energy consumption and the makespan of workflow execution in cloud platforms remains a significant but challenging issue. So far, some works balance workflows’ energy consumption and makespan by adopting multi-objective evolutionary algorithms, but they often regard this as a black-box problem, resulting in the low efficiency of the evolutionary search. To compensate for the shortcomings of existing works, this paper mathematically formulates the cloud workflow scheduling for an infrastructure-as-a-service (IaaS) platform as a multi-objective optimization problem. Then, this paper tailors a knowledge-driven energy- and makespan-aware workflow scheduling algorithm, namely EMWSA. Specifically, a critical task adjustment-based local search strategy is proposed to intelligently adjust some critical tasks to the same resource of their successor tasks, striving to simultaneously reduce workflows’ energy consumption and makespan. Further, an idle gap reuse strategy is proposed to search the optimal energy consumption of each non-critical task without affecting the operation of other tasks, so as to further reduce energy consumption. Finally, in the context of real-world workflows and cloud platforms, we carry out comparative experiments to verify the superiority of the proposed EMWSA by significantly outperforming 4 representative baselines on 19 out of 20 workflow instances. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

32 pages, 792 KiB  
Article
NSGA-II/SDR-OLS: A Novel Large-Scale Many-Objective Optimization Method Using Opposition-Based Learning and Local Search
by Yingxin Zhang, Gaige Wang and Hongmei Wang
Mathematics 2023, 11(8), 1911; https://doi.org/10.3390/math11081911 - 18 Apr 2023
Cited by 1 | Viewed by 3089
Abstract
Recently, many-objective optimization problems (MaOPs) have become a hot issue of interest in academia and industry, and many more many-objective evolutionary algorithms (MaOEAs) have been proposed. NSGA-II/SDR (NSGA-II with a strengthened dominance relation) is an improved NSGA-II, created by replacing the traditional Pareto dominance relation with a new dominance relation, termed SDR; it outperforms the original algorithm on small-scale MaOPs with few decision variables but performs poorly on large-scale MaOPs. To address these problems, we added the following improvements to NSGA-II/SDR to obtain NSGA-II/SDR-OLS, which enables it to better achieve a balance between population convergence and diversity when solving large-scale MaOPs: (1) the opposition-based learning (OBL) strategy is introduced in the population initialization stage, and the final initial population is formed from the initial population and the opposition-based population, which improves the quality and convergence of the population; (2) the local search (LS) strategy is introduced to expand the diversity of populations by finding neighborhood solutions, in order to avoid solutions falling into local optima too early. NSGA-II/SDR-OLS is compared with the original algorithm on nine benchmark problems to verify the effectiveness of its improvements. Then, we compare our algorithm with six existing algorithms, namely, a promising region-based multi-objective evolutionary algorithm (PREA), a scalable small subpopulation-based covariance matrix adaptation evolution strategy (S3-CMA-ES), a decomposition-based multi-objective evolutionary algorithm guided by growing neural gas (DEA-GNG), a reference vector-guided evolutionary algorithm (RVEA), NSGA-II with a conflict-based partitioning strategy (NSGA-II-conflict), and a genetic algorithm using reference-point-based non-dominated sorting (NSGA-III). The proposed algorithm achieved the best results in the vast majority of test cases, indicating that it is strongly competitive. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
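The opposition-based learning idea used in improvement (1) can be sketched for a single-objective toy case as follows: each random individual x in [l, u] is paired with its opposite l + u - x, and the better half of the combined set forms the initial population. In the paper's many-objective setting the selection step is of course not a simple sort; the objective and bounds here are illustrative assumptions.

```python
import numpy as np

def obl_initialization(objective, pop_size, lower, upper, seed=0):
    """Opposition-based initialization: evaluate each random point and its opposite,
    then keep the best pop_size individuals from the combined set."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(pop_size, lower.size))
    opposite = lower + upper - pop                       # the opposite point of x is l + u - x
    merged = np.vstack([pop, opposite])
    scores = np.array([objective(x) for x in merged])
    return merged[np.argsort(scores)[:pop_size]]

init = obl_initialization(lambda x: float(np.sum(x ** 2)), 20, [-5] * 8, [5] * 8)
print(init.shape)
```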

28 pages, 4023 KiB  
Article
Hybrid Learning Moth Search Algorithm for Solving Multidimensional Knapsack Problems
by Yanhong Feng, Hongmei Wang, Zhaoquan Cai, Mingliang Li and Xi Li
Mathematics 2023, 11(8), 1811; https://doi.org/10.3390/math11081811 - 11 Apr 2023
Cited by 2 | Viewed by 1156
Abstract
The moth search algorithm (MS) is a relatively new metaheuristic optimization algorithm which mimics the phototaxis and Lévy flights of moths. Being an NP-hard problem, the 0–1 multidimensional knapsack problem (MKP) is a classical multi-constraint complicated combinatorial optimization problem with numerous applications. In this paper, we present a hybrid learning MS (HLMS) by incorporating two learning mechanisms, global-best harmony search (GHS) learning and Baldwinian learning for solving MKP. (1) GHS learning guides moth individuals to search for more valuable space and the potential dimensional learning uses the difference between two random dimensions to generate a large jump. (2) Baldwinian learning guides moth individuals to change the search space by making full use of the beneficial information of other individuals. Hence, GHS learning mainly provides global exploration and Baldwinian learning works for local exploitation. We demonstrate the competitiveness and effectiveness of the proposed HLMS by conducting extensive experiments on 87 benchmark instances. The experimental results show that the proposed HLMS has better or at least competitive performance against the original MS and some other state-of-the-art metaheuristic algorithms. In addition, the parameter sensitivity of Baldwinian learning is analyzed and two important components of HLMS are investigated to understand their impacts on the performance of the proposed algorithm. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

24 pages, 1341 KiB  
Article
Active Debris Removal Mission Planning Method Based on Machine Learning
by Yingjie Xu, Xiaolu Liu, Renjie He, Yuehe Zhu, Yahui Zuo and Lei He
Mathematics 2023, 11(6), 1419; https://doi.org/10.3390/math11061419 - 15 Mar 2023
Cited by 1 | Viewed by 1994
Abstract
To prevent the proliferation of space debris and stabilize the space environment, active debris removal (ADR) has attracted increasing public concern. Considering the complexity of space operations and the viability of ADR missions, it is necessary to schedule the ADR process in order to remove as much debris as possible. This paper presents an active debris removal mission planning problem, devoted to generating an optimal debris removal plan to guide the mission process. According to the problem characteristics, a two-layer time-dependent traveling salesman problem (TSP) mathematical model is established, involving the debris removal sequence planning and the transfer trajectory planning. Subsequently, two novel methods based on machine learning are proposed for the ADR mission planning problem, including a deep neural network (DNN)-based estimation method for approximating the optimal velocity increments of perturbed multiple-impulse rendezvous and a reinforcement learning (RL)-based method for optimizing the sequence of debris removal and rendezvous time. Experimental results on different simulation scenarios have verified the effectiveness and superiority of the proposed method, indicating good performance in solving the active debris removal mission planning problem. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

19 pages, 593 KiB  
Article
Handling Irregular Many-Objective Optimization Problems via Performing Local Searches on External Archives
by Lining Xing, Rui Wu, Jiaxing Chen and Jun Li
Mathematics 2023, 11(1), 10; https://doi.org/10.3390/math11010010 - 20 Dec 2022
Cited by 1 | Viewed by 1052
Abstract
Adaptive weight-vector adjustment has been explored to compensate for the weakness of decomposition-based evolutionary many-objective algorithms in solving problems with irregular Pareto-optimal fronts. One essential issue is that the distribution of previously visited solutions likely mismatches the irregular Pareto-optimal front, and the weight vectors are misled towards inappropriate regions. This observation motivated us to design a novel many-objective evolutionary algorithm by performing local searches on an external archive, namely, LSEA. Specifically, the LSEA contains a new selection mechanism without weight vectors to alleviate the adverse effects of inappropriate weight vectors, progressively improving both the convergence and diversity of the archive. The solutions in the archive also feed back into the weight-vector adjustment. Moreover, the LSEA selects a solution with good diversity but relatively poor convergence from the archive and then perturbs the decision variables of the selected solution one by one to search for solutions with better diversity and convergence. Finally, the LSEA is compared with five baseline algorithms in the context of 36 widely used benchmarks with irregular Pareto-optimal fronts. The comparison results demonstrate the competitive performance of the LSEA, as it outperforms the five baselines on 22 benchmarks with respect to the hypervolume metric. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

17 pages, 3723 KiB  
Article
Intelligent Generation of Cross Sections Using a Conditional Generative Adversarial Network and Application to Regional 3D Geological Modeling
by Xiangjin Ran, Linfu Xue, Xuejia Sang, Yao Pei and Yanyan Zhang
Mathematics 2022, 10(24), 4677; https://doi.org/10.3390/math10244677 - 09 Dec 2022
Cited by 4 | Viewed by 1154
Abstract
The cross section is the basic data for building 3D geological models. It is inefficient to draw a large number of cross sections to build an accurate model. This paper reports the use of multi-source and heterogeneous geological data, such as geological maps, gravity and aeromagnetic data, by a conditional generative adversarial network (CGAN) and implements an intelligent generation method of cross sections to overcome the problem of inefficient modeling data based on CGAN. Intelligent generation of cross sections and 3D geological modeling are carried out in three different areas in Liaoning Province. The results show that: (a) the accuracy of the proposed method is higher than the GAN and Variational AutoEncoder (VAE) models, achieving 87%, 45% and 68%, respectively; (b) the 3D geological model constructed by the generated cross sections in our study is consistent with manual creation in terms of stratum continuity and thickness. This study suggests that the proposed method is significant for surmounting the difficulty in data processing involved in regional 3D geological modeling. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

15 pages, 1759 KiB  
Article
Generation of a Synthetic Database for the Optical Response of One-Dimensional Photonic Crystals Using Genetic Algorithms
by Cesar Isaza, Ivan Alonso Lujan-Cabrera, Ely Karina Anaya Rivera, Jose Amilcar Rizzo Sierra, Jonny Paul Zavala De Paz and Cristian Felipe Ramirez-Gutierrez
Mathematics 2022, 10(23), 4484; https://doi.org/10.3390/math10234484 - 28 Nov 2022
Cited by 1 | Viewed by 1212
Abstract
This work proposes an optimization tool based on genetic algorithms for the inverse design of photonic crystals. Based on target reflectance, the algorithm generates a population of chromosomes where the genes represent the thickness of a layer of a photonic crystal. Each layer is independent of another. Therefore, the sequence obtained is a disordered configuration. In the genetic algorithm, two dielectric materials are first selected to generate the population. Throughout the simulation, the chromosomes are evaluated, crossed over, and mutated to find the best-fitted one based on an error function. The target reflectance was a perfect mirror in the visible region. As a result, it was found that obtaining photonic crystal configurations with a specific stop band with disordered arrangements is possible. The genetic information of the best-fitted individuals (layer sequence, optical response, and error) is stored in an h5 format. This method of generating artificial one-dimensional photonic crystal data can be used to train a neural network for solving the problem of the inverse design of any crystal with a specific optical response. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
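A rough sketch of the fitness evaluation such a GA needs: a chromosome of layer thicknesses is scored by the error between the stack's computed reflectance and a target spectrum. The normal-incidence transfer-matrix routine, refractive indices, wavelength range, and mean-squared-error metric below are generic textbook choices, not necessarily those used in the paper.

```python
import numpy as np

def reflectance(thicknesses, indices, wavelengths, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a 1D multilayer via the transfer-matrix method."""
    R = np.empty_like(wavelengths, dtype=float)
    for k, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for d, n in zip(thicknesses, indices):
            delta = 2 * np.pi * n * d / lam
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        B = M[0, 0] + M[0, 1] * n_sub
        C = M[1, 0] + M[1, 1] * n_sub
        r = (n_in * B - C) / (n_in * B + C)
        R[k] = abs(r) ** 2
    return R

def fitness(chromosome, indices, wavelengths, target):
    """GA fitness: mean squared error between the stack's reflectance and the target spectrum."""
    return float(np.mean((reflectance(chromosome, indices, wavelengths) - target) ** 2))

# Hypothetical usage: 20 alternating low/high-index layers, target = perfect visible mirror.
wl = np.linspace(400e-9, 700e-9, 100)
indices = [1.45, 2.3] * 10
chromosome = np.random.default_rng(0).uniform(50e-9, 150e-9, size=20)   # one gene per layer thickness
print(fitness(chromosome, indices, wl, target=np.ones_like(wl)))
```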

24 pages, 3327 KiB  
Article
An Exact Algorithm for Multi-Task Large-Scale Inter-Satellite Routing Problem with Time Windows and Capacity Constraints
by Jinming Liu, Guoting Zhang, Lining Xing, Weihua Qi and Yingwu Chen
Mathematics 2022, 10(21), 3969; https://doi.org/10.3390/math10213969 - 26 Oct 2022
Cited by 2 | Viewed by 1217
Abstract
In the context of a low-orbit mega constellation network, we consider the large-scale inter-satellite routing problem with time windows and capacity constraints (ISRPTWC) with the goal of minimizing the total consumption cost, including transmission, resource consumption, and other environmentally impacted costs. Initially, we develop an integer linear programming model for ISRPTWC. However, a difficult issue when solving ISRPTWC is how to deal with complex time window constraints and how to reduce congestion and meet transmission capacity. Along this line, we construct a three-dimensional time-space state network aiming to comprehensively enumerate the satellite network state at any moment in time and a task transmission route at any given time and further propose a time-discretized multi-commodity network flow model for the ISRPTWC. Then, we adopt a dynamic programming algorithm to solve the single-task ISRPTWC. By utilizing a Lagrangian relaxation algorithm, the primal multi-task routing problem is decomposed into a sequence of single-task routing subproblems, with Lagrangian multipliers for individual task route nodes and links being updated by a subgradient method. Notably, we devise a novel idea for constructing the upper bound of the ISRPTWC. Finally, a case study using illustrative and real-world mega constellation networks is performed to demonstrate the effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
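The Lagrangian relaxation with subgradient updates described above follows a standard pattern: relax the complicating constraints with multipliers, solve the resulting easier subproblem exactly, and move the multipliers in the direction of constraint violation. The toy single-constraint knapsack below illustrates only that pattern, not the paper's time-space network model.

```python
import numpy as np

def lagrangian_knapsack_bound(profits, weights, capacity, n_iter=100, step0=1.0):
    """Lagrangian relaxation of a 0-1 knapsack: relax the capacity constraint, solve the
    separable inner problem exactly, and update the multiplier by a projected subgradient step.
    Returns the best (smallest) upper bound found on the optimal profit."""
    lam, best_bound = 0.0, np.inf
    for k in range(1, n_iter + 1):
        reduced = profits - lam * weights               # per-item reduced profit
        x = (reduced > 0).astype(float)                 # inner maximization decomposes item by item
        bound = reduced @ x + lam * capacity            # L(lambda) is an upper bound on the optimum
        best_bound = min(best_bound, bound)
        violation = weights @ x - capacity              # subgradient: violation of the relaxed constraint
        lam = max(0.0, lam + (step0 / k) * violation)   # diminishing-step projected subgradient update
    return best_bound

rng = np.random.default_rng(0)
p = rng.integers(10, 100, 30).astype(float)
w = rng.integers(5, 40, 30).astype(float)
print(lagrangian_knapsack_bound(p, w, capacity=0.4 * w.sum()))
```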

21 pages, 6134 KiB  
Article
Modeling and Solving for Multi-Satellite Cooperative Task Allocation Problem Based on Genetic Programming Method
by Weihua Qi, Wenyuan Yang, Lining Xing and Feng Yao
Mathematics 2022, 10(19), 3608; https://doi.org/10.3390/math10193608 - 02 Oct 2022
Cited by 2 | Viewed by 1497
Abstract
The past decade has seen an increase in the number of satellites in orbit and in highly dynamic satellite requests, making the control by ground stations inefficient. The traditional management composed of ground planning with separate onboard execution is seriously lagging in response to dynamically incoming tasks. To meet the demand for the real-time response to emergent events, a multi-autonomous-satellite system with a central-distributed collaborative architecture was formulated by an integer programming model. Based on the structure, evolutionary rules were proposed to solve this problem by the use of sequence solution construction and a constructed heuristic method based on gene expression programming evolution. First, the features of the problem are extracted based on domain knowledge, then, the problem-solving rules are evolved by gene expression programming. The simulation results reflect that the evolutionary rule completely surpasses the three types of heuristic rules with adaptive mechanisms and achieves a solution effect close to meta-heuristic algorithms with a reasonably fast solving speed. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

22 pages, 2742 KiB  
Article
High-Order Sliding Mode Control for Three-Joint Rigid Manipulators Based on an Improved Particle Swarm Optimization Neural Network
by Jin Zhang, Wenjun Meng, Yufeng Yin, Zhengnan Li, Lidong Ma and Weiqiang Liang
Mathematics 2022, 10(19), 3418; https://doi.org/10.3390/math10193418 - 20 Sep 2022
Cited by 6 | Viewed by 1293
Abstract
This paper presents a control method for the problem of trajectory jitter and poor tracking performance of the end of a three-joint rigid manipulator. The control is based on a high-order particle swarm optimization algorithm with an improved sliding mode control neural network. Although the sliding mode variable structure control has a certain degree of robustness, because of its own switching characteristics, chattering can occur in the later stage of the trajectory tracking of the manipulator end. Hence, on the basis of the high-order sliding mode control, the homogeneous continuous control law and super-twisting adaptive algorithm were added to further improve the robustness of the system. The radial basis function neural network was used to compensate the errors in the modeling process, and an adaptive law was designed to update the weights of the middle layer of the neural network. Furthermore, an improved particle swarm optimization algorithm was established and applied to optimize the parameters of the neural network, which improved the trajectory tracking of the manipulator end. Finally, MATLAB simulation results indicated the validity and superiority of the proposed control method compared with other sliding mode control algorithms. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
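For context, the baseline global-best particle swarm optimization that improved variants such as the one in this paper start from can be sketched as follows. The inertia and acceleration coefficients and the toy objective are standard illustrative values, not the paper's settings for tuning the radial basis function network.

```python
import numpy as np

def pso(loss, dim, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best PSO; an 'improved' variant like the paper's would adapt w, c1, c2."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Hypothetical usage: tuning a 5-dimensional parameter vector.
best, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)), dim=5, bounds=(-10.0, 10.0))
print(best_f)
```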

27 pages, 2187 KiB  
Article
Survey of Lévy Flight-Based Metaheuristics for Optimization
by Juan Li, Qing An, Hong Lei, Qian Deng and Gai-Ge Wang
Mathematics 2022, 10(15), 2785; https://doi.org/10.3390/math10152785 - 05 Aug 2022
Cited by 17 | Viewed by 1890
Abstract
Lévy flight is a random walk mechanism which makes large jumps with a relatively high probability. The probability density distribution of Lévy flight is characterized by sharp peaks, asymmetry, and a heavy tail. Its movement pattern alternates between frequent short-distance jumps and occasional long-distance jumps, which helps the search jump out of local optima and expand the population search area. Metaheuristic algorithms are inspired by nature and applied to solve NP-hard problems. Lévy flight is used as an operator in the cuckoo search algorithm, monarch butterfly optimization, and the moth search algorithm. The superiority of Lévy flight-based metaheuristic algorithms has been demonstrated on many benchmark problems and in various application areas. A comprehensive survey of Lévy flight-based metaheuristic algorithms is conducted in this paper. The survey covers the following sections: a statistical analysis of Lévy flight, metaheuristic algorithms with a Lévy flight operator, and a classification of Lévy flight as used in metaheuristic algorithms. Future insights and development directions in the area of Lévy flight are also discussed. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
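In practice, Lévy-distributed step lengths are usually generated with Mantegna's algorithm, as sketched below. The stability index beta = 1.5 is a common but arbitrary choice, and how the resulting steps are scaled and applied is left to the host metaheuristic.

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(size, beta=1.5, rng=None):
    """Heavy-tailed Lévy-stable step lengths via Mantegna's algorithm (1 < beta <= 2)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# Mostly short steps with occasional very long jumps, the behaviour the survey describes.
steps = levy_steps(10000, rng=np.random.default_rng(0))
print(np.median(np.abs(steps)), np.max(np.abs(steps)))
```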

17 pages, 1366 KiB  
Article
Neural Network Algorithm with Dropout Using Elite Selection
by Yong Wang, Kunzhao Wang and Gaige Wang
Mathematics 2022, 10(11), 1827; https://doi.org/10.3390/math10111827 - 26 May 2022
Cited by 1 | Viewed by 1313
Abstract
A neural network algorithm is a meta-heuristic algorithm inspired by an artificial neural network, which has a strong global search ability and can be used to solve global optimization problems. However, a neural network algorithm sometimes shows the disadvantage of slow convergence speed when solving some complex problems. In order to improve the convergence speed, this paper proposes the neural network algorithm with dropout using elite selection. In the neural network algorithm with dropout using elite selection, the neural network algorithm is viewed from the perspective of an evolutionary algorithm. In the crossover phase, the dropout strategy in the neural network is introduced: a certain proportion of the individuals who do not perform well are dropped and they do not participate in the crossover process to ensure the outstanding performance of the population. Additionally, in the selection stage, a certain proportion of the individuals of the previous generation with the best performance are retained and directly enter the next generation. In order to verify the effectiveness of the improved strategy, the neural network algorithm with dropout using elite selection is used on 18 well-known benchmark functions. The experimental results show that the introduced dropout strategy improves the optimization performance of the neural network algorithm. Moreover, the neural network algorithm with dropout using elite selection is compared with other meta-heuristic algorithms to illustrate it is a powerful algorithm in solving optimization problems. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
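The two ideas described above, excluding a fraction of poorly performing individuals from crossover (the dropout analogy) and copying a fraction of elites unchanged into the next generation, can be sketched on a generic real-coded population as follows. The arithmetic crossover, rates, and toy fitness are illustrative assumptions and do not reproduce the neural network algorithm's own update rule.

```python
import numpy as np

def next_generation(pop, fitness, drop_rate=0.2, elite_rate=0.1, rng=None):
    """One generation: the worst-performing fraction is 'dropped out' of crossover,
    and a fraction of elites passes to the next generation unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop.shape
    order = np.argsort(fitness)                       # ascending: best individuals first (minimization)
    n_elite = max(1, int(elite_rate * n))
    n_parents = n - max(1, int(drop_rate * n))        # individuals allowed to act as parents
    elites = pop[order[:n_elite]]
    parents = pop[order[:n_parents]]
    children = []
    for _ in range(n - n_elite):
        a, b = parents[rng.integers(0, n_parents, size=2)]
        weights = rng.random(dim)
        children.append(weights * a + (1 - weights) * b)   # simple arithmetic crossover
    return np.vstack([elites, np.array(children)])

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(30, 10))
fit = (pop ** 2).sum(axis=1)
print(next_generation(pop, fit, rng=rng).shape)
```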

27 pages, 6337 KiB  
Article
Enhanced Brain Storm Optimization Algorithm Based on Modified Nelder–Mead and Elite Learning Mechanism
by Wei Li, Haonan Luo, Lei Wang, Qiaoyong Jiang and Qingzheng Xu
Mathematics 2022, 10(8), 1303; https://doi.org/10.3390/math10081303 - 14 Apr 2022
Cited by 6 | Viewed by 1455
Abstract
The brain storm optimization (BSO) algorithm is a popular swarm intelligence algorithm. A significant part of BSO is to divide the population into different clusters with a clustering strategy, and a blind disturbance operator is used to generate offspring. However, this mechanism easily leads to premature convergence because it lacks effective direction information. In this paper, an enhanced BSO algorithm based on a modified Nelder–Mead method and an elite learning mechanism (BSONME) is proposed to improve the performance of BSO. In the proposed BSONME algorithm, the modified Nelder–Mead method is used to explore the effective evolutionary direction. The elite learning mechanism is used to guide the population to exploit the promising region, and a reinitialization strategy is used to alleviate the population stagnation caused by individual homogenization. CEC2014 benchmark problems and two engineering management prediction problems are used to assess the performance of the proposed BSONME algorithm. Experimental results and statistical analyses show that the proposed BSONME algorithm is competitive compared with several popular improved BSO algorithms. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)

35 pages, 2591 KiB  
Article
Individual Disturbance and Attraction Repulsion Strategy Enhanced Seagull Optimization for Engineering Design
by Helong Yu, Shimeng Qiao, Ali Asghar Heidari, Chunguang Bi and Huiling Chen
Mathematics 2022, 10(2), 276; https://doi.org/10.3390/math10020276 - 16 Jan 2022
Cited by 27 | Viewed by 2248
Abstract
The seagull optimization algorithm (SOA) is a novel swarm intelligence algorithm proposed in recent years. The algorithm has some defects in the search process. To overcome the poor convergence accuracy of the SOA and its tendency to fall into local optima, this paper proposes a new SOA variant based on an individual disturbance (ID) and attraction-repulsion (AR) strategy, called IDARSOA, which employs ID to enhance the ability to jump out of local optima and adopts AR to increase population diversity and make the exploration of the solution space more efficient. The effectiveness of the IDARSOA has been verified using representative comprehensive benchmark functions and six practical engineering optimization problems. The experimental results show that the proposed IDARSOA has better convergence accuracy and a stronger optimization ability than the original SOA. Full article
(This article belongs to the Special Issue Evolutionary Computation 2022)
