New Trends in Learning-Based Techniques Hybridizing Bio-Inspired Optimization Algorithms

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 3306

Special Issue Editors


Dr. Rodrigo Olivares
Guest Editor
School of Computer Engineering, Universidad de Valparaíso, Valparaíso 2362905, Chile
Interests: bio-inspired algorithms; optimization algorithms; machine/deep learning

Prof. Dr. Ricardo Soto
Guest Editor
School of Computer Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
Interests: discrete and continuous optimization; metaheuristics; machine learning; artificial intelligence; decision systems

Special Issue Information

Dear Colleagues,

Bio-inspired optimization algorithms belong to the field of artificial intelligence and mimic the behavior of natural phenomena to achieve efficient solutions in less time. Methods such as the Genetic Algorithm, Differential Evolution, Particle Swarm Optimization, and the Ant Colony System, among several others, are solvers devoted to tackling large instances of complex optimization problems. These techniques use a set of virtual agents that cooperate among themselves, sharing knowledge about the resolution process. Over recent decades, many works have been reported with excellent results. Nevertheless, these studies often do not consider the information generated during the run and focus mainly on the final result. In this context, a new trend has recently emerged: hybridizing bio-inspired optimization algorithms with intelligent mechanisms drawn from various areas. These techniques use data to detect patterns and steer local and global search procedures towards more promising zones. Approaches based on context and environment learning allow us to design reactive bio-inspired optimization algorithms able to self-govern their behavior through parameter self-tuning or to reduce the solution space. This Special Issue is devoted to publishing high-quality papers that employ such hybridizations to solve complex engineering problems. Reviews on this topic are also welcome.
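As a rough illustration of this kind of hybridization (and not a prescription for submissions), the sketch below shows a particle swarm optimizer whose inertia weight is self-tuned from feedback gathered during the run; the success-rate rule, the sphere objective, and all parameter values are assumptions made only for this example.

```python
# Illustrative sketch only: a PSO whose inertia weight is self-tuned online
# from run-time feedback (fraction of particles that improved last iteration).
# The success rule, bounds, and objective are assumptions chosen for the example.
import random

def sphere(x):
    """Toy objective: sum of squares (minimization)."""
    return sum(v * v for v in x)

def adaptive_pso(obj=sphere, dim=10, swarm=30, iters=200, seed=0):
    rng = random.Random(seed)
    w, c1, c2 = 0.9, 1.5, 1.5          # initial parameters (assumed values)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [obj(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest_pos, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        improved = 0                    # feedback gathered during this iteration
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = obj(pos[i])
            if f < pbest_f[i]:
                improved += 1
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest_pos, gbest_f = pos[i][:], f
        # Self-tuning step: raise exploration when few particles improve,
        # lower it when many do (a simple success-rate rule, assumed here).
        rate = improved / swarm
        w = min(0.9, w * 1.05) if rate < 0.2 else max(0.4, w * 0.95)
    return gbest_pos, gbest_f

if __name__ == "__main__":
    _, best = adaptive_pso()
    print(f"best objective found: {best:.6f}")
```

The point of the sketch is only the feedback loop at the end of each iteration: any statistic collected during the run can drive the parameter update in the same place.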

Dr. Rodrigo Olivares
Prof. Dr. Ricardo Soto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • bio-inspired optimization algorithms
  • metaheuristics
  • machine learning
  • optimization problems
  • hybrid algorithms
  • learning-based hybrid solvers

Published Papers (3 papers)


Research

30 pages, 3614 KiB  
Article
An Improved Sparrow Search Algorithm for Global Optimization with Customization-Based Mechanism
by Zikai Wang, Xueyu Huang, Donglin Zhu, Changjun Zhou and Kerou He
Axioms 2023, 12(8), 767; https://doi.org/10.3390/axioms12080767 - 07 Aug 2023
Viewed by 827
Abstract
To solve the problems of the original sparrow search algorithm’s poor ability to jump out of local extremes and its insufficient ability to achieve global optimization, this paper simulates the different learning forms of students in each ranking segment in the class and proposes a customized learning method (CLSSA) based on multi-role thinking. Firstly, cube chaos mapping is introduced in the initialization stage to increase the inherent randomness and rationality of the distribution. Then, an improved spiral predation mechanism is proposed for acquiring better exploitation. Moreover, a customized learning strategy is designed after the follower phase to balance exploration and exploitation. A boundary processing mechanism based on the full utilization of important location information is used to improve the rationality of boundary processing. The CLSSA is tested on 21 benchmark optimization problems, and its robustness is verified on 12 high-dimensional functions. In addition, comprehensive search capability is further proven on the CEC2017 test functions, and an intuitive ranking is given by Friedman's statistical results. Finally, three benchmark engineering optimization problems are utilized to verify the effectiveness of the CLSSA in solving practical problems. The comparative analysis shows that the CLSSA can significantly improve the quality of the solution and can be considered an excellent SSA variant.
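The paper's own operators are not reproduced here; purely as a hedged illustration of chaotic population initialization in general, the snippet below seeds a population with a cubic chaotic map (one commonly cited form, x_{k+1} = ρ·x_k·(1 − x_k²) with ρ ≈ 2.595) scaled to the search bounds. The map constant, bounds, and function names are assumptions, not the CLSSA implementation.

```python
# Hedged illustration of chaotic population initialization, not the exact
# CLSSA procedure: a cubic chaotic map (one commonly used form) generates
# values in (0, 1) that are then scaled to the search bounds.
import random

def cubic_map_population(pop_size, dim, lower, upper, rho=2.595, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(0.1, 0.9)                 # avoid the fixed points 0 and 1
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = rho * x * (1.0 - x * x)       # cubic chaotic map (assumed form)
            x = min(max(x, 1e-12), 1 - 1e-12) # keep the iterate inside (0, 1)
            individual.append(lower + x * (upper - lower))
        population.append(individual)
    return population

if __name__ == "__main__":
    pop = cubic_map_population(pop_size=5, dim=3, lower=-100.0, upper=100.0)
    for row in pop:
        print([round(v, 2) for v in row])
```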

30 pages, 5815 KiB  
Article
Improved Whale Optimization Algorithm Based on Fusion Gravity Balance
by Chengtian Ouyang, Yongkang Gong, Donglin Zhu and Changjun Zhou
Axioms 2023, 12(7), 664; https://doi.org/10.3390/axioms12070664 - 04 Jul 2023
Cited by 1 | Viewed by 860
Abstract
In order to improve the shortcomings of the whale optimization algorithm (WOA) in dealing with optimization problems, and further improve the accuracy and stability of the WOA, we propose an enhanced regenerative whale optimization algorithm based on gravity balance (GWOA). In the initial stage, the nonlinear time-varying factor and inertia weight strategy are introduced to change the foraging trajectory and exploration range, which improves the search efficiency and diversity. In the random walk stage and the encircling stage, the excellent solutions are protected by the gravitational balance strategy to ensure the high quality of solution. In order to prevent the algorithm from rapidly converging to the local extreme value and failing to jump out, a regeneration mechanism is introduced to help the whale population escape from the local optimal value, and to help the whale population find a better solution within the search interval through reasonable position updating. Compared with six algorithms on 16 benchmark functions, the contribution values of each strategy and Wilcoxon rank sum test show that GWOA performs well in 30-dimensional and 100-dimensional test functions and in practical applications. In general, GWOA has better optimization ability. In each algorithm contribution experiment, compared with the WOA, the indexes of the strategies added in each stage were improved. Finally, GWOA is applied to robot path planning and three classical engineering problems, and the stability and applicability of GWOA are verified.
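GWOA's gravity-balance and regeneration operators are not reproduced here; as a generic, hedged illustration of what a "nonlinear time-varying factor" can mean in a whale-style update, the snippet below contrasts the standard linear decay of the WOA convergence factor a with an assumed cosine schedule. The schedule and constants are illustrative only, not the paper's formulation.

```python
# Generic illustration (not the paper's GWOA): the standard WOA shrinks its
# convergence factor a linearly from 2 to 0; a nonlinear, time-varying
# schedule such as the cosine decay below changes how quickly the search
# shifts from exploration to exploitation. The schedule itself is an assumption.
import math

def linear_a(t, t_max):
    """Standard WOA convergence factor: decreases linearly from 2 to 0."""
    return 2.0 * (1.0 - t / t_max)

def nonlinear_a(t, t_max):
    """Assumed nonlinear variant: cosine decay keeps a larger for longer early on."""
    return 2.0 * 0.5 * (1.0 + math.cos(math.pi * t / t_max))

if __name__ == "__main__":
    t_max = 10
    for t in range(t_max + 1):
        print(f"t={t:2d}  linear a={linear_a(t, t_max):.3f}  "
              f"nonlinear a={nonlinear_a(t, t_max):.3f}")
```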

23 pages, 1290 KiB  
Article
A Learning-Based Particle Swarm Optimizer for Solving Mathematical Combinatorial Problems
by Rodrigo Olivares, Ricardo Soto, Broderick Crawford, Víctor Ríos, Pablo Olivares, Camilo Ravelo, Sebastian Medina and Diego Nauduan
Axioms 2023, 12(7), 643; https://doi.org/10.3390/axioms12070643 - 28 Jun 2023
Cited by 2 | Viewed by 968
Abstract
This paper presents a set of adaptive parameter control methods through reinforcement learning for the particle swarm algorithm. The aim is to adjust the algorithm’s parameters during the run, to provide the metaheuristics with the ability to learn and adapt dynamically to the problem and its context. The proposal integrates Q–Learning into the optimization algorithm for parameter control. The applied strategies include a shared Q–table, separate tables per parameter, and flexible state representation. The study was evaluated through various instances of the multidimensional knapsack problem belonging to the NP-hard class. It can be formulated as a mathematical combinatorial problem involving a set of items with multiple attributes or dimensions, aiming to maximize the total value or utility while respecting constraints on the total capacity or available resources. Experimental and statistical tests were carried out to compare the results obtained by each of these hybridizations, concluding that they can significantly improve the quality of the solutions found compared to the native version of the algorithm.
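The authors' state and action designs are not reproduced here; the sketch below only illustrates the general pattern of Q–Learning-based parameter control: an epsilon-greedy controller picks a discrete inertia-weight value each iteration and is rewarded when the incumbent best solution improves. The state encoding, action set, reward, and learning rates are all assumptions made for illustration.

```python
# Illustrative pattern only (not the authors' exact design): an epsilon-greedy
# Q-learning controller that selects a discrete inertia-weight value each
# iteration and is rewarded when the metaheuristic's best fitness improves.
import random

class QParameterController:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
        self.actions = actions            # candidate parameter values (assumed set)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                       # Q-table: (state, action index) -> value
        self.rng = random.Random(seed)

    def select(self, state):
        """Epsilon-greedy choice of a parameter value for the given state."""
        if self.rng.random() < self.epsilon:
            idx = self.rng.randrange(len(self.actions))
        else:
            idx = max(range(len(self.actions)),
                      key=lambda a: self.q.get((state, a), 0.0))
        return idx, self.actions[idx]

    def update(self, state, action_idx, reward, next_state):
        """Standard Q-learning update on the chosen (state, action) pair."""
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in range(len(self.actions)))
        old = self.q.get((state, action_idx), 0.0)
        self.q[(state, action_idx)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

if __name__ == "__main__":
    # Toy usage: the "state" is whether the last iteration improved the best
    # solution, and the reward is +1 on improvement, -1 otherwise (assumed).
    ctrl = QParameterController(actions=[0.4, 0.6, 0.9])
    state, best = "stalled", 100.0
    for it in range(50):
        idx, w = ctrl.select(state)       # w would drive the swarm's update rule
        new_best = best - ctrl.rng.random() if ctrl.rng.random() < 0.4 else best
        reward = 1.0 if new_best < best else -1.0
        next_state = "improved" if new_best < best else "stalled"
        ctrl.update(state, idx, reward, next_state)
        state, best = next_state, new_best
    print(f"final simulated best: {best:.3f}")
```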
