Advanced Optimization Methods and Applications, 2nd Edition

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 4540

Special Issue Editors


Guest Editor
Department of Mathematics and Computer Science, Faculty of Mathematics and Computer Science, Transilvania University of Brasov, 50003 Brasov, Romania
Interests: graph theory; combinatorial optimization; network optimization; inverse problems and inverse optimization

Guest Editor
Electrical Engineering and Computer Science Faculty, Transilvania University of Brasov, Eroilor, nr. 29, 500036 Brasov, Romania
Interests: photovoltaic systems; hybrid systems characterization; concentrated light systems; hybrid system reliability

Guest Editor
Electrical Engineering and Computer Science Faculty, Transilvania University of Brasov, Eroilor, nr. 29, 500036 Brasov, Romania
Interests: photovoltaic systems; hybrid systems; energy harvesting; modeling of the photovoltaic cells and panels

Special Issue Information

Dear Colleagues,

You are invited to submit papers related to all aspects of optimization algorithms, graph theory, network optimization, location problems, fuzzy optimization, optimal control, machine learning, artificial intelligence, and parallel programming, from both theoretical and applied perspectives.

Optimization methods are finding increasing application across all domains and play an essential role in solving real-life problems. Algorithms for such problems are continuously being developed and improved to obtain higher-quality solutions within a reasonable timeframe. Metaheuristic methods, inspired by the behavior of human populations or of swarms of animals and insects, are currently used to solve optimization problems whose optimal solutions cannot be obtained by exact methods within a reasonable amount of time.

Dr. Adrian Deaconu
Prof. Dr. Petru Adrian Cotfas
Prof. Dr. Daniel Tudor Cotfas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optimization algorithms
  • heuristics and metaheuristics
  • approximation algorithms
  • network optimization
  • location problems
  • fuzzy optimization
  • combinatorial optimization
  • inverse optimization problems
  • robust optimization
  • graph theory
  • optimal control
  • forecasting
  • machine learning
  • artificial intelligence
  • parallel programming
  • mathematical optimization
  • optimization applications

Published Papers (5 papers)


Research

14 pages, 339 KiB  
Article
A Conjugate Gradient Method: Quantum Spectral Polak–Ribière–Polyak Approach for Unconstrained Optimization Problems
by Kin Keung Lai, Shashi Kant Mishra, Bhagwat Ram and Ravina Sharma
Mathematics 2023, 11(23), 4857; https://doi.org/10.3390/math11234857 - 03 Dec 2023
Viewed by 734
Abstract
Quantum computing is an emerging field that has had a significant impact on optimization. Among the diverse quantum algorithms, quantum gradient descent has become a prominent technique for solving unconstrained optimization (UO) problems. In this paper, we propose a quantum spectral Polak–Ribière–Polyak (PRP) conjugate gradient (CG) approach. The technique is a generalization of the spectral PRP method that employs a q-gradient, which approximates the classical gradient with quadratically better dependence on the quantum variable q. The proposed method reduces to the classical variant as the quantum variable q approaches 1. The quantum search direction always satisfies the sufficient descent condition and does not depend on any line search (LS). The approach is globally convergent under the standard Wolfe conditions without any convexity assumption. Numerical experiments are conducted and compared with an existing approach to demonstrate the improvement offered by the proposed strategy.
(This article belongs to the Special Issue Advanced Optimization Methods and Applications, 2nd Edition)
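For readers unfamiliar with q-calculus, the idea in the abstract can be sketched in a few lines of Python. Everything below (the per-coordinate q-difference, the fixed step size, and the PRP+ clamping) is an illustrative simplification, not the authors' exact method, which uses Wolfe line searches and a spectral parameter:

```python
import numpy as np

def q_gradient(f, x, q=0.9):
    """Per-coordinate q-difference approximation of the gradient:
    D_q f = (f(x) - f(x with x_i -> q*x_i)) / ((1 - q) * x_i).
    Falls back to an ordinary finite difference near x_i = 0."""
    g = np.empty_like(x)
    for i in range(len(x)):
        if abs(x[i]) > 1e-8:
            xq = x.copy()
            xq[i] = q * x[i]
            g[i] = (f(x) - f(xq)) / ((1.0 - q) * x[i])
        else:
            xp = x.copy()
            xp[i] = x[i] + 1e-6
            g[i] = (f(xp) - f(x)) / 1e-6
    return g

def prp_cg(f, x0, q=0.9, step=0.1, iters=300):
    """Toy Polak-Ribiere-Polyak conjugate gradient driven by the
    q-gradient; a fixed step replaces the paper's Wolfe line search."""
    x = x0.astype(float)
    g = q_gradient(f, x, q)
    d = -g
    for _ in range(iters):
        x = x + step * d
        g_new = q_gradient(f, x, q)
        # PRP beta with the common nonnegativity safeguard (PRP+)
        beta = max(0.0, g_new @ (g_new - g) / max(g @ g, 1e-12))
        d = -g_new + beta * d
        g = g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x

x_star = prp_cg(lambda v: (v[0] - 1) ** 2 + 2 * (v[1] + 0.5) ** 2, np.array([3.0, 2.0]))
```

As q approaches 1 the q-difference quotient tends to the classical derivative, which is the limiting behavior the abstract describes; with q held below 1, the iteration settles at a q-stationary point slightly offset from the true minimizer.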

27 pages, 7236 KiB  
Article
Three Chaotic Strategies for Enhancing the Self-Adaptive Harris Hawk Optimization Algorithm for Global Optimization
by Sultan Almotairi, Elsayed Badr, Mustafa Abdul Salam and Alshimaa Dawood
Mathematics 2023, 11(19), 4181; https://doi.org/10.3390/math11194181 - 06 Oct 2023
Cited by 2 | Viewed by 866
Abstract
Harris Hawk Optimization (HHO) is a well-known nature-inspired metaheuristic modeled on the distinctive foraging strategy and cooperative behavior of Harris hawks. As with numerous other algorithms, HHO is susceptible to getting stuck in local optima and has a sluggish convergence rate. Several techniques have been proposed in the literature to improve the performance of metaheuristic algorithms (MAs) and to tackle their limitations. Chaos optimization strategies have been proposed for many years to enhance MAs, and they fall into four distinct categories: chaotic mapped initialization, randomness, iterations, and controlled parameters. This paper introduces SHHOIRC (Self-adaptive Harris Hawk Optimization using three chaotic optimization methods), a novel hybrid algorithm designed to enhance the efficiency of HHO. The proposed hybrid algorithm, the original HHO, and five HHO variants are evaluated on 16 well-known benchmark functions. The computational results and statistical analysis show that SHHOIRC compares favorably with previously published algorithms: it obtained the best average solutions for 13 of the 16 benchmark functions, outperforming the other algorithms on 81.25% of the functions, versus 18.75% for the prior algorithms. Furthermore, the proposed algorithm is tested on a real-life problem, the maximum coverage problem in Wireless Sensor Networks (WSNs), and compared with pure HHO and two well-known algorithms, Grey Wolf Optimization (GWO) and the Whale Optimization Algorithm (WOA). In the maximum coverage experiments, the proposed algorithm demonstrated superior performance, achieving the best coverage rates of 95.4375% and 97.125% in experiments 1 and 2, respectively.
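As an illustration of the first of the four chaos categories named in the abstract, a chaotic mapped initialization can be sketched with the logistic map; the seed values and warm-up count below are arbitrary choices, and the paper's exact maps and parameters may differ:

```python
import numpy as np

def chaotic_init(pop_size, dim, lb, ub, warmup=50):
    """Chaotic mapped initialization with the logistic map z <- 4z(1-z).
    Each dimension gets a distinct seed (kept away from the map's fixed
    points) and the map is warmed up before sampling."""
    z = np.mod(0.7 + 0.137 * np.arange(dim), 1.0)
    for _ in range(warmup):
        z = 4.0 * z * (1.0 - z)
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        z = 4.0 * z * (1.0 - z)          # one chaotic step per individual
        pop[i] = lb + z * (ub - lb)      # map (0, 1) onto the search bounds
    return pop

pop = chaotic_init(20, 3, -5.0, 5.0)
```

Compared with uniform random initialization, chaotic sequences are deterministic yet non-repeating, which is the property such strategies exploit to spread the initial hawks over the search space.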

35 pages, 5175 KiB  
Article
Improving Wild Horse Optimizer: Integrating Multistrategy for Robust Performance across Multiple Engineering Problems and Evaluation Benchmarks
by Lei Chen, Yikai Zhao, Yunpeng Ma, Bingjie Zhao and Changzhou Feng
Mathematics 2023, 11(18), 3861; https://doi.org/10.3390/math11183861 - 10 Sep 2023
Cited by 2 | Viewed by 656
Abstract
In recent years, optimization problems have received extensive attention from researchers, and metaheuristic algorithms have been proposed and applied to solve complex optimization problems. The wild horse optimizer (WHO) is a new metaheuristic algorithm based on the social behavior of wild horses. Compared with popular metaheuristic algorithms, it performs excellently on engineering problems; however, it still suffers from insufficient convergence accuracy and low exploration ability. This article presents an improved wild horse optimizer (I-WHO) with early warning and competition mechanisms that incorporates three strategies. First, a random operator is introduced to improve the adaptive parameters and the search accuracy of the algorithm. Second, an early warning strategy is proposed to improve the position update formula and increase population diversity during grazing. Third, a competition selection mechanism is added, and the search agent position formula is updated to enhance the accuracy of multimodal search in the exploitation stage of the algorithm. In this article, 25 benchmark functions (Dim = 30, 60, 90, and 500) are tested, and the complexity of the I-WHO algorithm is analyzed. I-WHO is also compared with six popular metaheuristic algorithms and validated using the Wilcoxon signed-rank test and four real-world engineering problems. The experimental results show that I-WHO significantly improves search accuracy and exhibits superior performance and stability.
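The three strategies can be pictured in a deliberately simplified population loop. The update formula, constants, and reset probability below are hypothetical stand-ins meant only to show where a randomized adaptive parameter, an early-warning reset, and competition selection would plug in; they are not the actual I-WHO equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x * x))

def toy_optimizer(f, dim=5, pop=30, iters=200, lb=-5.0, ub=5.0):
    """Simplified population loop marking where the three I-WHO-style
    strategies would sit; the formulas themselves are hypothetical."""
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.array([f(x) for x in X])
    gbest = fit.min()
    for t in range(iters):
        # (1) adaptive parameter perturbed by a random operator
        tdr = (1.0 - t / iters) * rng.random()
        best = X[np.argmin(fit)].copy()
        for i in range(pop):
            cand = np.clip(X[i] + tdr * (best - X[i])
                           + 0.05 * rng.standard_normal(dim), lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                # (3) competition selection: keep the better of parent/offspring
                X[i], fit[i] = cand, fc
            elif rng.random() < 0.05:
                # (2) early-warning reset: re-seed a stagnant agent for diversity
                X[i] = rng.uniform(lb, ub, dim)
                fit[i] = f(X[i])
        gbest = min(gbest, fit.min())
    return gbest

best_val = toy_optimizer(sphere)
```

The greedy parent-versus-offspring comparison drives accuracy, while the occasional reset of non-improving agents keeps the population from collapsing onto a single local optimum.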

28 pages, 3859 KiB  
Article
Fault Prediction of Control Clusters Based on an Improved Arithmetic Optimization Algorithm and BP Neural Network
by Tao Xu, Zeng Gao and Yi Zhuang
Mathematics 2023, 11(13), 2891; https://doi.org/10.3390/math11132891 - 27 Jun 2023
Cited by 2 | Viewed by 722
Abstract
Higher accuracy in cluster failure prediction can ensure the long-term stable operation of cluster systems and effectively alleviate energy losses caused by system failures. Previous works have mostly employed BP neural networks (BPNNs) to predict system faults, but this approach suffers from reduced prediction accuracy due to inappropriate initialization of weights and thresholds. To address these issues, this paper proposes an improved arithmetic optimization algorithm (AOA) to optimize the initial weights and thresholds of BPNNs. Specifically, we first introduce an improved AOA based on multi-subpopulation and comprehensive learning strategies, called MCLAOA. The multi-subpopulations effectively alleviate the poor global exploration caused by a single elite, while the comprehensive learning strategy enhances exploitation through information exchange among individuals. More importantly, a nonlinear strategy based on a tangent function is designed to ensure a smooth balance and transition between exploration and exploitation. Secondly, the proposed MCLAOA is used to optimize the initial weights and thresholds of BPNNs for cluster fault prediction, which enhances the accuracy of the fault prediction models. Finally, experimental results on 23 benchmark functions, the CEC2020 benchmark problems, and two engineering examples demonstrate that MCLAOA outperforms other swarm intelligence algorithms; on the 23 benchmark functions, it improves the optimal solutions of 16 functions compared to the basic AOA. The proposed fault prediction model achieves performance comparable to other swarm-intelligence-based BPNN models; compared to basic BPNNs and AOA-BPNNs, MCLAOA-BPNN showed improvements of 2.0538 and 0.8762 in terms of mean absolute percentage error, respectively.
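The nonlinear tangent-based transition can be contrasted with the standard AOA's linear math-optimizer-accelerated (MOA) schedule. The exact formula below is a guess at the shape (slow early growth for exploration, fast late growth for exploitation), not the paper's equation:

```python
import math

def moa_linear(t, T, lo=0.2, hi=1.0):
    """Standard AOA accelerated-function schedule: linear in t."""
    return lo + t * (hi - lo) / T

def moa_tangent(t, T, lo=0.2, hi=1.0):
    """Hypothetical tangent-shaped schedule: tan maps [0, pi/4] onto
    [0, 1], giving slow growth early and fast growth late."""
    return lo + (hi - lo) * math.tan((t / T) * math.pi / 4.0)
```

Because tan is convex on [0, pi/4], the tangent schedule stays below the linear one at mid-run, keeping the algorithm in the exploration regime longer before switching smoothly to exploitation.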

30 pages, 5928 KiB  
Article
Biogeography-Based Teaching Learning-Based Optimization Algorithm for Identifying One-Diode, Two-Diode and Three-Diode Models of Photovoltaic Cell and Module
by Nawal Rai, Amel Abbadi, Fethia Hamidia, Nadia Douifi, Bdereddin Abdul Samad and Khalid Yahya
Mathematics 2023, 11(8), 1861; https://doi.org/10.3390/math11081861 - 14 Apr 2023
Cited by 4 | Viewed by 1112
Abstract
This article addresses the challenging problem of identifying the unknown parameters of three solar cell models and three photovoltaic module models. This identification serves as the basis for fault detection, control, and modelling of PV systems. An accurate PV model is essential for simulation studies of PV systems and plays a significant role in their dynamic analysis. The mathematical models of the PV cell and module have nonlinear I-V and P-V characteristics with many undefined parameters. In this paper, the identification problem is solved as an optimization problem using metaheuristic algorithms, with the root mean square error (RMSE) between the calculated and measured currents as the objective function. A new hybrid metaheuristic, biogeography-based teaching learning-based optimization (BB-TLBO), is proposed; it combines BBO (biogeography-based optimization) and TLBO (teaching learning-based optimization). BB-TLBO is used to identify the unknown parameters of the one-, two- and three-diode models of the RTC France silicon solar cell and of the commercial monocrystalline photovoltaic module STM6-40/36, with attention to precision, reliability, execution time, and convergence speed. The identification is carried out using experimental data from the RTC France silicon solar cell and the STM6-40/36 photovoltaic module. The efficiency of BB-TLBO is assessed by comparing its identification results with those of its constituent algorithms, BBO and TLBO, and with newly introduced hybrid algorithms such as DOLADE and LAPSO. The results reveal that the suggested approach surpasses all compared algorithms in terms of RMSE (minimum, mean, and maximum), the standard deviation of the RMSE values (STD), CPU execution time, and convergence speed.
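The RMSE objective described in the abstract can be sketched for the single-diode model. The parameter names and the use of the measured current inside the exponential (a common simplification that avoids solving the implicit diode equation) are illustrative assumptions:

```python
import numpy as np

def diode_current(params, V, I_meas, Vt=0.02585):
    """Single-diode model I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1)
    - (V + I*Rs)/Rsh, evaluated with the measured current on the
    right-hand side instead of solving the implicit equation."""
    Iph, I0, Rs, Rsh, n = params
    Vd = V + I_meas * Rs                 # voltage across the diode branch
    return Iph - I0 * (np.exp(Vd / (n * Vt)) - 1.0) - Vd / Rsh

def rmse_objective(params, V, I_meas):
    """Fitness that a BBO/TLBO-style optimizer would minimize: RMSE
    between modelled and measured current over all data points."""
    return float(np.sqrt(np.mean((diode_current(params, V, I_meas) - I_meas) ** 2)))
```

A population-based optimizer then searches the five-dimensional box of (Iph, I0, Rs, Rsh, n) for the vector minimizing this RMSE; the two- and three-diode models add one extra (I0, n) pair per additional diode.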
