Article

Levy Flight-Based Improved Grey Wolf Optimization: A Solution for Various Engineering Problems

1 Department of Electrical Engineering, Lovely Professional University, Jalandhar 144411, India
2 Department of Computer Science and Engineering, School of Software Convergence, Sejong University, Seoul 05006, Republic of Korea
3 Department of Electronic Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1745; https://doi.org/10.3390/math11071745
Submission received: 9 March 2023 / Revised: 31 March 2023 / Accepted: 3 April 2023 / Published: 5 April 2023
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

Abstract

Optimization is a broad field in which researchers develop new algorithms for solving various types of problems. Several popular techniques are continually being refined, and grey wolf optimization (GWO) is one of them because it is efficient, simple to use, and easy to implement. However, GWO has several drawbacks: it can become stuck in local optima, has a low convergence rate, and has poor exploration. Several attempts have been made recently to overcome these drawbacks. This paper discusses strategies that can be applied to GWO to overcome them and proposes a novel algorithm that enhances the convergence rate, which was poor in GWO; the new algorithm is also compared with other optimization algorithms. GWO also tends to become stuck in local optima when applied to complex functions or large search spaces, so these issues are further addressed. Most notably, GWO depends strongly on initialization settings such as the population size and the initial wolf positions. This study demonstrates the improved positions of the wolves obtained by applying the proposed strategies with the same population size. As a result, the novel algorithm has enhanced exploration capability compared to the other algorithms presented, and statistical results are provided to demonstrate its superiority.

1. Introduction

In the last few decades, the importance of swarm intelligence and metaheuristic algorithms has grown in the optimization community. Swarm intelligence denotes population-based approaches that converge to produce optimal outcomes, whereas the output of a metaheuristic changes over time and is problem-specific.
Studying swarm intelligence involves looking at the behavior of decentralized systems in which a large number of search agents communicate with each other and their environment to accomplish a shared objective [1]. This interdisciplinary field draws on concepts from biology, computer science, and physics to understand how complex behavior can emerge from the interactions of many simple individuals. The study of swarm intelligence has many potential applications, including robotics, distributed computing, and optimization. The concept of swarm intelligence originated from the work of pioneering biologists, such as William Morton Wheeler, who studied the behavior of ants in the early 20th century [2]. However, it was not until the 1950s and 1960s that computer scientists and engineers explored the potential of decentralized systems for solving complicated problems. Gerardo Beni and Jing Wang first used the term “swarm intelligence” in 1989 to describe the collective behavior of simple agents [3]. Swarm intelligence can be observed in a broad range of natural systems, such as ant colonies, bird flocks, and fish schools [4]. Ant colonies, for example, can efficiently search for food and build complex nests through the interactions of individual ants. Bird flocks can perform coordinated maneuvers, such as turning in unison, through the interactions of individual birds. Fish schools can evade predators and locate food through the interactions of individual fish.
Metaheuristic algorithms are classified into two classes, i.e., population-based and single solution-based. Population-based metaheuristic algorithms use a population of potential solutions to iteratively search for the best response [5]. These algorithms frequently draw their inspiration from organic phenomena, including evolution, swarm behavior, and immune systems. Some of the population-based metaheuristic algorithms are the genetic algorithm (GA) [6], particle swarm optimization (PSO) [7], ant colony optimization (ACO) [8], differential evolution (DE) [9], and the cultural algorithm (CA) [10]. Single solution-based metaheuristic algorithms start with a single candidate solution and then iteratively refine it through small changes until the optimal solution is found or a stopping criterion is met. These algorithms often employ randomization and probabilistic search techniques to bypass local optima and thoroughly explore the search space. They frequently take their cues from physical processes such as annealing or movements in space. Some examples of single solution-based metaheuristic algorithms are simulated annealing (SA) [11], tabu search (TS) [12], and harmony search (HS) [13].
A class of optimization approaches known as metaheuristics is used to solve complex problems. They are called metaheuristics because they operate at a higher level than ordinary heuristics: they are sets of heuristics used to guide the search for a solution [14]. These algorithms are known for their ability to find approximate solutions to problems that are difficult or impossible to solve exactly, and they are extensively utilized in disciplines including operations research, computer science, and engineering. The roots of these methods can be traced to the 1940s and 1950s and the work of George Dantzig, who developed the simplex algorithm, which is still widely used today for solving linear programming problems [15]. Today, metaheuristic algorithms are applied in many fields. The most popular applications include:
  • Engineering: Metaheuristic algorithms are used in engineering to optimize the design of structures, such as bridges and buildings.
  • Computer science: Metaheuristic algorithms are used in computer science to optimize the performance of algorithms and systems, for example in scheduling and routing.
  • Operations research: Metaheuristic algorithms are used in operations research to solve problems in areas such as logistics and supply chain management.
This paper proposes a new metaheuristic algorithm, Levy flight-based improved grey wolf optimization (LF-IGWO) [16], to overcome the limitations faced by grey wolf optimization. GWO has several drawbacks: it can become stuck in local minima, has a low convergence rate, and has poor exploration. To overcome these drawbacks, improved grey wolf optimization (IGWO) was proposed recently, but even IGWO did not provide satisfactory results, especially for complex engineering problems. Therefore, an improved version of IGWO, viz. LF-IGWO, is presented in this article, with statistical analysis demonstrating its superiority over the most popular recently proposed optimization techniques.
Section 2 discusses what optimization is and the different techniques of optimization. Section 3 presents grey wolf optimization and improved grey wolf optimization and then introduces the new technique, LF-IGWO. Section 4 implements the technique on benchmark functions and the 31-level inverter problem, and the results are evaluated in comparison with other approaches. Section 5 presents the implementation on engineering problems.

2. Optimization

Optimization is the process of finding the best solution to a problem from a set of possible alternatives [16]. It is a fundamental concept in fields such as engineering, computer science, and operations research. There are many different optimization techniques, each with its own special qualities and traits. A few well-known techniques include mathematical programming, gradient-based methods [17], and metaheuristic algorithms.
Optimization first emerged from the work of mathematicians and engineers in the 19th century. The first optimization methods were based on calculus and were used to solve problems in engineering and physics. However, optimization did not start to be used in other disciplines, including economics and computer science, until the 20th century.
There are many different optimization strategy types, each with unique characteristics. The most popular types include:
  • Mathematical programming: These techniques are based on the use of mathematical models to represent the problem and constraints. The solution is then found by solving the mathematical equations.
  • Gradient-based methods: These techniques are based on the use of gradients, which are the directions of the steepest ascent or descent. The solution is found by moving in the direction of the gradient until a local or global optimum is reached.
  • Metaheuristic algorithms: These techniques are based on the use of metaheuristics, which are rules of thumb, to guide the search for a solution. The solution is found by iteratively improving an initial solution over time.
Many researchers are moving toward the optimization field because of the no free lunch (NFL) theorem. The NFL theorem states that no single technique or algorithm is suitable for all kinds of problems [18]. Therefore, there is a wide area to work in, as each problem may lead toward the development of new techniques.
In this optimization field, nature-inspired based optimization techniques play a significant role. They are defined as problem-solving methodologies that take inspiration from nature or a natural process [19]. These nature-inspired optimization techniques are further classified, as shown in Figure 1, and a summary of nature-inspired algorithms is given in Table 1.

3. Optimization Algorithm and New Proposed Algorithms

The optimization algorithms used in swarm intelligence place a strong emphasis on the social relationships and interactions that occur between members of the swarms as they search for and pursue food sources. Several SI algorithms have been created and suggested over the past few years. One of the most well-known and frequently employed SI-based methods is grey wolf optimization (GWO). The grey wolf’s natural behavior of looking for the most effective way to pursue prey served as the model for the GWO algorithm, which leads to a good exploration–exploitation balance. A limitation, however, is that GWO suffers from poor performance in global search. To eliminate this limitation, improved grey wolf optimization (IGWO) was proposed with the help of the dimension learning-based hunting (DLH) search strategy. IGWO performs better on multi-dimensional functions but is not very efficient on uni-dimensional functions. To resolve this uni-dimensional issue, a new method is proposed in this paper, i.e., Levy flight-based improved grey wolf optimization (LF-IGWO). In this section, GWO, IGWO, and LF-IGWO are discussed.

3.1. Grey Wolf Optimization

The metaheuristic-based grey wolf optimization (GWO) technique was inspired by the communal hunting behavior of grey wolves [36]. To identify a given problem’s global optimum, the algorithm imitates the command hierarchy and the hunting style of grey wolves. GWO has been employed to address several optimization problems, including those involving machine learning, image processing, and function optimization.
There are three basic steps in the GWO algorithm: initialization, iteration, and update. The initial population is produced at random during the startup process. Every member of this population is given a fitness value during the iteration phase. In the update step, the individuals are updated based on their fitness values. Leaders are selected from those with the highest fitness rating, and the others follow them to update their positions.
One of the advantages of GWO is its simplicity. In contrast to many other optimization algorithms, GWO requires very few parameters to be set in advance. Additionally, compared to other methods, such as differential evolution (DE) and particle swarm optimization (PSO), GWO has been demonstrated to converge more quickly and produce better solutions.
Numerous optimization issues, such as function optimization, image processing, and machine learning, have been tackled with GWO. In function optimization, GWO has been used to optimize benchmark functions. In image processing, GWO has been used for image enhancement, segmentation, and compression. In machine learning, GWO has been used for feature selection, classification, and clustering.
Despite its successes, GWO is not without limitations. One limitation is that GWO is sensitive to the initial population and can converge to a local optimum if that population is not sufficiently diversified. Another limitation is that GWO may not perform well on problems with a large number of variables or constraints.
Figure 2 shows the hierarchy level of the grey wolves. This is the population-based algorithm that signifies that there is a group of wolves and they are divided into different levels based on their work or task.
The dominating wolves in this situation are those from alpha packs, and it can be said that they are the group’s leaders. They give us the GWO algorithm’s best-fitting solution. These groups of wolves make choices regarding hunting, planning, and delegating specific tasks to other wolves in their pack [36]. These wolves are thought to be the strongest in their pack.
Beta is the group’s next level. Beta wolves provide the second-best solution after the alpha. This group’s responsibility is to support the alpha group’s decision-making and to rule the groups below it.
Omega is the lowest category. These wolves play the part of the scapegoat, and their solutions have little bearing on the GWO algorithm’s final result [37]. They are given their opportunities last; for example, after hunting, they are permitted to eat only after all the other wolves have finished, because they are at the bottom of the pack and must follow the order of the dominant wolves. Below is the pseudo code for GWO [37] in Algorithm 1:
Algorithm 1 GWO Algorithm
Form the grey wolf population.
Evaluate the accuracy of every response.
For each iteration:
For every single grey wolf in the population:
Generate a new solution.
Analyze the new solution’s potential.
Update a grey wolf’s location in accordance with the updated solution.
Sort the grey wolves based on their fitness.
Update each grey wolf’s location based on the positions of the other grey wolves in the population.
Return the best solution as the result.
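To make these steps concrete, a minimal Python sketch of the core GWO position update is given below. It follows the standard encircling equations of the original GWO [36]; the function and variable names are our own illustration, not the authors’ code. Here a is the coefficient that decreases linearly from 2 to 0 over the iterations, and lb and ub are the bound vectors.

import numpy as np

def gwo_step(wolves, fitness, a, lb, ub):
    # Sort the pack by fitness and take the three leaders (alpha, beta, delta).
    order = np.argsort(fitness)
    leaders = wolves[order[:3]]
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        moves = []
        for leader in leaders:
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2 * a * r1 - a              # controls exploration vs. exploitation
            C = 2 * r2                      # random emphasis on the leader
            D = np.abs(C * leader - x)      # distance to the leader
            moves.append(leader - A * D)
        # The new position is the average of the three leader-guided moves.
        new_wolves[i] = np.clip(np.mean(moves, axis=0), lb, ub)
    return new_wolves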

3.2. Improved Grey Wolf Optimization

In GWO, the omega wolves are led by the α, β, and δ wolves to the areas of the search space where the optimal solution is most likely to be found. This behavior can trap the search in a locally optimal solution. A decline in population diversity, which causes GWO to settle in a regional optimum, is another adverse effect. Improved grey wolf optimization was introduced to address this problem.
IGWO algorithms typically involve modifications to the original GWO algorithm in one or more areas such as initialization, updating steps, and adaptation.
In the initialization phase, IGWO algorithms may use a different method for initializing the population, such as uniform random initialization within the variable bounds [6]:
$X_{ij} = l_j + \mathrm{rand}_j[0,1] \times (u_j - l_j), \quad i \in [1, N], \; j \in [1, D]$
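In Python, this uniform initialization can be sketched as follows (lb and ub are the lower- and upper-bound vectors; the helper name is ours):

import numpy as np

def init_population(N, D, lb, ub):
    # X_ij = l_j + rand_j[0,1) * (u_j - l_j) for i in [1, N], j in [1, D]
    return lb + np.random.rand(N, D) * (ub - lb)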
Movement phase: An additional movement technique that is part of IGWO is the dimension learning-based hunting (DLH) search method [38]. In DLH, each wolf treats its neighbors as candidate sources of information for its new position [39].
Dimension learning-based hunting (DLH) search strategy: In the original GWO, the three pack-leader wolves are used to produce a new position for each wolf. As a result, the population loses diversity too soon, GWO displays delayed convergence, and the wolves become caught in local optima. The proposed DLH search approach takes these flaws into account and introduces individual hunting in which each wolf learns from its neighbors.
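The DLH step described above can be sketched in Python as follows. This is a simplified illustration of the neighborhood idea from [38], not a faithful reproduction of the published implementation; x_gwo denotes the candidate position produced for wolf i by the ordinary GWO move.

import numpy as np

def dlh_candidate(wolves, i, x_gwo):
    # Neighborhood radius: distance between wolf i and its GWO-generated candidate.
    N, D = wolves.shape
    radius = np.linalg.norm(wolves[i] - x_gwo)
    dists = np.linalg.norm(wolves - wolves[i], axis=1)
    neighbors = np.where(dists <= radius)[0]        # always contains wolf i itself
    x_new = np.empty(D)
    for d in range(D):
        n = wolves[np.random.choice(neighbors), d]  # dimension of a random neighbor
        r = wolves[np.random.choice(N), d]          # dimension of a random pack member
        # Learn dimension d from the difference between neighbor and random wolf.
        x_new[d] = wolves[i, d] + np.random.rand() * (n - r)
    return x_new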
In comparison to the original GWO algorithm, IGWO algorithms have been shown to have the following advantages:
  • Better convergence: IGWO algorithms have been found to converge faster and produce better solutions than the original GWO algorithm.
  • Better handling of constraints: IGWO algorithms have been found to perform better on problems with constraints.
  • Better handling of high-dimensional problems: IGWO algorithms have been found to perform better on problems with a vast array of variables.
The pseudo code is provided below [39] in Algorithm 2:
Algorithm 2 IGWO Algorithm
Set the grey wolf population (solutions).
Evaluate the accuracy of every response.
Arrange solutions in decreasing fitness order.
Make the best response the dominant α wolf.
Make the second best response the β wolf.
Make the third best response the δ wolf.
For each iteration:
Generate new solutions for each grey wolf.
Examine the appropriateness of each new solution.
Adjust each grey wolf’s location according to its new solution.
Update the α wolf’s location based on a combination of the α, β, and δ wolves’ solutions.
Using the α wolf’s location and the leaders’ solutions, adjust the positions of the remaining wolves.
Evaluate the fitness of the updated solutions.
Arrange the grey wolves according to their most recent fitness.
Update the α, β, and δ wolves based on the new sorting.
Return the best solution (the alpha wolf) as the result.

3.3. Levy Flight Improved Grey Wolf Optimization

Levy flight is a type of random walk in which the step sizes are drawn from a Levy distribution, a probability distribution with heavy tails. This means that the distribution has a high probability of generating large steps, and these large steps occur far more frequently than they would under a normal distribution [40].
The Levy distribution is characterized by a power law tail, and its probability density function can be expressed as:
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{\gamma}{2\sigma^2 (x-\mu)}\right)\frac{\gamma}{(x-\mu)^{1+\gamma}}$
where:
  • μ is the parameter for location (the mean of the distribution);
  • σ is a scale parameter (the standard deviation of the distribution);
  • γ is the tail index parameter (controls the shape of the distribution).
The tail index parameter γ determines the shape of the distribution. When γ is between 0 and 2 [36], the distribution has infinite variance, which means that the variance of the distribution is undefined. When γ is greater than 2, the distribution has finite variance, and when γ is less than or equal to 1, the distribution has an infinite mean.
With the help of this technique, a random position is chosen and updated to improve the exploration capability. The equations for finding the new position of a wolf are:
$\mathrm{step} = u / |v|^{1/\beta}$
$\mathrm{stepsize} = 0.001 \times \mathrm{step} \times s$
$s = s + \mathrm{stepsize} \times \mathrm{randn}(\mathrm{size}(s))$
$\mathrm{position} = s \times \lnot(\mathrm{Flag}_{ub} \lor \mathrm{Flag}_{lb}) + ub \times \mathrm{Flag}_{ub} + lb \times \mathrm{Flag}_{lb}$
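These update rules correspond to a Lévy step generated with Mantegna’s algorithm. A minimal Python sketch under that assumption, with β as the Lévy exponent and the clamping of out-of-bounds components as in the last equation above, is:

import numpy as np
from math import gamma, pi, sin

def levy_step(s, lb, ub, beta=1.5):
    # Mantegna's algorithm: scale for the numerator of the Levy step.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(s.size) * sigma
    v = np.random.randn(s.size)
    step = u / np.abs(v) ** (1 / beta)
    stepsize = 0.001 * step * s
    s = s + stepsize * np.random.randn(s.size)
    # Push components that left the search space back onto the bounds.
    flag_ub, flag_lb = s > ub, s < lb
    return s * ~(flag_ub | flag_lb) + ub * flag_ub + lb * flag_lb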
With the help of these equations, the new position is initialized and given to the wolves. The algorithm then updates each wolf’s position in every iteration using the following equation (Equation (7)) [41]:
For the alpha wolf:
$x_i(t+1) = x_i(t) - A_j \cdot D_j$
Similarly, the next positions of the β, δ, and ω wolves are calculated using the varying distance vector D. Here, xi(t) is the current position of the ith wolf at time t, xi(t + 1) is the updated position of the ith wolf at time t + 1, Aj is the scaling factor for the jth wolf, and Dj is the distance vector between the ith wolf and the jth wolf. The scaling factors Aj are updated in each iteration as follows:
$A_j = 2 \times (1 - u) \times a - u^2$
where u is a uniform random number in the range [0, 1], a is the current iteration number, and j corresponds to the alpha, beta, delta, and omega wolves.
The distance vector Dj is calculated as follows:
$D_j = C_j \times |x_j - x_i|$
where Cj is a random vector in the range [0, 1] that is generated for each wolf in each iteration.
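In code, one leader-guided move under these equations reduces to a few lines; leader stands for the α, β, or δ wolf being followed, and A and C are the coefficient vectors of the current iteration (names are ours):

import numpy as np

def follow_leader(x_i, leader, A, C):
    # D_j = C_j * |x_j - x_i|;  x_i(t+1) = x_i(t) - A_j * D_j
    D = C * np.abs(leader - x_i)
    return x_i - A * D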
The algorithm terminates when a stopping criterion is met, such as a maximum number of iterations or a minimum level of improvement in the objective function [42]. The pseudo code for the algorithm is given below in Algorithm 3:
Algorithm 3 Newly Proposed
Set the grey wolf population (solutions).
Evaluate the accuracy of every response.
Arrange solutions in decreasing fitness order.
Make the best response the dominant α wolf.
Make the second best response the β wolf.
Make the third best response the δ wolf.
Evaluate the new positions of the wolves and replace the initial positions with them.
For each iteration:
Generate new solutions for each grey wolf using Levy flight.
Evaluate the accuracy of every new solution.
According to the new answer, adjust each grey wolf’s location.
Update the α wolf’s location based on a combination of its solution and the solutions of the β and δ wolves.
Evaluate the fitness of the updated solutions.
Arrange the grey wolves according to their most recent fitness.
Update the α, β, and δ wolves based on the new sorting.
Return the best solution (the α wolf) as the result.
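Pulling these pieces together, one iteration of the proposed procedure could be driven as follows, reusing the levy_step and follow_leader helpers sketched earlier. This is an illustrative skeleton with greedy selection and the standard GWO schedule for the A and C coefficients, not the authors’ exact code:

import numpy as np

def lf_igwo_iteration(wolves, fitness, func, a, lb, ub):
    order = np.argsort(fitness)
    leaders = wolves[order[:3]]                  # alpha, beta, delta
    for i in range(wolves.shape[0]):
        x = levy_step(wolves[i].copy(), lb, ub)  # Levy flight proposal
        moves = []
        for leader in leaders:
            A = 2 * a * np.random.rand(x.size) - a
            C = 2 * np.random.rand(x.size)
            moves.append(follow_leader(x, leader, A, C))
        x_new = np.clip(np.mean(moves, axis=0), lb, ub)
        f_new = func(x_new)
        if f_new < fitness[i]:                   # keep the better position
            wolves[i], fitness[i] = x_new, f_new
    return wolves, fitness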

4. Results and Discussion

The suggested approach is validated in this section through the resolution of classical benchmark problems. The classical benchmark set comprises 23 well-known functions; the CEC 2017 benchmark functions are used as well. In the classical set, F1 through F7 are uni-modal functions, F8–F13 are multi-modal functions, and the last ten functions, F14–F23, are fixed-dimensional functions [43]. Detailed information on the classical benchmark functions, with dimensions and ranges, is given in Table 2, Table 3 and Table 4. The proposed novel algorithm seeks the minima of these 23 classical benchmark functions. The same is performed with other well-known algorithms, such as PSO, GA, GWO, and I-GWO, for comparison with the LF-IGWO algorithm.
For a fair comparison across all these techniques, the population size used is 30 and each algorithm is run 20 times. Table 5 provides the specified parameters. Table 6 compares the mean results of these strategies on the benchmark functions; the best value for each function is highlighted in bold. The functions F1 to F7 are used to test exploitation capabilities, and apart from F5 and F7, the new technique LF-IGWO performs better than the other algorithms on these. Functions F8 to F13 have many local minima, so a technique must have exploration capability; apart from F8, LF-IGWO performed better than the other techniques on these. Additionally, on functions F14 to F23, all the algorithms gave almost the same result, except for F14 and F15. Therefore, it can be said that the overall performance of the new technique is better than that of the others. The benchmark function models and results are depicted in Figures 3–25.
The results for functions F1 to F7 are depicted in Figures 3–9, in which the newly proposed technique is shown in green. One can infer from the convergence curves that LF-IGWO has a good convergence rate and gives a better result than IGWO and GWO. The results therefore indicate that LF-IGWO has good exploitation capabilities.
With regard to the multi-modal functions F8–F13, depicted in Figures 10–15, on function F9 GWO is better than IGWO and the newly proposed LF-IGWO. However, LF-IGWO has a better convergence rate than IGWO, as shown in Figure 12. For functions F10 and F12, the convergence rates of IGWO and LF-IGWO are nearly the same, but LF-IGWO gives a somewhat better result at the end of 1000 iterations. From this, it can be concluded that the newly defined technique LF-IGWO has better exploration capabilities.
For the fixed-dimensional functions F14–F23, depicted in Figures 16–25, the convergence rate is almost the same for LF-IGWO and IGWO, and the final solutions of GWO, IGWO, and LF-IGWO coincide at the end of the 1000 iterations.
Table 7 presents a comparison of the algorithms with the newly proposed Levy flight-based improved grey wolf optimization on the CEC 2017 benchmark functions. From the table, it can be clearly seen that it performs better than other popular algorithms when run with a population size of 30, 51 independent runs, 1000 iterations, and a dimension of 30.

4.1. 31-Level Cascaded H-Bridge MLI

A multilevel cascaded H-bridge (CHB) inverter is a type of power electronic device that can generate high-voltage, high-quality AC waveforms by synthesizing the output of multiple low-voltage DC sources [44]. It is a popular choice for high-power applications including motor drives and renewable energy sources.
The basic building block of a CHB multilevel inverter is the H-bridge module. An H-bridge is a configuration of four switches (typically MOSFETs or IGBTs) that can control the polarity and magnitude of the output voltage across a load [45]. By cascading multiple H-bridge modules, it is possible to generate a staircase-like voltage waveform that approximates a sinusoidal waveform.
A CHB multilevel inverter can produce several output voltage levels by switching the H-bridge modules on and off in a specific pattern. For instance, a two-level CHB inverter can generate an output voltage that switches between two voltage levels, while a three-level CHB inverter can generate an output voltage that switches between three voltage levels.
The general equation for the output voltage waveform of a CHB multilevel inverter can be written as:
$V_o(t) = \frac{2}{\pi} \sum_{m=1}^{\infty} \frac{1}{m} \sin(m\omega t) \times \sum_{k=1}^{N} V_{dc,k} \sin\!\left(\frac{(2k-1)m\pi}{2N}\right)$
where:
  • V o t is the output voltage waveform;
  • ω is the resulting waveform’s fundamental frequency;
  • N is the inverter’s H-bridge module count;
  • V d c , k is the DC voltage input of the kth H-bridge module.
In practice, the output waveform becomes closer to a sine wave as the number of voltage levels increases.
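As a brief illustration of how the staircase is formed, the fundamental-frequency-switched output of an inverter with N equal DC sources and firing angles θ_1 < … < θ_N (in radians) can be synthesized as follows; this is a generic sketch, not the authors’ simulation code:

import numpy as np

def chb_waveform(t, theta, Vdc=1.0, f=50.0):
    # Each H-bridge contributes +/-Vdc once its firing angle has been passed.
    wt = (2 * np.pi * f * t) % (2 * np.pi)
    half = wt % np.pi                        # position within the half cycle
    # Quarter-wave symmetry: rising staircase, then its mirror image.
    angle = np.where(half < np.pi / 2, half, np.pi - half)
    level = np.sum(angle[:, None] >= np.asarray(theta)[None, :], axis=1) * Vdc
    return np.where(wt < np.pi, level, -level)

For the 31-level inverter considered here, N = 15 H-bridge sources give 31 output levels (15 positive, 15 negative, and zero).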
The output in the inverter is obtained by combining the outputs of the individual H-bridge circuits. This allows for a higher number of voltage levels to be generated, resulting in a more efficient and higher-quality output waveform. Figure 26 depicts the H-bridge circuit [36].
When switches S1 and S4 are closed (and S2 and S3 are open), there will be a positive voltage applied to the entire load. By turning OFF the S1 and S4 switches and turning ON the S2 and S3 switches, the voltage is reversed, permitting a negative voltage [46].
The S1 and S2 switches are never turned on at the same time, however, as this would short-circuit the input voltage supply. The same caution applies to switches S3 and S4. The technical term for this fault condition is “shoot-through.”
As can be seen from the waveform in Figure 27, the CHB inverter can produce a waveform that closely approximates a sinusoidal waveform with low harmonic distortion. This makes it a suitable choice for high-power applications that require high-quality AC power.

4.2. Harmonics

In electrical engineering, harmonics refer to the sinusoidal components of an alternating current (AC) signal that have frequencies that are integer multiples of the fundamental frequency [47]. In most electrical power systems, the fundamental frequency, which is the lowest frequency included in an AC signal, is commonly 50 or 60 Hz. The presence of harmonics in an AC signal is caused by non-linear electrical loads, such as electronic devices and power electronic equipment, which generate distorted waveforms when they receive an AC signal [48]. Harmonics can have several negative impacts on electrical systems, such as:
  • Increased current levels: Harmonics can cause an increase in the RMS current levels, which can result in the overloading of conductors, transformers, and other electrical equipment.
  • Decreased power factor: A reduction in the power factor, a measurement of an electrical system’s efficiency, can be brought on by the presence of harmonics. This can result in increased energy costs and decreased system efficiency.
  • Increased heating: The additional current levels caused by harmonics can result in increased heating in conductors and transformers, which can reduce their life expectancy and cause safety issues.
  • Interference with communication systems: Harmonics can interfere with communication systems and cause problems such as data corruption and interference with radio and television signals.
Several measures can be employed to mitigate these issues, such as harmonic filters, active harmonic filters, and passive filters. These filters work by attenuating or filtering out the harmonic components from the AC signal, resulting in a more sinusoidal waveform with fewer harmonics. Other measures, such as the use of balanced loads and power electronic devices with low harmonic distortion, can also help to reduce the level of harmonics in electrical systems.

4.3. Total Harmonics Distortion

Total harmonic distortion (THD) is a measure of the amount of distortion present in an alternating current (AC) waveform [49]. It is defined as the ratio of the combined magnitude of all the waveform’s harmonic components to the fundamental frequency component. The THD is usually expressed as a percentage and is used to characterize the quality of an AC signal.
Harmonic distortion in an AC waveform can be caused by non-linear loads, such as electronic devices and power electronic equipment, which generate distorted waveforms when they receive an AC signal [50]. These harmonics can have several negative impacts on electrical systems, such as increased current levels, decreased power factor, increased heating in conductors and transformers, and interference with communication systems.
The THD is an important parameter for characterizing the performance of electrical power systems, as well as electronic devices and power electronic equipment. For example, in power systems, a low THD indicates a high-quality waveform and a more efficient system, while a high THD indicates a distorted waveform and a less efficient system. Similarly, in electronic devices, a low THD indicates a high-quality signal and a more reliable device, while a high THD indicates a distorted signal and a less reliable device.
Several measures can be employed to reduce THD in electrical systems, such as harmonic filters, active harmonic filters, and passive filters. These filters work by attenuating or filtering out the harmonic components from the AC signal, resulting in a waveform that is more sinusoidal and has fewer harmonics. Other measures, such as the use of balanced loads and power electronic devices with low harmonic distortion, can also help to reduce THD. To eliminate the passive filter and thereby simplify the circuit, optimization techniques have been used; these yield a low THD value while making the circuit less complex. The equation for total harmonic distortion (THD) is:
$\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} V_n^2}}{V_1} \times 100\%$
where $V_1$ is the amplitude of the fundamental frequency component (the first harmonic) and $V_n$ ($n \ge 2$) are the amplitudes of the harmonics, whose frequencies are integer multiples of the fundamental. Table 8 shows a comparison of the firing angles obtained with the different optimization techniques. Additionally, Figures 28–31 show the waveforms obtained from the different optimization techniques and their THD spectra. Table 9 shows a comparison of THD values.
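A sketch of this computation for a sampled waveform, estimating the harmonic content with an FFT over exactly one fundamental period, might look like the following (the common FFT scaling factor cancels in the ratio, so the bins can be used directly):

import numpy as np

def thd_percent(v, n_harmonics=50):
    # v must contain exactly one fundamental period; bin 1 is then the fundamental.
    spectrum = np.abs(np.fft.rfft(v)) / len(v)
    fundamental = spectrum[1]
    harmonics = spectrum[2:2 + n_harmonics]
    # THD = sqrt(sum of squared harmonic magnitudes) / fundamental * 100
    return 100.0 * np.sqrt(np.sum(harmonics ** 2)) / fundamental

For example, thd_percent(chb_waveform(np.linspace(0, 0.02, 20000, endpoint=False), theta)) evaluates a candidate set of firing angles over one 50 Hz period.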

5. Engineering Problems

5.1. Tension Compression Spring

A tension/compression spring’s weight must be minimized while taking into account several constraints, such as shear stress, surge frequency, and minimum deflection [35]. The decision variables are the diameter of the wire ( d ), the mean coil diameter ( D ), and the number of active coils ( N ) [51]. Below is the mathematical formulation:
$\vec{x} = [x_1\; x_2\; x_3] = [d\; D\; N], \quad \text{that minimize}$
$f(\vec{x}) = (N + 2)\, D d^2$
$g_1(\vec{x}) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \le 0$
$g_2(\vec{x}) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\, x_1^2} - 1 \le 0$
$g_3(\vec{x}) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \le 0$
$g_4(\vec{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0$
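For reference, a penalty-based evaluation of this problem, as it could be handed to LF-IGWO or any of the compared metaheuristics, can be sketched as follows; the static penalty weight is an assumption of ours, not part of the original formulation:

import numpy as np

def spring_cost(x):
    d, D, N = x
    f = (N + 2) * D * d ** 2
    g = np.array([
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4)) + 1 / (5108 * d ** 2) - 1,
        1 - (140.45 * d) / (D ** 2 * N),
        (d + D) / 1.5 - 1,
    ])
    # Static penalty: heavily punish every violated constraint g_i > 0.
    return f + 1e6 * np.sum(np.maximum(g, 0) ** 2)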
This problem has been addressed using numerous metaheuristic algorithms. The proposed method also solves it, and the results in Table 10 show how it performs in comparison to other algorithms.
The optimal weight according to Equation (13), as determined by the LF-IGWO algorithm, is 0.01267. This outcome is very close to the optimal values found by GWO and PSO. According to Table 10, the optimal weight obtained by the LF-IGWO method is lower than that of other algorithms, such as GA, PSO, and I-GWO, and equal to that of GWO.

5.2. Pressure Vessel Design

In this problem, the optimization aims to reduce the overall cost of constructing a cylindrical vessel, including the material cost and the overall expense of the welding process. The decision variables are the length of the cylindrical section without the head ( L ), the thickness of the shell ( T s ), the thickness of the head ( T h ), and the internal radius ( R ) [52]. There are four inequality constraints in this problem, three of which are linear and one of which is nonlinear. The following equations are the mathematical representation:
$\vec{x} = [x_1\; x_2\; x_3\; x_4] = [T_s\; T_h\; R\; L], \quad \text{that minimize}$
$f(\vec{x}) = 0.6224\, T_s R L + 1.7781\, T_h R^2 + 3.1661\, T_s^2 L + 19.84\, T_s^2 R$
$g_1(\vec{x}) = -x_1 + 0.0193\, x_3 \le 0$
$g_2(\vec{x}) = -x_2 + 0.00954\, x_3 \le 0$
$g_3(\vec{x}) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0$
$g_4(\vec{x}) = x_4 - 240 \le 0$
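Under the same penalty scheme sketched for the spring problem, the pressure vessel objective and constraints translate directly (again, the penalty weight is our assumption):

import numpy as np

def vessel_cost(x):
    Ts, Th, R, L = x
    f = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
         + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = np.array([
        -Ts + 0.0193 * R,
        -Th + 0.00954 * R,
        -np.pi * R ** 2 * L - (4 / 3) * np.pi * R ** 3 + 1296000,
        L - 240,
    ])
    # Add a large cost for each violated constraint.
    return f + 1e6 * np.sum(np.maximum(g, 0) ** 2)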
Metaheuristic algorithms were employed to address this problem. Table 11 compares the performance of the LF-IGWO algorithm with that of other techniques. It is clear from Table 11 that, in comparison to many other algorithms, the LF-IGWO method achieves a low cost, calculated using Equation (19). The obtained cost is reasonably close to that of GA. GWO appears to perform best here; however, LF-IGWO provides a better outcome than GA, PSO, and IGWO.

5.3. Welded Beam Design

The optimization approach is applied to reduce the fabrication cost of the design. The decision variables are the width of the weld (h), the length of the attached part of the bar (l), the height of the bar (t), and the bar’s thickness (b) [53]. There are seven inequality constraints [54]. The given problem has the following mathematical formulation:
$\vec{x} = [x_1\; x_2\; x_3\; x_4] = [h\; l\; t\; b], \quad \text{that minimize}$
$f(\vec{x}) = 1.10471\, h^2 l + 0.04811\, t b\, (14.0 + l)$
$g_1(\vec{x}) = \tau(\vec{x}) - \tau_{max} \le 0$
$g_2(\vec{x}) = \sigma(\vec{x}) - \sigma_{max} \le 0$
$g_3(\vec{x}) = \delta(\vec{x}) - \delta_{max} \le 0$
$g_4(\vec{x}) = x_1 - x_4 \le 0$
$g_5(\vec{x}) = P - P_c(\vec{x}) \le 0$
$g_6(\vec{x}) = 0.125 - x_1 \le 0$
$g_7(\vec{x}) = 1.10471\, x_1^2 + 0.04811\, x_3 x_4\, (14.0 + x_2) - 5.0 \le 0$
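The welded beam problem follows the same pattern. The stress, deflection, and buckling terms τ(x), σ(x), δ(x), and P_c(x) follow the standard definitions in the literature [53,54] and are passed in as callables here; the default limit values shown are the customary ones for this benchmark and should be checked against the source formulation:

import numpy as np

def beam_cost(x, tau, sigma, delta, Pc,
              P=6000.0, tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    h, l, t, b = x
    f = 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
    g = np.array([
        tau(x) - tau_max,
        sigma(x) - sigma_max,
        delta(x) - delta_max,
        h - b,
        P - Pc(x),
        0.125 - h,
        1.10471 * h ** 2 + 0.04811 * t * b * (14.0 + l) - 5.0,
    ])
    return f + 1e6 * np.sum(np.maximum(g, 0) ** 2)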
This design challenge was subjected to the LF-IGWO method, and the outcomes are displayed in Table 12. Table 12 shows how the LF-IGWO algorithm outperforms other competing algorithms in minimizing the cost. It can be concluded that the LF-IGWO method performs better than other algorithms on some design challenges and is capable of tackling constrained engineering design problems [55,56,57].
The statistical Wilcoxon rank sum test was performed on the newly proposed technique. The statistical data obtained by running the functions given in Table 2, Table 3 and Table 4 and the engineering problems with the different algorithms are shown in Table 13. From this test, it was found that the improvement is not statistically significant for the engineering problems: there is only a minor improvement in the results of the newly proposed Levy flight-based improved grey wolf optimization when compared with improved grey wolf optimization, so it can be concluded that LF-IGWO is not significantly better than I-GWO there. On the benchmark functions, however, it performs better than GWO, PSO, and GA, while performing moderately when compared with IGWO and SSR.

6. Conclusions

A unique, metaphor-free metaheuristic algorithm is suggested in this study, which obtains the best result by iteratively reducing the search space. Using this method, 23 common statistical benchmark functions are minimized. The outcomes demonstrate the strong exploration and exploitation capabilities of the suggested method. Based on the results for the test functions, the LF-IGWO algorithm performs better than the other algorithms, particularly on multimodal benchmark functions. This highlights the capacity of the LF-IGWO algorithm to avoid local optima, which increases its competitiveness and supports the statements made in the research. One drawback is that the algorithm becomes somewhat complex and hard to understand: as the DLH strategy is already applied in improved grey wolf optimization (IGWO), the addition of one more strategy, Levy flight, makes the method somewhat difficult for new researchers to follow.
Furthermore, three constrained engineering design problems are solved using the LF-IGWO algorithm. The outcomes demonstrate that the LF-IGWO algorithm is also capable of solving constrained design problems. According to the performance comparison data, the LF-IGWO approach outperforms the other algorithms in two of the three design challenges considered. Overall, it can be said that this method, despite its straightforward logic, can provide very competitive outcomes when compared with well-known optimization techniques. Additionally, the suggested technique is used to solve the 31-level inverter problem for THD minimization to demonstrate its functionality. The results show that the proposed algorithm yields firing angles that give a total harmonic distortion (THD) value below 5%, in line with the IEEE 519 standard.

Author Contributions

Conceptualization, B.B.; Data curation, H.S. and K.A.; Funding acquisition, G.P.J. and B.S.; Investigation, G.P.J. and B.S.; Methodology, B.B. and H.S.; Project administration, B.S.; Resources, G.P.J. and B.S.; Software, B.B., H.S. and K.A.; Supervision, G.P.J. and B.S.; Validation, H.S., K.A. and B.S.; Visualization, H.S. and K.A.; Writing—original draft, K.A.; Writing—review and editing, B.B. and G.P.J. All authors have read and agreed to the published version of the manuscript.

Funding

The present research has been conducted by the Research Grant of Kwangwoon University in 2023.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated during the current study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Kaur, K.; Kumar, Y. Swarm Intelligence and its applications towards Various Computing: A Systematic Review. In Proceedings of the 2020 International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 17–19 June 2020; pp. 57–62.
  2. Trianni, V.; Tuci, E.; Passino, K.M.; Marshall, J.A.R. Swarm Cognition: An interdisciplinary approach to the study of self-organising biological collectives. Swarm Intell. 2011, 5, 3–18.
  3. Sneha; Wajahat. Swarm Intelligence. Int. J. Sci. Eng. Res. 2017, 8, 10.
  4. Hazem, A.; Glasgow, J. Swarm Intelligence: Concepts, Models and Applications; School of Computing, Queen’s University: Kingston, ON, Canada, 2012.
  5. Zahra, B.; Siti Mariyam, S. A review of population-based meta-heuristic algorithm. Int. J. Adv. Soft Comput. Its Appl. 2013, 5, 1–35.
  6. Colin, R. Genetic Algorithms. In Handbook of Metaheuristics; Springer: Boston, MA, USA, 2010; pp. 109–139.
  7. Seyedali, M. Particle Swarm Optimisation. In Evolutionary Algorithms and Neural Networks; Springer: Cham, Switzerland, 2019; pp. 15–31.
  8. Yi, G.; Jin, M.; Zhou, Z. Research on a Novel Ant Colony Optimization Algorithm. In Advances in Neural Networks—Lecture Notes in Computer Science; Zhang, L., Lu, B.L., Kwok, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6063.
  9. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  10. Ribeiro, M.d.R.; Aguiar, M.S.d. Cultural Algorithms: A Study of Concepts and Approaches. In Proceedings of the 2011 Workshop-School on Theoretical Computer Science, Pelotas, Brazil, 24–26 August 2011; pp. 145–148.
  11. Rutenbar, R.A. Simulated annealing algorithms: An overview. IEEE Circuits Devices Mag. 1989, 5, 19–26.
  12. Prajapati, V.K.; Jain, M.; Chouhan, L. Tabu Search Algorithm (TSA): A Comprehensive Survey. In Proceedings of the 2020 3rd International Conference on Emerging Technologies in Computer Engineering: Machine Learning and Internet of Things (ICETCE), Jaipur, India, 7–8 February 2020; pp. 1–8.
  13. Zhang, J.; Zhang, P. A study on harmony search algorithm and applications. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 736–739.
  14. Kytöjoki, J.; Nuortio, T.; Bräysy, O.; Gendreau, M. An efficient variable neighborhood search heuristic for very large scale vehicle routing problems. Comput. Oper. Res. 2007, 34, 2743–2757.
  15. Mohammad, H.; Mohammad, H. Simplex method to Optimize Mathematical manipulation. Int. J. Recent Technol. Eng. (IJRTE) 2019, 7, 5.
  16. Alonso, G.; del Valle, E.; Ramirez, J.R. 5—Optimization methods. In Woodhead Publishing Series in Energy, Desalination in Nuclear Power Plants; Woodhead Publishing: Shaxton, UK, 2020; pp. 67–76. ISBN 9780128200216.
  17. Lian, P.; Wang, C.; Xiang, B.; Shi, Y.; Xue, S. Gradient-based optimization method for producing a contoured beam with single-fed reflector antenna. J. Syst. Eng. Electron. 2019, 30, 22–29.
  18. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  19. Shukla, P.; Singh, S.K.; Khamparia, A.; Goyal, A. Nature-inspired optimization techniques. In Nature-Inspired Optimization Algorithms; Academic Press: Cambridge, MA, USA, 2021.
  20. Ingber, A.; Lester, I. Simulated annealing: Practice versus theory. Math. Comput. Model. 2002, 18, 29–57.
  21. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
  22. Anita; Yadav, A.; Kumar, N. Artificial electric field algorithm for engineering optimization problems. Expert Syst. Appl. 2020, 149, 113308.
  23. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
  24. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
  25. Bansal, J.C.; Sharma, H.; Jadon, S.S. Artificial bee colony algorithm: A survey. Int. J. Adv. Intell. Paradig. 2013, 5, 123–159.
  26. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997.
  27. Guo, C.; Tang, H.; Niu, B.; Lee, C.B.P. A survey of bacterial foraging optimization. Neurocomputing 2021, 452, 728–746.
  28. Zou, F.; Chen, D.; Xu, Q. A survey of teaching–learning-based optimization. Neurocomputing 2019, 335, 366–383.
  29. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667.
  30. Zhang, T.; Yang, C.; Zhao, X. Using Improved Brainstorm Optimization Algorithm for Hardware/Software Partitioning. Appl. Sci. 2019, 9, 866.
  31. Abdelhamid, M.; Kamel, S.; Mohamed, M.A.; Aljohani, M.; Rahmann, C.; Mosaad, M.I. Political Optimization Algorithm for Optimal Coordination of Directional Overcurrent Relays. In Proceedings of the 2020 IEEE Electric Power and Energy Conference (EPEC), Edmonton, AB, Canada, 9–10 November 2020; pp. 1–7.
  32. Huynh, N.T.; Nguyen, T.V.T.; Nguyen, Q.M. Optimum Design for the Magnification Mechanisms Employing Fuzzy Logic–ANFIS. Comput. Mater. Continua 2022, 73, 5961–5983.
  33. Kler, R.; Gangurde, R.; Elmirzaev, S.; Hossain, S.; Vo, N.V.T.; Nguyen, T.V.T.; Kumar, P.N. Optimization of Meat and Poultry Farm Inventory Stock Using Data Analytics for Green Supply Chain Network. Discret. Dyn. Nat. Soc. 2022, 2022, 8970549.
  34. Huynh, T.T.; Nguyen, T.V.T.; Nguyen, Q.M.; Nguyen, T.K. Minimizing Warpage for Macro-Size Fused Deposition Modeling Parts. Comput. Mater. Contin. 2021, 68, 2913–2923.
  35. Al-Khazraji, H. Optimal design of a proportional-derivative state feedback controller based on meta-heuristic optimization for a quarter car suspension system. Math. Model. Eng. Probl. 2022, 9, 437–442.
  36. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  37. Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435.
  38. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917.
  39. Jia, H.; Sun, K.; Zhang, W.; Leng, X. An enhanced chimp optimization algorithm for continuous optimization domains. Complex Intell. Syst. 2022, 8, 65–82.
  40. Gai, W.; Qu, C.; Liu, J.; Zhang, J. An improved grey wolf algorithm for global optimization. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 2494–2498.
  41. Banaie-Dezfouli, M.; Nadimi-Shahraki, M.H.; Beheshti, Z. R-GWO: Representative-based grey wolf optimizer for solving engineering problems. Appl. Soft Comput. 2021, 106, 107328.
  42. Kamaruzaman, A.F.; Zain, A.M.; Yusuf, S.M.; Udin, A. Levy Flight Algorithm for Optimization Problems—A Literature Review. Appl. Mech. Mater. 2013, 421, 496–501.
  43. Li, J.; An, Q.; Lei, H.; Deng, Q.; Wang, G.-G. Survey of Lévy Flight-Based Metaheuristics for Optimization. Mathematics 2022, 10, 2785.
  44. Mahesh, A.; Sushnigdha, G. A novel search space reduction optimization algorithm. Soft Comput. 2021, 25, 9455–9482.
  45. Prasad, K.N.V.; Kumar, G.R.; Kiran, T.V.; Narayana, G.S. Comparison of different topologies of cascaded H-Bridge multilevel inverter. In Proceedings of the 2013 International Conference on Computer Communication and Informatics, Coimbatore, India, 4–6 January 2013; pp. 1–6.
  46. Gaikwad, A.; Arbune, P.A. Study of cascaded H-Bridge multilevel inverter. In Proceedings of the 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), Pune, India, 9–10 September 2016; pp. 179–182.
  47. Krishna, R.A.; Suresh, L.P. A brief review on multi level inverter topologies. In Proceedings of the 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 18–19 March 2016; pp. 1–6.
  48. Hassan, N.; Mohagheghian, I. A Particle Swarm Optimization algorithm for mixed variable nonlinear problems. IJE Trans. A Basics 2011, 24, 65–78.
  49. Firdoush, S.; Kriti, S.; Raj, A.; Singh, S.K. Reduction of Harmonics in Output Voltage of Inverter. Int. J. Eng. Res. Technol. (IJERT) 2016, 4, 1–6.
  50. Jacob, T.; Suresh, L.P. A review paper on the elimination of harmonics in multilevel inverters using bioinspired algorithms. In Proceedings of the 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 18–19 March 2016; pp. 1–8.
  51. Mohd Radzi, M.Z.; Azizan, M.M.; Ismail, B. Observatory case study on total harmonic distortion in current at laboratory and office building. Phys. Conf. Ser. 2020, 1432, 012008.
  52. BarathKumar, T.; Vijayadevi, A.; Brinda Dev, A.; Sivakami, P.S. Harmonic Reduction in Multilevel Inverter Using Particle Swarm Optimization. IJISET—Int. J. Innov. Sci. Eng. Technol. 2017, 4, 99–104.
  53. Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization. Appl. Soft Comput. 2017, 59, 596–621.
  54. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  55. Agushaka, J.O.; Ezugwu, A.E. Advanced arithmetic optimization algorithm for solving mechanical engineering design problems. PLoS ONE 2021, 16, e0255703.
  56. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110.
  57. Boora, K.; Kumar, A.; Malhotra, I.; Kumar, V. Harmonic reduction using Particle Swarm Optimization based SHE Modulation Technique in Asymmetrical DC-AC Converter. Int. J. Electr. Comput. Eng. Syst. 2022, 13, 867–875.
Figure 1. The hierarchical distribution of nature-inspired algorithms.
Figure 2. The hierarchy distribution of the grey wolf.
Figure 3. (a) Search space for unimodal function F01; (b) comparison of convergence curves for F01.
Figure 4. (a) Search space for unimodal function F02; (b) comparison of convergence curves for F02.
Figure 5. (a) Search space for unimodal function F03; (b) comparison of convergence curves for F03.
Figure 6. (a) Search space for unimodal function F04; (b) comparison of convergence curves for F04.
Figure 7. (a) Search space for unimodal function F05; (b) comparison of convergence curves for F05.
Figure 8. (a) Search space for unimodal function F06; (b) comparison of convergence curves for F06.
Figure 9. (a) Search space for unimodal function F07; (b) comparison of convergence curves for F07.
Figure 10. (a) Search space for multi-modal function F08; (b) comparison of convergence curves for F08.
Figure 11. (a) Search space for multi-modal function F09; (b) comparison of convergence curves for F09.
Figure 12. (a) Search space for multi-modal function F10; (b) comparison of convergence curves for F10.
Figure 13. (a) Search space for multi-modal function F11; (b) comparison of convergence curves for F11.
Figure 14. (a) Search space for multi-modal function F12; (b) comparison of convergence curves for F12.
Figure 15. (a) Search space for multi-modal function F13; (b) comparison of convergence curves for F13.
Figure 16. (a) Search space for fixed dimensional function F14; (b) comparison of convergence curves for F14.
Figure 17. (a) Search space for fixed dimensional function F15; (b) comparison of convergence curves for F15.
Figure 18. (a) Search space for fixed dimensional function F16; (b) comparison of convergence curves for F16.
Figure 19. (a) Search space for fixed dimensional function F17; (b) comparison of convergence curves for F17.
Figure 20. (a) Search space for fixed dimensional function F18; (b) comparison of convergence curves for F18.
Figure 21. (a) Search space for fixed dimensional function F19; (b) comparison of convergence curves for F19.
Figure 22. (a) Search space for fixed dimensional function F20; (b) comparison of convergence curves for F20.
Figure 23. (a) Search space for fixed dimensional function F21; (b) comparison of convergence curves for F21.
Figure 24. (a) Search space for fixed dimensional function F22; (b) comparison of convergence curves for F22.
Figure 25. (a) Search space for fixed dimensional function F23; (b) comparison of convergence curves for F23.
Figure 26. H-Bridge circuit model.
Figure 27. Cascaded H-bridge waveform.
Figure 28. (a) LF-IGWO waveform; (b) THD spectrum of LF-IGWO.
Figure 29. (a) IGWO waveform; (b) THD spectrum of IGWO.
Figure 30. (a) GWO waveform; (b) THD spectrum of GWO.
Figure 31. (a) PSO waveform; (b) THD spectrum of PSO.
Table 1. Literature review.

Simulated Annealing: A probabilistic technique for locating a function's global optimum by iteratively perturbing a candidate solution and accepting it with a probability governed by a temperature parameter. At each iteration, the algorithm compares the energies of the new and current solutions and accepts the new solution if it is better, or otherwise with a probability that decreases over time [20].
Gravitational Search: A population-based optimization method inspired by the law of gravity and the interaction of masses. The algorithm uses a set of masses, representing candidate solutions, that attract or repel one another based on their positions and masses [21].
Artificial Electric Field: A metaheuristic optimization approach inspired by the electrostatic force in physics. The algorithm represents each potential solution as a charged particle that interacts with other particles via the electrostatic force [22].
Sine Cosine Algorithm: A metaheuristic inspired by the sine and cosine functions in mathematics. Each potential solution is represented by a position vector in the SCA's high-dimensional search space, and the technique uses the sine and cosine functions to create random vectors that define the search directions [23].
Equilibrium Optimizer: A metaheuristic optimization method motivated by the equilibrium concept in physics. Each potential solution is modeled as a particle that interacts with other particles via the forces of gravity and elastic deformation [24].
Artificial Bee Colony: A metaheuristic inspired by the foraging habits of honey bees. The algorithm maintains a population of candidate solutions, each represented by a bee, and uses three types of bees: employed bees, onlookers, and scouts [25].
Particle Swarm Optimization: A metaheuristic drawing on the social behavior of bird flocks and fish schools. Each potential solution is a particle in a multidimensional search space; the particles move across the search space, adjusting their positions and velocities in response to their own experience and that of their nearby neighbors [7].
Ant Colony Optimization: A metaheuristic inspired by how ants forage. The population of candidate solutions is modeled as an ant colony, with each ant representing one potential solution. The algorithm mimics the actions of ants as they look for food, with the food source playing the role of the optimum [8].
Artificial Fish Swarm: A metaheuristic inspired by how fish forage. Each fish in the population represents one candidate solution, and the algorithm mimics how fish swim and search for food, with the food source playing the role of the optimum [26].
Bacterial Foraging Optimization: A metaheuristic inspired by how bacteria forage. Each bacterium in the population represents one candidate solution, and the algorithm mimics how bacteria scavenge for nutrients, with the nutrients playing the role of the optimum [27].
Harmony Search Optimization: A metaheuristic inspired by musical improvisation that searches for a function's global optimum with a population-based approach. At each iteration, the algorithm produces a new harmony by selecting elements from the existing solutions and injecting some randomness; a memory-based mechanism is also incorporated to speed up convergence [13].
Teaching Learning-based Optimization: A metaheuristic inspired by classroom teaching and learning. Each student in the population represents one candidate solution, and the algorithm mimics how students learn through interaction with the teacher and with one another [28].
Imperialist Competition Algorithm: A metaheuristic founded on the principles of social rivalry and hierarchical organization. The population is modeled as a collection of empires, each consisting of one imperialist and one or more colonies; the algorithm mimics how empires compete and cooperate to increase their influence and power [29].
Brain Storm Optimization: A metaheuristic inspired by the behavior of human brain neurons. The algorithm simulates the brainstorming process, in which a group of people generate and evaluate solutions to a problem. Each person in BSO represents one candidate solution, and positions are adjusted based on interactions with the other individuals [30].
Political Optimizer: A metaheuristic inspired by how politicians act within a political system. The algorithm mimics political rivalry, in which politicians compete and cooperate to accomplish their objectives; each politician in PO represents one candidate solution, and positions are adjusted based on interactions with the other politicians in the population [31].
Differential Evolution: A stochastic, population-based optimization algorithm that gradually weeds out suboptimal solutions from a population of candidates. The fundamental strategy is to generate a population of individuals and evolve them using the three key operators of mutation, crossover, and selection [9].
Genetic Algorithm: A metaheuristic inspired by evolution and natural selection. A population of individuals is created and evolved over generations using genetic operators such as crossover and mutation; every member of the population represents a candidate solution [6].
Evolutionary Strategy: Also inspired by evolution and natural selection, but differing from the GA in several significant ways. The fundamental idea of ES is to generate a population of individuals and evolve them through mutation and selection across generations; every member of the population is a candidate solution.
Evolutionary Programming: Like ES and the GA, a family of optimization algorithms inspired by natural selection and evolution. EP generates a population of individuals and gradually evolves them over generations using mutation and selection; every member of the population is a candidate solution.
Genetic Programming: A machine learning technique that uses evolutionary computation to automatically discover computer programs that solve a problem. GP is a variant of the GA and EP, but instead of evolving parameter vectors, it evolves computer programs represented as trees.
Optimization of Meat and Poultry Farm Inventory Stock Using Data Analytics for Green Supply Chain Network: Optimizing inventory stock in meat and poultry farms is important for a sustainable and efficient supply chain network. Data analytics is used to analyze the factors that affect inventory levels, such as demand, production capacity, and supply chain lead time. The optimization model considers demand patterns, production schedules, and storage capacity, and is trained on historical data on inventory levels, sales, and other relevant metrics; the trained model then predicts the optimal inventory level for each item [32].
Optimum Design for the Magnification Mechanisms Employing Fuzzy Logic–ANFIS: Fuzzy logic and ANFIS (Adaptive Neuro-Fuzzy Inference System) are employed to optimize the design of a centrifugal pump. Fuzzy logic is a mathematical technique for handling uncertainty and imprecision in data and is commonly used in control systems; ANFIS is a fuzzy inference system that uses neural networks to model the fuzzy logic. The design involves parameters such as impeller diameter, number of blades, blade angle, and outlet diameter, which are optimized to achieve the desired performance [33].
Minimizing Warpage for Macro-sized Fused Deposition Modeling Parts: Several methods minimize warpage in macro-sized FDM parts. The first is optimizing the part design: avoiding features prone to warping, such as sharp corners, thin walls, and unsupported overhangs, and choosing proper wall thickness and infill density for structural integrity and dimensional stability. Overall, minimizing warpage involves optimizing the part design, the printing process parameters, and the support structures, as well as using a heated build platform [34].
Optimal Switching Angle Scheme for a Cascaded H-Bridge Inverter Using Pigeon-Inspired Optimization: A cascaded H-bridge inverter is a multilevel inverter widely used in high-power applications such as electric vehicles, renewable energy systems, and industrial motor drives. It consists of several H-bridge modules connected in series to produce a stepped output waveform; each module comprises four power switches (IGBTs or MOSFETs) and a DC voltage source, and the switches are controlled via firing angles to approximate a sinusoidal output [35].
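To make the acceptance rule described in the simulated-annealing entry of Table 1 concrete, the following is a minimal Python sketch. The function names, the geometric cooling schedule, and the toy 1-D objective are illustrative assumptions, not details taken from the cited work [20].

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95, iters=1000):
    """Minimal simulated-annealing loop: perturb, then accept worse
    moves with probability exp(-delta / temperature)."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)    # random perturbation
        fc = objective(candidate)
        delta = fc - fx
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                                   # geometric cooling schedule
    return best, fbest

# Toy usage: minimize f(x) = x^2 starting from x = 10.
print(simulated_annealing(lambda x: x * x, 10.0))
```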
Table 2. Unimodal benchmark function equations.

$F_1(x) = \sum_{i=1}^{n} x_i^2$; Range: [−100, 100]; Dim: 30; $f_{\min} = 0$
$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$; Range: [−10, 10]; Dim: 30; $f_{\min} = 0$
$F_3(x) = \sum_{i=1}^{n} \big( \sum_{j=1}^{i} x_j \big)^2$; Range: [−100, 100]; Dim: 30; $f_{\min} = 0$
$F_4(x) = \max_i \{\, |x_i|,\ 1 \le i \le n \,\}$; Range: [−100, 100]; Dim: 30; $f_{\min} = 0$
$F_5(x) = \sum_{i=1}^{n-1} \big[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \big]$; Range: [−30, 30]; Dim: 30; $f_{\min} = 0$
$F_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$; Range: [−100, 100]; Dim: 30; $f_{\min} = 0$
$F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$; Range: [−1.28, 1.28]; Dim: 30; $f_{\min} = 0$
Table 3. Multimodal benchmark function equations.

$F_8(x) = \sum_{i=1}^{n} -x_i \sin\big(\sqrt{|x_i|}\big)$; Range: [−500, 500]; Dim: 30; $f_{\min} = -418.9829 \times D$
$F_9(x) = \sum_{i=1}^{n} \big[ x_i^2 - 10 \cos(2\pi x_i) + 10 \big]$; Range: [−5.12, 5.12]; Dim: 30; $f_{\min} = 0$
$F_{10}(x) = -20 \exp\big(-0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\big) - \exp\big(\tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\big) + 20 + e$; Range: [−32, 32]; Dim: 30; $f_{\min} = 0$
$F_{11}(x) = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\big(\tfrac{x_i}{\sqrt{i}}\big) + 1$; Range: [−600, 600]; Dim: 30; $f_{\min} = 0$
$F_{12}(x) = \tfrac{\pi}{n} \big\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \big[ 1 + 10 \sin^2(\pi y_{i+1}) \big] + (y_n - 1)^2 \big\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, m) = k (x_i - a)^m$ for $x_i > a$; $0$ for $-a \le x_i \le a$; $k (-x_i - a)^m$ for $x_i < -a$; Range: [−50, 50]; Dim: 30; $f_{\min} = 0$
$F_{13}(x) = 0.1 \big\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \big[ 1 + \sin^2(3\pi x_i + 1) \big] + (x_n - 1)^2 \big[ 1 + \sin^2(2\pi x_n) \big] \big\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$; Range: [−50, 50]; Dim: 30; $f_{\min} = 0$
Table 4. Fixed-dimensional benchmark function equations.

$F_{14}(x) = \big( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \big)^{-1}$; Range: [−65, 65]; Dim: 2; $f_{\min} = 1$
$F_{15}(x) = \sum_{i=1}^{11} \big[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \big]^2$; Range: [−5, 5]; Dim: 4; $f_{\min} = 0.00030$
$F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$; Range: [−5, 5]; Dim: 2; $f_{\min} = -1.0316$
$F_{17}(x) = \big( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \big)^2 + 10 \big( 1 - \tfrac{1}{8\pi} \big) \cos x_1 + 10$; Range: [−5, 5]; Dim: 2; $f_{\min} = 0.398$
$F_{18}(x) = \big[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \big] \times \big[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \big]$; Range: [−2, 2]; Dim: 2; $f_{\min} = 3$
$F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\big( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \big)$; Range: [1, 3]; Dim: 3; $f_{\min} = -3.86$
$F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\big( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \big)$; Range: [0, 1]; Dim: 6; $f_{\min} = -3.32$
$F_{21}(x) = -\sum_{i=1}^{5} \big[ (X - a_i)(X - a_i)^T + c_i \big]^{-1}$; Range: [0, 10]; Dim: 4; $f_{\min} = -10.1532$
$F_{22}(x) = -\sum_{i=1}^{7} \big[ (X - a_i)(X - a_i)^T + c_i \big]^{-1}$; Range: [0, 10]; Dim: 4; $f_{\min} = -10.4028$
$F_{23}(x) = -\sum_{i=1}^{10} \big[ (X - a_i)(X - a_i)^T + c_i \big]^{-1}$; Range: [0, 10]; Dim: 4; $f_{\min} = -10.5363$
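As a quick illustration of how the benchmarks in Tables 2–4 are evaluated in practice, here is a small sketch of three of them (F1, F9, and F10) in Python. Vectorizing with NumPy is our choice here, not something prescribed by the paper.

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares (unimodal), f_min = 0 at x = 0."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """F9: Rastrigin (multimodal), f_min = 0 at x = 0."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10_ackley(x):
    """F10: Ackley (multimodal), f_min = 0 at x = 0."""
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

# Evaluate at the known optimum (dim = 30, as in Tables 2 and 3).
x = np.zeros(30)
print(f1_sphere(x), f9_rastrigin(x), f10_ackley(x))  # all ~0 at the optimum
```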
Table 5. Selection of parameters for different algorithms [35,41].

LF-IGWO: a declines linearly from 2 to 0
IGWO: a declines linearly from 2 to 0
GWO: a declines linearly from 2 to 0
PSO: w decreases linearly from 0.9 to 0.2; c1 = c2 = 2
GA: crossover probability = 0.3; mutation probability = 0.1
SSR: Rf = 15 and M = 5
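The parameter a in Table 5 drives GWO's exploration–exploitation balance: the coefficient A = 2a·r1 − a shrinks as a decays, so late iterations favor exploitation around the leaders. The sketch below shows the canonical GWO position update with this linear decay; it is the standard algorithm, not the paper's modified LF-IGWO variant.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, t, max_iter):
    """One canonical GWO position update for all wolves.
    wolves: (pop, dim) array; alpha/beta/delta: (dim,) leader positions."""
    a = 2.0 - 2.0 * t / max_iter            # a declines linearly from 2 to 0
    new_positions = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        estimates = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2.0 * a * r1 - a            # |A| > 1 -> explore, |A| < 1 -> exploit
            C = 2.0 * r2
            D = np.abs(C * leader - x)      # distance to this leader
            estimates.append(leader - A * D)
        new_positions[i] = np.mean(estimates, axis=0)  # average of the three pulls
    return new_positions
```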
Table 6. Comparison of mean values obtained from classical benchmark functions.

Function Number | PSO | GA | GWO | IGWO | SSR | LF-IGWO
F1 | 3.80 × 10^−8 | 2.31 × 10^1 | 4.96 × 10^−14 | 2.67 × 10^−60 | 6.35 × 10^−9 | 7.37 × 10^−61
F2 | 4.80 × 10^−8 | 1.07 × 10^0 | 2.40 × 10^−11 | 2.25 × 10^−37 | 3.52 × 10^−5 | 7.72 × 10^−38
F3 | 1.53 × 10^1 | 5.60 × 10^3 | 1.86 × 10^1 | 1.08 × 10^−10 | 1.82 × 10^−1 | 9.07 × 10^−15
F4 | 6.05 × 10^−1 | 1.58 × 10^−1 | 1.08 × 10^1 | 2.08 × 10^−11 | 3.03 × 10^−5 | 5.56 × 10^−13
F5 | 6.03 × 10^1 | 1.18 × 10^1 | 28.93 | 23.36 | 1.02 × 10^2 | 22.65
F6 | 3.69 × 10^−8 | 1.10 × 10^3 | 4.29 | 6.98 × 10^−6 | 6.29 × 10^−9 | 3.65 × 10^−6
F7 | 7.07 × 10^−2 | 1.01 × 10^−2 | 7.90 × 10^−4 | 1.92 × 10^−3 | 1.01 × 10^−2 | 6.61 × 10^−4
F8 | −6.06 × 10^2 | −2.09 × 10^3 | −6.07 × 10^3 | −5.58 × 10^3 | −6.84 × 10^3 | −1.02 × 10^4
F9 | 4.67 × 10^3 | 6.59 × 10^−12 | 3.30 × 10^1 | 2.54 × 10^1 | 4.77 × 10^1 | 7.96 × 10^0
F10 | 7.33 × 10^−2 | 9.56 × 10^−1 | 1.67 × 10^−8 | 1.51 × 10^−14 | 1.86 × 10^−5 | 7.99 × 10^−15
F11 | 9.28 × 10^−3 | 4.88 × 10^−1 | 6.89 × 10^−13 | 0 | 1.38 × 10^−3 | 0
F12 | 7.26 × 10^−3 | 1.11 × 10^−1 | 6.42 × 10^−1 | 2.67 × 10^−7 | 1.45 × 10^−11 | 1.96 × 10^−7
F13 | 2.53 × 10^−3 | 1.29 × 10^−11 | 2.67 × 10^1 | 9.72 × 10^−2 | 2.24 × 10^−10 | 8.98 × 10^−6
F14 | 3.46 × 10^0 | 1.26 × 10^0 | 7.87 × 10^0 | 9.98 × 10^−1 | 1.16 × 10^0 | 9.98 × 10^−1
F15 | 8.94 × 10^−4 | 4.00 × 10^−3 | 4.54 × 10^−4 | 3.07 × 10^−4 | 1.48 × 10^−4 | 3.07 × 10^−4
F16 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0
F17 | 3.99 × 10^−1 | 4.00 × 10^−1 | 3.98 × 10^−1 | 3.97 × 10^−1 | 3.98 × 10^−1 | 3.097 × 10^−1
F18 | 3.00 × 10^0 | 5.70 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0
F19 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0
F20 | −3.27 × 10^0 | −3.31 × 10^0 | −3.24 × 10^0 | −3.32 × 10^0 | −3.32 × 10^0 | −3.32 × 10^0
F21 | −8.60 × 10^0 | −5.66 × 10^0 | −5.05 × 10^0 | −10.15 × 10^0 | −8.60 × 10^0 | −10.15 × 10^0
F22 | −9.07 × 10^0 | −7.34 × 10^0 | −10.39 × 10^0 | −10.40 × 10^0 | −1.04 × 10^1 | −10.40 × 10^0
F23 | −9.20 × 10^0 | −6.25 × 10^0 | −1.04 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1
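The long-tailed jumps behind LF-IGWO's exploration gains in Table 6 are typically generated with Mantegna's algorithm for Levy-stable step lengths. The sketch below shows that standard recipe (β = 1.5 is a common choice); it should be read as illustrative of the general technique rather than as the paper's exact formulation.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step vector via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed: mostly small, occasionally huge

# Example: perturb a wolf's position with a scaled Levy jump (scale is illustrative).
position = np.zeros(30)
position += 0.01 * levy_step(30)
```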
Table 7. Comparison of mean values obtained from CEC 2017 benchmark functions.

Function Number | PSO | SSR | GWO | IGWO | LF-IGWO
F1 | 1.50 × 10^11 | 1.11 × 10^11 | 3.00 × 10^10 | 1.59 × 10^10 | 5.04 × 10^7
F2 | 2.64 × 10^8 | 1.00 × 10^9 | 2.84 × 10^7 | 3.49 × 10^6 | 4.68 × 10^5
F3 | 4.12 × 10^6 | 1.51 × 10^8 | 5.19 × 10^4 | 8.59 × 10^4 | 2.92 × 10^4
F4 | 4.53 × 10^4 | 2.38 × 10^4 | 5.30 × 10^3 | 1.082 × 10^3 | 491.677
F5 | 1.13 × 10^3 | 1.00 × 10^3 | 778.48 | 787.30 | 564.402
F6 | 751.76 | 699.66 | 660.77 | 646.87 | 601.758
F7 | 3.37 × 10^3 | 1.76 × 10^3 | 1.22 × 10^3 | 1.45 × 10^3 | 805.982
F8 | 1.37 × 10^3 | 1.28 × 10^3 | 1.05 × 10^3 | 1.068 × 10^3 | 849.817
F9 | 4.47 × 10^4 | 2.51 × 10^4 | 5.48 × 10^3 | 6.911 × 10^3 | 1.07 × 10^3
F10 | 1.19 × 10^4 | 1.08 × 10^4 | 6.32 × 10^3 | 7.529 × 10^3 | 3.01 × 10^3
F11 | 5.02 × 10^4 | 1.18 × 10^4 | 4.44 × 10^3 | 2.379 × 10^3 | 1.31 × 10^3
F12 | 3.44 × 10^10 | 1.66 × 10^10 | 7.21 × 10^9 | 3.58 × 10^8 | 4.38 × 10^6
F13 | 3.82 × 10^10 | 2.06 × 10^10 | 2.89 × 10^9 | 6.27 × 10^7 | 2.68 × 10^4
F14 | 1.59 × 10^8 | 2.59 × 10^7 | 1.71 × 10^4 | 5.87 × 10^4 | 3.92 × 10^3
F15 | 8.30 × 10^9 | 4.22 × 10^9 | 1.94 × 10^4 | 1.03 × 10^7 | 1.11 × 10^4
F16 | 9.62 × 10^3 | 1.30 × 10^4 | 3.566 × 10^3 | 3.184 × 10^3 | 2.10 × 10^3
F17 | 1.56 × 10^5 | 7.15 × 10^3 | 2.269 × 10^3 | 2.458 × 10^3 | 1.78 × 10^3
F18 | 9.70 × 10^8 | 2.56 × 10^8 | 7.17 × 10^5 | 1.05 × 10^5 | 5.98 × 10^4
F19 | 7.79 × 10^9 | 1.00 × 10^9 | 1.77 × 10^6 | 2.70 × 10^7 | 2.13 × 10^4
F20 | 4.09 × 10^3 | 4.11 × 10^3 | 2.240 × 10^3 | 2.484 × 10^3 | 2.20 × 10^3
F21 | 2.98 × 10^3 | 2.79 × 10^3 | 2.577 × 10^3 | 2.552 × 10^3 | 2.36 × 10^3
F22 | 1.22 × 10^4 | 1.29 × 10^4 | 6.644 × 10^3 | 4.186 × 10^3 | 2.44 × 10^3
F23 | 4.27 × 10^3 | 4.01 × 10^3 | 3.207 × 10^3 | 2.902 × 10^3 | 2.70 × 10^3
F24 | 4.49 × 10^3 | 4.41 × 10^3 | 3.377 × 10^3 | 3.071 × 10^3 | 2.87 × 10^3
F25 | 2.35 × 10^4 | 6.47 × 10^3 | 4.228 × 10^3 | 3.669 × 10^3 | 2.94 × 10^3
F26 | 2.02 × 10^4 | 1.32 × 10^4 | 9.251 × 10^3 | 5.105 × 10^3 | 3.40 × 10^3
F27 | 4.90 × 10^3 | 5.86 × 10^3 | 3.798 × 10^3 | 3.298 × 10^3 | 3.20 × 10^3
F28 | 1.56 × 10^4 | 1.04 × 10^4 | 5.270 × 10^3 | 3.686 × 10^3 | 3.33 × 10^3
F29 | 6.16 × 10^4 | 5.22 × 10^4 | 5.208 × 10^3 | 4.106 × 10^3 | 3.56 × 10^3
F30 | 8.49 × 10^9 | 3.25 × 10^9 | 2.1 × 10^8 | 1.86 × 10^7 | 4.53 × 10^5
Table 8. Comparison of firing angles of LF-IGWO with other techniques.

Angle | LF-IGWO | IGWO | GWO | PSO
A1 | 0.036788 | −0.25222 | 0.2074 | 0.2675
A2 | 0.84503 | 0.73168 | 0 | −0.6905
A3 | 0.24646 | 0.13577 | 0.6342 | −0.3538
A4 | 0.94454 | −0.87421 | 0.0864 | 0.8315
A5 | 0.17788 | 0.18032 | 0.5053 | −0.3885
A6 | 0.69653 | −0.26365 | 0.0428 | 1.1988
A7 | 0.47166 | 0.33732 | 0.0575 | −0.7247
A8 | 0.29501 | 0.42087 | 0.1335 | −1.0523
A9 | 0.4211 | −0.4421 | 0.74 | 0.8072
A10 | 0.77193 | 0.5118 | 0.3837 | 0.9868
A11 | 1.2306 | −0.086737 | 0.3183 | 0.2315
A12 | 0.56625 | −0.48201 | 0.5527 | −0.4606
A13 | 0.11742 | −0.010185 | 0.8574 | 0.0215
A14 | 0.63186 | 0.071222 | 0.2855 | −0.1755
A15 | 0.36631 | 0.67509 | 0.4286 | −0.3821
Table 9. Comparison of THD values of LF-IGWO with other techniques.

Algorithm | LF-IGWO | IGWO | GWO | PSO
THD (%) | 4.54 | 13.81 | 14.68 | 5.92
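For context on Table 9, the THD of a stepped multilevel waveform can be estimated directly from its switching angles via the Fourier series of a cascaded H-bridge output with equal DC sources, where the h-th odd harmonic magnitude is proportional to (1/h)·Σᵢ cos(h·αᵢ). The sketch below evaluates that textbook expression; the angles are assumed to be in radians, and the harmonic cutoff of 49 is our choice. Since the values in Table 9 come from the simulated waveform's spectrum, this analytic estimate need not match them exactly.

```python
import numpy as np

def thd_from_angles(alphas, max_harmonic=49):
    """THD (%) of a cascaded H-bridge staircase waveform.
    alphas: switching angles in radians (one per H-bridge cell)."""
    alphas = np.asarray(alphas)

    def magnitude(h):
        # h-th odd-harmonic amplitude, up to a common 4*Vdc/pi factor
        return np.sum(np.cos(h * alphas)) / h

    v1 = magnitude(1)
    harmonics = [magnitude(h) for h in range(3, max_harmonic + 1, 2)]
    return 100.0 * np.sqrt(np.sum(np.square(harmonics))) / abs(v1)

# Example with the LF-IGWO angles from Table 8 (assumed to be radians):
angles = [0.036788, 0.84503, 0.24646, 0.94454, 0.17788, 0.69653, 0.47166,
          0.29501, 0.4211, 0.77193, 1.2306, 0.56625, 0.11742, 0.63186, 0.36631]
print(f"THD = {thd_from_angles(angles):.2f}%")
```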
Table 10. Comparison of the design-variable and optimum values of LF-IGWO with other techniques (tension/compression spring design).

Algorithm | d | D | N | Op. Value
LF-IGWO | 0.0527485 | 0.367865 | 10.665 | 0.01267
IGWO | 0.0517029 | 0.356964 | 11.2893 | 0.012681
GWO | 0.05169 | 0.3567 | 11.2888 | 0.012666
PSO | 0.051728 | 0.357644 | 11.24454 | 0.012674
GA | 0.05148 | 0.35166 | 11.6322 | 0.012704
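The variables in Table 10 match the classic tension/compression spring design problem, whose objective is the spring weight f = (N + 2)·D·d². As a hedged sanity check (the problem's constraints are omitted here), the objective alone can be evaluated for a tabulated row:

```python
def spring_weight(d, D, N):
    """Tension/compression spring objective: (N + 2) * D * d^2."""
    return (N + 2.0) * D * d ** 2

# GWO row of Table 10:
print(spring_weight(0.05169, 0.3567, 11.2888))  # ~0.01266, close to the tabulated 0.012666
```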
Table 11. Comparison of cost values obtained from LF-IGWO with other techniques (pressure vessel design).

Algorithm | Ts | Th | R | L | Op. Value
LF-IGWO | 0.8021493 | 0.5113717 | 41.54997 | 183.573 | 6282.2292
GWO | 0.8125 | 0.4345 | 42.089181 | 176.758731 | 6051.5639
GA | 0.8125 | 0.4345 | 40.3239 | 200 | 6288.7445
PSO | 0.883044 | 0.533053 | 45.38829 | 190.0616 | 7865.233
IGWO | 0.9035907 | 0.5319827 | 44.20703 | 154.0763 | 6793.5848
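Similarly, the columns of Table 11 correspond to the well-known pressure vessel design problem (shell thickness Ts, head thickness Th, inner radius R, length L), whose standard cost function is reproduced below; constraints are again omitted, so this is only a sanity check on the tabulated values.

```python
def vessel_cost(ts, th, r, l):
    """Standard pressure vessel cost function."""
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

# GWO row of Table 11:
print(vessel_cost(0.8125, 0.4345, 42.089181, 176.758731))  # ~6051.9, close to the tabulated 6051.5639
```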
Table 13. Statistical data for the functions of Table 2, Table 3 and Table 4 and for the engineering problems, compared with other algorithms.

Function Number | PSO | GA | GWO | IGWO | SSR
F1 | 0.001588 | 0.000156 | 0.003177 | 0.027086 | 0.10499
F2 | 0.004522 | 0.000145 | 0.003177 | 0.18577 | 0.10499
F3 | 0.000145 | 0.000145 | 0.000145 | 1 | 0.10499
F4 | 0.009524 | 0.000145 | 0.000145 | 0.37904 | 0.10499
F5 | 0.000145 | 0.28378 | 0.000145 | 0.077272 | 0.10499
F6 | 0.000145 | 0.000145 | 0.000145 | 0.27071 | 0.10499
F7 | 0.0001268 | 0.10499 | 0.000145 | 0.48252 | 0.10499
F8 | 0.30815 | 0.37904 | 0.31815 | 0.28378 | 0.10499
F9 | 0.62527 | 0.69913 | 0.10221 | 0.98231 | 0.10499
F10 | 0.000294 | 0.000156 | 0.000145 | 0.23985 | 0.10499
F11 | 0.000156 | 0.000156 | 0.000145 | 0.46427 | 0.10499
F12 | 0.351889 | 0.000145 | 0.000145 | 0.063533 | 0.10499
F13 | 0.168452 | 0.000156 | 0.000145 | 0.59969 | 0.10499
F14 | 0.061837 | 0.01857 | 0.071429 | 0.66667 | 0.66667
F15 | 0.62483 | 0.35684 | 0.47619 | 0.88571 | 0.4
F16 | 0.07467 | 0.071429 | 0.071429 | 0.66667 | 0.66667
F17 | 0.071498 | 0.071429 | 0.071429 | 0.66667 | 0.66667
F18 | 0.072795 | 0.66667 | 0.071429 | 1 | 0.66667
F19 | 0.069426 | 0.061584 | 0.071429 | 0.7 | 0.5
F20 | 0.49509 | 0.51455 | 0.48485 | 0.69913 | 0.28571
F21 | 0.0064732 | 0.0008616 | 0.009524 | 0.68571 | 0.4
F22 | 0.018654 | 0.02666 | 0.009524 | 0.88571 | 0.4
F23 | 0.72364 | 0.5426 | 0.009524 | 1 | 0.4
Tension/Compression Spring | 0.7569 | 0.6 | 0.71429 | 1 | 0.5
Welded Beam | 0.09852 | 0.854554 | 0.47619 | 0.68571 | 1
Pressure Vessel | 0.09852 | 0.854554 | 0.25714 | 0.68571 | 1
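Table 13 does not name the test behind these values, but p-values of this kind are commonly obtained with a nonparametric Wilcoxon rank-sum comparison of the per-run results of two algorithms. The SciPy-based sketch below shows that generic procedure with made-up sample data; it is illustrative only and not the paper's stated methodology.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical per-run best fitness values for two algorithms (30 runs each).
lf_igwo_runs = rng.normal(loc=1e-6, scale=5e-7, size=30)
gwo_runs = rng.normal(loc=1e-4, scale=5e-5, size=30)

stat, p_value = ranksums(lf_igwo_runs, gwo_runs)
print(f"rank-sum statistic = {stat:.3f}, p = {p_value:.3g}")
# p < 0.05 would indicate a statistically significant difference between the runs.
```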