Article

An Improved Flow Direction Algorithm for Engineering Optimization Problems

Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin 150080, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2217; https://doi.org/10.3390/math11092217
Submission received: 8 March 2023 / Revised: 21 April 2023 / Accepted: 26 April 2023 / Published: 8 May 2023

Abstract

The Flow Direction Algorithm (FDA) offers better searching performance than several traditional optimization algorithms. To give the basic FDA a more effective search, help it avoid the many local minima in the search space, and enable it to obtain better results, this paper proposes an improved FDA based on the Lévy flight strategy and the self-renewable method (LSRFDA). The Lévy flight strategy and the self-renewable method are added to the basic FDA. Random parameters generated by the Lévy flight strategy increase the diversity of feasible solutions within a short calculation time and greatly enhance the operational efficiency of the algorithm, while the self-renewable method lets the algorithm quickly obtain a better feasible solution and jump out of local solution regions. The LSRFDA was then tested on different mathematical benchmark functions, including low-dimensional and high-dimensional functions, and the results were compared with those of other algorithms. Iteration curves, box plots, and search paths are included to show the different performances of the LSRFDA. Finally, different engineering optimization problems were solved. The test results show that the proposed algorithm has better searching ability and a faster searching speed than the basic FDA.

1. Introduction

Optimization refers to adjusting the design of existing schemes and parameters under given conditions so that a problem obtains a more satisfactory answer. Optimization problems are frequently encountered in scientific research and production practice [1]. Scholars have long studied and extended optimization problems, making optimization a critical discipline. In the 17th century, Newton and Leibniz founded calculus, which can be regarded as a milestone in optimization theory. The Lagrange multiplier method, the steepest descent method, linear programming, and the simplex method followed, and these mathematical theories have been widely used in engineering, economic, and social systems. However, these methods are aimed at particular problems and place specific requirements on the search space; the objective function usually needs to be convex and continuously differentiable. With the development of science and technology, a large number of optimization problems need to be solved in practical applications, such as distribution center location, factory layout optimization, the optimal allocation of equipment resources, and product scheduling [2,3,4].
These practical problems are generally large-scale, nonlinear, and multimodal, and many industrial optimization problems must be solved under complex variable conditions and over large search ranges. Traditional optimization methods often cannot model such problems mathematically, so exploring intelligent optimization methods for large-scale computing has become an important research direction in related disciplines. Industry and mathematics therefore need optimization strategies with strong computing power and high precision to solve complex problems that are nonlinear, multivariable, multi-constraint, and multi-dimensional. As an important branch of artificial intelligence, the intelligent optimization algorithm is a new evolutionary computing technology that has attracted more and more attention from scholars. The main idea of an intelligent optimization algorithm is to construct a search procedure according to natural characteristics; to this end, metaheuristic optimization algorithms have been applied to search for feasible solutions to linear and nonlinear function problems [5,6,7]. In recent years, intelligent optimization algorithms have been widely applied in different fields [8,9,10]. Many optimization algorithms have been designed to solve complex optimization problems, such as Poor and Rich Optimization (PRO) [11], Beluga Whale Optimization (BWO) [12], Monarch Butterfly Optimization (MBO) [13], Student Psychology-Based Optimization (SPBO) [14], Jellyfish Search (JS) [15], etc. [16,17,18,19,20].
These algorithms can obtain acceptable solutions within a limited computing time and accumulate empirical information during the search. Because they rely on optimization rules rather than analytical properties, they place few demands on the mathematical form of the problem, and their searching ability can significantly reduce the search time for large-scale problems. The Flow Direction Algorithm (FDA), proposed by Hojat Karami in 2021, is a physics-based algorithm inspired by the movement of water toward the outlet of a catchment area: the FDA mimics the flow direction toward the outlet point with the lowest height in a drainage basin, and each flow moves toward the neighbor with the best objective function [21,22]. To further improve the searching ability and iteration speed of the basic FDA and its ability to escape local optima, this paper proposes an improved FDA based on the Lévy flight strategy and the self-renewable method (LSRFDA). New coefficient factors based on the Lévy flight strategy and the self-renewable method are added to the basic FDA; these two mechanisms enable the LSRFDA to find better feasible solutions at a faster speed and effectively improve the solution accuracy and convergence performance for complex functions in spaces of different dimensions. Secondly, we applied the LSRFDA to engineering optimization problems and compared it with other algorithms to show that the model proposed in this paper has good robustness and generalization ability. The LSRFDA focuses on remedying some disadvantages of the basic FDA in the searching process and on solving engineering optimization problems, and thus could provide new methods for solving such problems. For the function experiments, different functions were applied to test the proposed algorithm; iteration curves, box plots, and search path figures were created, and the Wilcoxon rank sum test was conducted. The rest of this paper is organized as follows: Section 2 introduces the basic FDA. Section 3 presents the proposed LSRFDA. Section 4 reports the function experimental parameters and environments, together with the analyses of the numerical results, the algorithm sub-sequence results, the Wilcoxon rank sum test, the iteration curves, the box plots, and the search paths. Section 5 presents the engineering optimization problems. Section 6 gives the discussion, and Section 7 gives the conclusion.

2. Flow Direction Algorithm

Concerning the FDA, the position of the initial flow is determined based on the following relationship:
$$Flow\_X(i) = lb + rand \times (ub - lb) \tag{1}$$
where Flow_X(i) means the position of the i-th flow, lb and ub mean the lower and upper limits of the decision variables, and the parameter rand is in the range of [0, 1]. There are neighborhoods around each flow, and the neighbor flow position is created based on the following relationship:
$$Neighbor\_X(j) = Flow\_X(i) + randn \times \Delta \tag{2}$$
where Neighbor_X(j) represents the j-th neighbor position, and randn is a random value drawn from the standard normal distribution N(0, 1).
$$\Delta = \left( rand \times Xrand - rand \times Flow\_X(i) \right) \times \left\| Best\_X - Flow\_X(i) \right\| \times W \tag{3}$$
where the parameter rand is in the range of [0, 1], Best_X is the global optimal solution, and Xrand is a random position that is created based on the following relationship:
$$r_d = \sum_{m=1}^{M} \left( R_m - f \right) \Delta t \tag{4}$$
where the parameters rd, Rm, Δt, and M mean the amount of direct runoff, rainfall, time interval, and the number of time steps, respectively.
$$W = \left( 1 - \frac{iter}{Max\_Iter} \right)^{2 \times randn} \times \left( \overline{rand} \times \frac{iter}{Max\_Iter} \right) \times \overline{rand} \tag{5}$$
where $\overline{rand}$ is a random vector with uniform distribution, iter is the current iteration, and Max_Iter is the maximum number of iterations.
The following relationship is applied to complete the new position of the flow:
$$Flow\_newX(i) = Flow\_X(i) + V \times \frac{Flow\_X(i) - Neighbor\_X(j)}{\left\| Flow\_X(i) - Neighbor\_X(j) \right\|} \tag{6}$$
where Flow_newX(i) shows the i-th new flow position.
$$V = randn \times S_0 \tag{7}$$
$$S_0(i,j,d) = \frac{Flow\_fitness(i) - Neighbor\_fitness(j)}{\left\| Flow\_X(i,d) - Neighbor\_X(j,d) \right\|} \tag{8}$$
where Flow_fitness(i) and Neighbor_fitness(j) represent the fitness values of the i-th flow and the j-th neighbor, respectively, and the parameter d represents the problem dimension. Then, a random integer number r is generated.
The fitness of the r-th flow is then compared with the fitness of the current flow, and the flow direction is mimicked under these two conditions as follows:
$$Flow\_newX(i) = \begin{cases} randn \times Best\_X, & \text{if } Flow\_fitness(r) < Flow\_fitness(i) \\ Flow\_X(i) + randn \times \left( Best\_X - Flow\_X(i) \right), & \text{otherwise} \end{cases} \tag{9}$$
The FDA’s iterative process is as follows:
Step 1. In the initialization phase, the algorithm randomly generates an initial population of flows, defines the objective function and its solution space, and finds the best objective function and the best solution. Set the maximum iterations Max_Iter. Set iter = 1.
Step 2. Create the neighbor flow position in Formula (2) and Δ in Formula (3) for each individual of the population or flows.
Step 3. Calculate each objective function value. Record the best neighbor solution value and the best global optimal solution value in the current generation. If the best neighbor has a better objective function value than the current flow, proceed to Step 4; otherwise, go to Step 5.
Step 4. Find the new flow position according to Formula (6).
Step 5. Update the flow position according to Formula (9).
Step 6. Find the best objective function value and the best solution in the current generation. Replace the global solution and the global value if there is a better solution.
Step 7. Calculate iter = iter + 1. If iter has not reached Max_Iter, return to Step 2 and continue; otherwise, stop the iteration.
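To make the update rules above concrete, the following minimal Python sketch strings Formulas (1)–(9) together for one iteration on a minimization problem. It is an illustrative reading of the equations, not the authors' reference implementation; the function name fda_iteration, the n_neighbors parameter, and the small 1e-12 safeguard against division by zero are assumptions added here.

```python
import numpy as np

def fda_iteration(f, flows, best_x, it, max_iter, lb, ub, n_neighbors=3):
    """One FDA iteration following Formulas (1)-(9) above (minimization)."""
    n, dim = flows.shape
    fitness = np.array([f(x) for x in flows])
    for i in range(n):
        # Formula (5): iteration-dependent weight W (assumes it < max_iter)
        W = ((1.0 - it / max_iter) ** (2.0 * np.random.randn())
             * (np.random.rand(dim) * it / max_iter) * np.random.rand(dim))
        # Formula (3): neighbourhood radius Delta built from a random position Xrand
        x_rand = lb + np.random.rand(dim) * (ub - lb)
        delta = (np.random.rand(dim) * x_rand - np.random.rand(dim) * flows[i]) \
            * np.linalg.norm(best_x - flows[i]) * W
        # Formula (2): generate neighbours around flow i and evaluate them
        neighbors = flows[i] + np.random.randn(n_neighbors, dim) * delta
        nfit = np.array([f(x) for x in neighbors])
        j = int(nfit.argmin())
        if nfit[j] < fitness[i]:
            # Formulas (6)-(8): slope-driven move towards the best neighbour
            dist = np.linalg.norm(flows[i] - neighbors[j]) + 1e-12
            v = np.random.randn() * (fitness[i] - nfit[j]) / dist      # V = randn * S0
            flows[i] = flows[i] + v * (flows[i] - neighbors[j]) / dist
        else:
            # Formula (9): compare with a randomly chosen flow r
            r = np.random.randint(n)
            if fitness[r] < fitness[i]:
                flows[i] = np.random.randn() * best_x
            else:
                flows[i] = flows[i] + np.random.randn() * (best_x - flows[i])
        if f(flows[i]) < f(best_x):
            best_x = flows[i].copy()
    return flows, best_x
```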

3. The Proposed Algorithm

To obtain better search results with the FDA, this paper introduces an improved FDA based on the Lévy flight strategy and the self-renewable method (LSRFDA); both mechanisms are added to the original algorithm. In the proposed algorithm, the iteration is driven by mutual cooperation and a global information-sharing mechanism: global searching enables the algorithm to quickly locate the region of the optimal solution, while local searching enables the algorithm to thoroughly exploit the information around the optimal solution to find a better one. The Lévy flight strategy is based on the non-Gaussian random walk introduced by the French mathematician Paul Lévy in 1925. Lévy flight defines a scale-invariant walk model in which long strides are interleaved with many short strides, since the Lévy distribution is a non-Gaussian random process [23,24]. The probability density of the Lévy flight step can be written as follows:
$$L(s_L) \sim s_L^{-\beta} \tag{10}$$
where sL is the Lévy flight step length and β is the power law index, with 1 < β ≤ 3. The Lévy flight step is determined according to a random probability, producing a dynamic motion process. The step length can be calculated as follows:
$$s_L = \frac{u}{|v|^{1/\beta}} \tag{11}$$
where u~N (0, σ u 2 ), v~N (0, σ v 2 ), and σv = 1.
$$\sigma_u = \left[ \frac{\Gamma(1+\beta)\,\sin\left( \pi \beta / 2 \right)}{\Gamma\!\left( \frac{1+\beta}{2} \right) \beta\, 2^{(\beta-1)/2}} \right]^{1/\beta} \tag{12}$$
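A short Python sketch of the Mantegna procedure in Formulas (10)–(12) is given below; the helper name levy_step and the diagnostic printed at the end are illustrative additions, and β = 1.5 matches the value used later in Algorithm 1.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw a Levy-flight step s_L with Mantegna's method, Formulas (10)-(12)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size=dim)   # u ~ N(0, sigma_u^2)
    v = np.random.normal(0.0, 1.0, size=dim)       # v ~ N(0, 1), i.e. sigma_v = 1
    return u / np.abs(v) ** (1 / beta)             # s_L = u / |v|^(1/beta)

# Many short steps interrupted by occasional long jumps, the signature of Levy flight
steps = np.array([levy_step(2, beta=1.5) for _ in range(1000)])
print(np.abs(steps).mean(), np.abs(steps).max())
```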
In this paper, the Lévy flight strategy is the most important component of the methodology, so Figure 1 shows a two-dimensional and a three-dimensional Lévy flight path.
Thus, the Δ direction based on the Lévy flight strategy in this paper can be updated as follows:
$$\Delta_{new} = \left( rand \times Xrand - rand \times Flow\_X(i) \right) \times \left\| Best\_X - Flow\_X(i) \right\| \times s_L \tag{13}$$
The new velocity term based on the Lévy flight strategy is given by Formula (14):
$$V_{new} = s_L \times S_0 \tag{14}$$
$$Flow\_newX(i) = Flow\_X(i) + V_{new} \times \frac{Flow\_X(i) - Neighbor\_X(j)}{\left\| Flow\_X(i) - Neighbor\_X(j) \right\|} \tag{15}$$
The self-renewable method can enable the algorithm to quickly obtain a better feasible solution, and can conduct a more detailed search for a feasible solution. The relationship that illustrates how to simulate the flow direction can be updated as follows:
$$Flow\_newX(i) = \begin{cases} s_L \times Best\_X, & \text{if } Flow\_fitness(r) < Flow\_fitness(i) \\ Flow\_X(i) + s_L \times \left( Best\_X - Flow\_X(i) \right), & \text{otherwise} \end{cases} \tag{16}$$
The LSRFDA steps are summarized in the pseudo-code shown in Algorithm 1.
Algorithm 1: LSRFDA
1: Input: Function f(.). Searching range. Max_Iter. Set iter = 1. Flows N, Neighbors M.
2: Initial optimum solution Best_X. lb, ub, β = 1.5.
3: Output: Best_X.
4: While (iter < Max_Iter)
5: For 1 i = 1:N
6: For 2 j = 1:M
7: sL = u/(|v|^(1/β))
8: Flow_X(j) = lb + rand × (ub − lb)
9: Δnew = (rand × Xrand − rand × Flow_X(j)) × ||Best_X − Flow_X(j)|| × sL
10: Determine the best neighbor.
11: End For 2
12: If 1 the best neighbor has a better objective function than that of the current flow
13: Vnew = sL × S0
14: Else
15: Generate random integer number r
16: If 2 Flow_fitness(r) < Flow_fitness(i)
17: Flow_newX(i) = sL × Best_X
18: Else
19: Flow_newX(i) = Flow_X(i) + sL × (Best_X − Flow_X(i))
20: End If 2
21: End If 1
22: Update Best_X if there is a better solution
23: End For 1
24:    iter = iter + 1
25: End While
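The following Python sketch is one possible reading of Algorithm 1 for a minimization problem. The bound handling with np.clip, the reuse of the Lévy step drawn once per flow, and the small 1e-12 safeguard are assumptions made here to obtain a runnable example, not details taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def lsrfda(f, dim, lb, ub, n_flows=50, n_neighbors=3, max_iter=200, beta=1.5):
    """Minimal sketch of the LSRFDA (Algorithm 1) for a minimization problem f."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def levy():
        # Mantegna step s_L, Formulas (10)-(12)
        u = np.random.normal(0.0, sigma_u, dim)
        v = np.random.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    flows = lb + np.random.rand(n_flows, dim) * (ub - lb)            # Formula (1)
    fitness = np.array([f(x) for x in flows])
    best_idx = int(fitness.argmin())
    best_x, best_val = flows[best_idx].copy(), float(fitness[best_idx])

    for _ in range(max_iter):
        for i in range(n_flows):
            s_l = levy()
            # Formula (13): Levy-scaled neighbourhood radius around flow i
            x_rand = lb + np.random.rand(dim) * (ub - lb)
            delta = (np.random.rand(dim) * x_rand - np.random.rand(dim) * flows[i]) \
                * np.linalg.norm(best_x - flows[i]) * s_l
            neighbors = np.clip(flows[i] + np.random.randn(n_neighbors, dim) * delta, lb, ub)
            nfit = np.array([f(x) for x in neighbors])
            j = int(nfit.argmin())
            if nfit[j] < fitness[i]:
                # Formulas (14)-(15): move towards the best neighbour with V_new = s_L * S0
                dist = np.linalg.norm(flows[i] - neighbors[j]) + 1e-12
                v_new = s_l * (fitness[i] - nfit[j]) / dist
                new_x = flows[i] + v_new * (flows[i] - neighbors[j]) / dist
            else:
                # Formula (16): self-renewable update guided by a random flow r
                r = np.random.randint(n_flows)
                if fitness[r] < fitness[i]:
                    new_x = levy() * best_x
                else:
                    new_x = flows[i] + levy() * (best_x - flows[i])
            flows[i] = np.clip(new_x, lb, ub)
            fitness[i] = f(flows[i])
            if fitness[i] < best_val:
                best_x, best_val = flows[i].copy(), float(fitness[i])
    return best_x, best_val

# Usage on the Sphere function f13 from Table 1 (D = 30, range [-10, 10])
best, val = lsrfda(lambda x: float(np.sum(x ** 2)), dim=30, lb=-10.0, ub=10.0)
print(val)
```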
The LSRFDA iterative process can be presented as follows:
Step 1. In the initialization phase, the algorithm randomly generates an initial population of flows, defines the objective function and its solution space, and finds the best objective function and best solution. Set the maximum number of iterations Max_Iter. Set iter = 1.
Step 2. Create the neighbor flow position, and the new Δ can be updated using Formula (13).
Step 3. Calculate each function value. Record the best neighbor solution value and the best global optimal solution value. If the best neighbor has a better objective function value than the current flow, proceed to Step 4; otherwise, go to Step 5.
Step 4. Calculate Formula (14) and find the new flow position according to Formula (15).
Step 5. Update the flow position according to Formula (16).
Step 6. Find the best objective function value and the best solution in the current generation. Replace the global solution and the global function value if there is a better solution.
Step 7. Calculate iter = iter + 1. If iter has not reached Max_Iter, return to Step 2 and continue; otherwise, stop the iteration.
The main LSRFDA flow chart is shown in Figure 2.

4. Function Experiments

4.1. Testing Environments

To verify the searching ability of the proposed algorithm for solving complex functions with different dimensions, this paper carried out mathematical benchmark function experiments. Table 1 shows the low-dimensional and variable-dimensional benchmark functions, where f1 to f10 are low-dimensional functions and f11 to f16 are variable-dimensional functions.
In Table 1, D represents the dimension, fmin is the ideal optimal value, and Range is the searching scope. The original FDA literature already compared several intelligent algorithms, so this paper selects other algorithms for the comparative experiments to avoid repeated and unnecessary tests. The compared algorithms are Moth–Flame Optimization (MFO) [25], the Multi-Verse Optimizer (MVO) [26], Simulated Annealing (SA) [27], and the basic FDA. MFO, proposed in 2015 by Seyedali Mirjalili, is based on the moth navigation method in nature called transverse orientation. MVO is inspired by three concepts: the white hole, the black hole, and the wormhole; in MVO, r2, r3, and r4 are in the range of [0, 1]. SA is a probabilistic algorithm derived from the solid annealing principle with two initial parameters, the initial temperature t0 and the attenuation factor k; in this paper, t0 is set to 100 and k is set to 0.95. The iterative processes and details of the compared algorithms can be found in the original literature, and all of their initial parameters were selected accordingly. The population size of all the algorithms was set to 50, and the maximum number of iterations was set to 200. To obtain fair results and reduce the influence of randomness, every algorithm was run independently 10 times. All the programs, data, and figures were produced in MATLAB (R2014b). The key parameters of all the algorithms used in this comparative study are shown in Table 2, where Max_Iter is the maximum iteration number and N is the population size.
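A minimal sketch of this experimental protocol (population size 50, 200 iterations, 10 independent runs per algorithm and function) is shown below; the benchmark helper and the way algorithms are passed in as callables are illustrative assumptions, not the authors' test scripts.

```python
import numpy as np

def benchmark(algorithms, functions, runs=10):
    """Run every algorithm on every function `runs` times and collect Min/Max/Ave/Std.
    `algorithms` maps a name to a callable that performs one full run (population 50,
    200 iterations) and returns the best objective value found."""
    results = {}
    for fname, f in functions.items():
        for aname, solve in algorithms.items():
            vals = np.array([solve(f) for _ in range(runs)])
            results[(fname, aname)] = {"Min": vals.min(), "Max": vals.max(),
                                       "Ave": vals.mean(), "Std": vals.std()}
    return results

# Example wiring with the LSRFDA sketch from Section 3 and the Sphere function f13:
# algorithms = {"LSRFDA": lambda f: lsrfda(f, dim=30, lb=-10.0, ub=10.0)[1]}
# functions = {"f13": lambda x: float(np.sum(np.asarray(x) ** 2))}
# print(benchmark(algorithms, functions))
```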

4.2. Numerical Calculation Results Analysis

To verify the effectiveness of the improved algorithm, this section uses the following indicators to evaluate the competitiveness of the different algorithms: the best value found (Min), the worst value found (Max), and the average value (Ave). Table 3 shows the two-dimensional function results, and Table 4 shows the high-dimensional function results (D = 30/60/150). It can be seen from Table 3 and Table 4 that most of the optimal values calculated by the proposed algorithm are very close to the ideal optimal values in Table 1. For the Min indicator, the LSRFDA obtains the best value among all the comparison algorithms on every benchmark function. For the Max and Ave indicators, the LSRFDA obtains the best values on all the benchmark functions except f10, where the FDA has the best Max and Ave values. Thus, except for f10, the LSRFDA is significantly superior to the other four comparison algorithms; for f10, the LSRFDA's maximum and average values are slightly worse than those of the FDA but still better than those of MFO, MVO, and SA. Although the searching accuracy of the proposed algorithm decreases as the test function dimension increases, its optimization efficiency and calculation power remain better than those of the other comparison algorithms. The results show that the LSRFDA can not only reach the target optimum but also has strong search ability. The optimization accuracy of the original FDA is low, and its population quickly falls into local optimal regions of the search space; although the basic FDA obtains good results on some benchmark functions, its solution accuracy drops significantly as the function dimension increases. The numerical results on benchmark functions of different dimensions show that the proposed algorithm can efficiently find the optimal value in a multi-dimensional search space, with high search ability, strong detection accuracy, and a fast iteration speed.

4.3. Sub-Sequence Run Results Analysis

We conducted a basic statistical assessment of the results obtained in the sub-sequence runs of the algorithms. Radar charts of the 10 sub-sequence runs are shown in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. A radar chart is a graphical method for displaying multivariable data as a two-dimensional chart of axes starting from the same point; the relative positions and angles of the axes are usually uniform. A radar chart is also called a network map, a spider map, a star map, a polar coordinate map, or a Kiviat map. The method draws the corresponding function value ratio lines radially from the center of a circle in different regions and then connects them, forming an irregular closed-loop graph. In this paper, the radar charts clearly show the running results of the algorithm sub-sequences and the differences between them; the wider the edge of the radar chart, the lower the accuracy of the algorithm run. For the two-dimensional functions, the SA sub-sequences have large radar charts, except for f3, f6, and f11(D=2). For the high-dimensional functions (D = 30/60/150), the MFO and SA sub-sequences have large radar charts. The LSRFDA sub-sequences have the smallest radar charts for all the functions. The radar charts show that the LSRFDA improves the global searching and local exploration capabilities of the basic FDA, avoids getting stuck in local optima, and can jump out of local optimal regions, which improves the performance and solution accuracy of the basic FDA.

4.4. Wilcoxon Rank Sum Test Results Analysis

For data with an arbitrary distribution, sign-test methods are often used to verify whether there is a significant difference in the distribution positions of paired experimental data. However, the sign test only considers the signs of the differences, not their magnitudes, which can result in a partial loss of experimental information and inaccurate results. To avoid this flaw, this paper uses the Wilcoxon rank sum test, which considers both the direction and the magnitude of the differences and is therefore more effective than the sign test; it can also be used to test whether there are differences in the distribution positions of two groups of experimental data. The Wilcoxon rank sum test is based on the rank sum of the sample data. First, the two samples are pooled into a single sample, and the observations are ranked from smallest to largest. If the two independent samples come from the same population, the ranks will be distributed approximately evenly between the two samples; if they come from different populations, one sample will tend to have smaller ranks and the other larger ranks, producing a large rank sum. The Wilcoxon rank sum test yields p values; if the p value is less than 0.05, there is a significant difference at the 0.05 level. To further compare the proposed algorithm with the other algorithms, the Wilcoxon rank sum test was applied, and all the p values are given in Table 5, where NO means that the calculation result is not a number. For the FDA, the p values of f3, f5, f10, f11(D=2), and f16(D=2) are larger than 0.05; the p values of the other algorithms are all less than 0.05. The test results show that the LSRFDA performs significantly better than the other comparison algorithms on most of the test problems, that the fluctuation range of its optimal values is very small, and that its stability is strong. The proposed algorithm therefore has good optimization performance for various typical functions, with broad adaptability and strong robustness.
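In practice the test can be computed directly from the 10 best values recorded per algorithm, for example with SciPy's ranksums; the numbers below are placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

# Ten best values per algorithm on one benchmark (placeholder numbers, not the paper's data)
lsrfda_runs = np.array([1.2e-8, 3.4e-9, 8.1e-9, 2.2e-8, 5.5e-9,
                        7.3e-9, 1.9e-8, 4.4e-9, 6.0e-9, 9.8e-9])
fda_runs = np.array([3.1e-4, 2.2e-4, 5.6e-4, 1.8e-4, 4.9e-4,
                     2.7e-4, 3.3e-4, 6.1e-4, 2.0e-4, 4.2e-4])

stat, p_value = ranksums(lsrfda_runs, fda_runs)
print(f"p = {p_value:.4g}")   # p < 0.05 indicates a significant difference at the 0.05 level
```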

4.5. Iteration Results Analysis

Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the iterative results of the different algorithms on benchmark functions of different dimensions. For all the functions, the iteration speed of the proposed algorithm is significantly faster than that of the basic algorithm. The LSRFDA can search on both sides of the optimal value through its initially large search steps and can skip over a certain range of obstacles in the process. The convergence speed of the basic FDA is fast in the initial stage of iteration, but its population falls into local optimal regions and cannot jump out as the iterations proceed. The LSRFDA significantly improves the population diversity, so it can quickly locate the region of the global optimum and jump out of local optimal areas. It can be seen that the improved algorithm always approaches the optimal value quickly during the search and then skillfully avoids local optimal regions in the later optimization process. In the LSRFDA, the Lévy flight mechanism forces the search path to change continuously on the test functions; the algorithm also uses the disturbance weight mechanism to find the global optimum of the solution space with greater probability and applies a multidirectional cross-search strategy to make the population drift randomly.

4.6. Box Plot Results Analysis

A box plot is a statistical figure used to show the dispersion of data. It is mainly applied to display the distribution characteristics of the original data and to compare distributions. In the plot, the box summarizes the highest value, the lowest value, the median, the upper and lower quartiles, and the outliers. Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17 show box plots of the different algorithms after 10 independent runs. For most benchmark functions, the LSRFDA has the narrowest box, the fewest outliers, the lowest median, and the closest upper and lower quartiles. If a box plot collapses to a straight line, the algorithm reached the theoretical optimal value in all 10 independent runs. These results indicate that the detection step size of the individuals in the population affects the final solution accuracy of the algorithm: in an unknown environment, the population needs a large detection step in the early stage to expand the initial search range, and a small detection step in the late iteration stage for a more accurate and detailed local search. In Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17, differently colored boxes represent different algorithms, and red dots represent discrete outlier data.
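A box plot of the 10-run results can be produced in a few lines; the values below are placeholders used only to illustrate the plotting step, not the paper's data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Ten-run results per algorithm (placeholder values, one box per algorithm)
results = {
    "LSRFDA": rng.uniform(1e-7, 1e-6, 10),
    "FDA": rng.uniform(1e-4, 1e-3, 10),
    "MFO": rng.uniform(1e-2, 1e-1, 10),
}
plt.boxplot(list(results.values()))
plt.xticks(range(1, len(results) + 1), list(results.keys()))
plt.yscale("log")                       # spreads of several orders of magnitude stay visible
plt.ylabel("Best value after 200 iterations")
plt.show()
```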

4.7. Search Path Results Analysis

To test the searching speed, efficiency, and accuracy of the LSRFDA, its search path and that of the original FDA are compared. Figure 18 shows three-dimensional graphs of the benchmark functions, and Figure 19 compares the search paths of the LSRFDA and the FDA on the two-dimensional benchmark functions; each comparison figure shows the search path projected onto the two-dimensional plane together with a contour map. The red solid line is the LSRFDA search path, the green dashed line is the FDA search path, and the blue dot marks the theoretical optimal position. As can be seen from Figure 19, the FDA search path is longer than the LSRFDA search path in most cases; the LSRFDA search path is longer than that of the FDA for f16 and similar to that of the FDA for f3, f11, and f15. The search path analysis reveals a significant difference in searching effectiveness: the LSRFDA has clear path advantages, and its planned path length is significantly reduced compared with the basic FDA. With the LSRFDA, the initial search uses a smaller field of view and fragmented step sizes to find paths, and the paths can be refined and adjusted to improve their smoothness, which helps to reduce the search path length. This is due to the highly random jumping characteristic of the LSRFDA, which makes it easy to jump from one region to another. The figures also show that the LSRFDA search path does not easily fall into local optimal regions during the search and that its solution accuracy is high, indicating that the LSRFDA can guide and restrict its behavior through self-growth strategies to achieve the expected target effect. The search path experiment shows that the proposed algorithm has better solution accuracy and strong searching stability, and its solution quality is higher than that of the basic FDA; it can find the optimal solution of the test function with shorter search paths. In addition, the proposed algorithm has a high global searching ability at the beginning of the search while maintaining high solution accuracy and iteration speed.

5. Engineering Optimization Problems

5.1. The Three-Bar Truss Problem

The aim of the three-bar truss problem is to minimize the weight of the truss under constraints on stress, bending, and buckling. The problem has two decision variables, the cross-sectional areas of the bars. Figure 20 shows the structure of the truss and the loads applied to it; the arrows represent the directions of the applied forces. In this figure, x1 = x3.
The three-bar truss problem can be formulated as follows:
$$\begin{aligned}
\text{Minimize } & f(x) = \left( 2\sqrt{2}\,x_1 + x_2 \right) \times l \\
\text{Subject to: } & g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0 \\
& g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0 \\
& g_3(x) = \frac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0 \\
& l = 100\ \text{cm},\quad P = 2\ \text{kN/cm}^2,\quad \sigma = 2\ \text{kN/cm}^2
\end{aligned}$$
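A small Python sketch of this formulation is shown below, together with a simple static-penalty wrapper so that an unconstrained optimizer such as the LSRFDA sketch above could be applied; the penalty weight rho and the helper names are illustrative assumptions.

```python
import numpy as np

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    """Objective (weight) and constraints of the three-bar truss formulation above.
    Feasible when every g <= 0."""
    x1, x2 = x
    weight = (2.0 * np.sqrt(2.0) * x1 + x2) * l
    g1 = (np.sqrt(2.0) * x1 + x2) / (np.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2) * P - sigma
    g2 = x2 / (np.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2) * P - sigma
    g3 = 1.0 / (np.sqrt(2.0) * x2 + x1) * P - sigma
    return weight, [g1, g2, g3]

def penalized(x, rho=1e6):
    # Static penalty so an unconstrained search method can be applied directly
    w, g = three_bar_truss(x)
    return w + rho * sum(max(0.0, gi) ** 2 for gi in g)

# Evaluated near the commonly reported optimum (x1 ~ 0.7887, x2 ~ 0.4081, f ~ 263.9)
print(penalized(np.array([0.7887, 0.4081])))
```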
To solve the three-bar truss problem, the population size, number of neighbors, and number of iterations selected via sensitivity analysis in the basic FDA literature are 25, 3, and 200, respectively; in this paper, all the algorithm parameters were selected based on the basic FDA literature. The LSRFDA search results were compared with those of different algorithms over 10 random runs [28,29,30,31,32,33]. The highest and lowest values, the means, and the standard deviations are given in Table 6. The LSRFDA search result is better than that of the basic FDA, and compared with the other algorithms, the LSRFDA can find the same optimal solution.

5.2. The Tensile/Compression Spring Problem

The tensile/compression spring problem aims to find the minimum spring weight under constraints on pressure, surge frequency, and deflection. The three decision variables are the wire diameter (d), the mean spring coil diameter (D), and the number of active spring coils (P), denoted by x1, x2, and x3, respectively. Figure 21 shows the structure of the tensile/compression spring; the arrows represent the stretching direction.
The tensile/compression spring problem can be formulated as follows:
$$\begin{aligned}
\text{Minimize } & f(x) = x_2 x_1^2 \left( x_3 + 2 \right) \\
\text{Subject to: } & g_1(x) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0 \\
& g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566\left( x_1^3 x_2 - x_1^4 \right)} + \frac{1}{5108\,x_1^2} - 1 \le 0 \\
& g_3(x) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0 \\
& g_4(x) = \frac{x_2 + x_1}{1.5} - 1 \le 0 \\
& 0.05 \le x_1 \le 2.00,\quad 0.25 \le x_2 \le 1.30,\quad 2.00 \le x_3 \le 15.00
\end{aligned}$$
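The formulation translates directly into code; the sketch below returns the weight and the four constraints and evaluates a commonly cited, approximately optimal design for reference (the specific numbers are rounded values from the constrained-optimization literature, not results from this paper).

```python
def spring(x):
    """Weight and constraints of the tensile/compression spring formulation above.
    x = (x1, x2, x3) = (d, D, P); feasible when every g <= 0."""
    x1, x2, x3 = x
    weight = x2 * x1 ** 2 * (x3 + 2.0)
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x1 ** 3 * x2 - x1 ** 4)) \
        + 1.0 / (5108.0 * x1 ** 2) - 1.0
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x2 + x1) / 1.5 - 1.0
    return weight, [g1, g2, g3, g4]

# Approximate, commonly cited near-optimal design (weight around 0.01267)
w, g = spring((0.05169, 0.35675, 11.2871))
print(round(w, 5), [round(gi, 4) for gi in g])
```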
To solve the tensile/compression spring problem, the population size, number of neighbors, and number of iterations determined via sensitivity analysis in the basic FDA literature are 50, 1, and 200, respectively; in this paper, all the algorithm parameters were selected based on the basic FDA literature [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. The LSRFDA search results were compared with those of different algorithms over 10 random runs. In Table 7, NO means that the corresponding literature does not give the value for that algorithm. The LSRFDA's highest value, mean value, and standard deviation are higher than those of the FDA, but its lowest value is lower than that of the FDA.

5.3. The Speed Reducer Design Problem

The aim of the speed reducer design problem is to find the minimum cost under different constraints. The objective function can be presented as follows:
$$\begin{aligned}
\text{Minimize } f(x) = {} & 0.7854\,x_1 x_2^2 \left( 3.3333\,x_3^2 + 14.9334\,x_3 - 43.0934 \right) - 1.508\,x_1 \left( x_6^2 + x_7^2 \right) \\
& + 7.4777 \left( x_6^3 + x_7^3 \right) + 0.7854 \left( x_4 x_6^2 + x_5 x_7^2 \right) \\
\text{Subject to: } & g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0 \\
& g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0 \\
& g_3(x) = \frac{1.93\,x_4^3}{x_2 x_6^4 x_3} - 1 \le 0 \\
& g_4(x) = \frac{1.93\,x_5^3}{x_2 x_7^4 x_3} - 1 \le 0 \\
& g_5(x) = \frac{\left[ \left( 745\,x_4 / (x_2 x_3) \right)^2 + 16.9 \times 10^6 \right]^{0.5}}{110\,x_6^3} - 1 \le 0 \\
& g_6(x) = \frac{\left[ \left( 745\,x_5 / (x_2 x_3) \right)^2 + 157.5 \times 10^6 \right]^{0.5}}{85\,x_7^3} - 1 \le 0 \\
& g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0 \\
& g_8(x) = \frac{5 x_2}{x_1} - 1 \le 0 \\
& g_9(x) = \frac{x_1}{12 x_2} - 1 \le 0 \\
& g_{10}(x) = \frac{1.5\,x_6 + 1.9}{x_4} - 1 \le 0 \\
& g_{11}(x) = \frac{1.1\,x_7 + 1.9}{x_5} - 1 \le 0 \\
& 2.6 \le x_1 \le 3.6,\quad 0.7 \le x_2 \le 0.8,\quad 17 \le x_3 \le 28,\quad 7.3 \le x_4 \le 8.3, \\
& 7.3 \le x_5 \le 8.3,\quad 2.9 \le x_6 \le 3.9,\quad 5.0 \le x_7 \le 5.5
\end{aligned}$$
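A compact sketch of the cost function and the eleven constraints is given below; the design evaluated at the end is a commonly cited near-optimal point from the literature, included only as a plausibility check, not a result of this paper.

```python
def speed_reducer(x):
    """Cost and the 11 constraints g1-g11 of the speed reducer formulation above.
    Feasible when every g <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    cost = (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))
    g = [
        27.0 / (x1 * x2 ** 2 * x3) - 1.0,
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1.0,
        1.93 * x4 ** 3 / (x2 * x6 ** 4 * x3) - 1.0,
        1.93 * x5 ** 3 / (x2 * x7 ** 4 * x3) - 1.0,
        ((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6) ** 0.5 / (110.0 * x6 ** 3) - 1.0,
        ((745.0 * x5 / (x2 * x3)) ** 2 + 157.5e6) ** 0.5 / (85.0 * x7 ** 3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ]
    return cost, g

# Commonly cited near-optimal design; the cost evaluates to roughly 2994-2996
cost, g = speed_reducer((3.5, 0.7, 17.0, 7.3, 7.715, 3.350, 5.287))
print(round(cost, 2), max(g) < 1e-3)
```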
To solve the speed reducer design problem, the population size, number of neighbors, and number of iterations determined via sensitivity analysis in the basic FDA literature are 50, 1, and 200, respectively. In this paper, all the algorithm parameters were selected based on the basic FDA literature, and the results are shown in Table 8. In Table 8, NO means that the corresponding literature does not give the value for that algorithm [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46].
The LSRFDA search’s highest value, lowest value, mean value, and standard deviation are higher than those of the FDA search result. Although the test results of the FDA are better than those of the LSRFDA, there is no one algorithm that can solve all engineering problems. Different algorithms have different advantages, and are fit for different object functions.

5.4. The Gear Train Problem

The aim of the gear train design problem is to minimize the cost of the gear ratio, i.e., the squared deviation between the achieved and the desired ratio. Figure 22 shows the structure of the gear train design problem. The decision variables are nA, nB, nD, and nF, denoted by x1, x2, x3, and x4, respectively; A, B, D, and F mark the gear centres. To handle the discrete variables, all solutions are rounded to the nearest integer.
The objective function can be presented as follows:
$$\begin{aligned}
\text{Minimize } & f(x) = \left( \frac{1}{6.931} - \frac{x_3 x_2}{x_1 x_4} \right)^2 \\
& 12 \le x_i \le 60,\quad i = 1, 2, 3, 4
\end{aligned}$$
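Because the decision variables are numbers of gear teeth, candidate solutions are rounded to integers before evaluation, as noted above. A minimal sketch is given below; the rounding with np.rint and the example solution are illustrative, the latter being a widely reported best integer design.

```python
import numpy as np

def gear_train(x):
    """Gear ratio error from the objective above; the decision variables are numbers of
    teeth, so candidates are first rounded to integers and clipped to [12, 60]."""
    x1, x2, x3, x4 = np.clip(np.rint(x), 12, 60)
    return float((1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2)

# A widely reported best integer design (nA, nB, nD, nF) = (49, 16, 19, 43): error ~ 2.7e-12
print(gear_train([49, 16, 19, 43]))
```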
To solve the gear train design problem, the population size, number of neighbors, and number of iterations determined via sensitivity analysis in the basic FDA literature are 50, 1, and 200, respectively. In this paper, all the algorithm parameters were selected based on the basic FDA literature, and the results are shown in Table 9. In Table 9, NO means that the corresponding literature does not give the value for that algorithm [33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48].
The LSRFDA search’s highest value is the same as that of the FDA search result, but the other three values are higher than those of the basic FDA.

6. Discussion

The original FDA literature compared several intelligent algorithms on functions and engineering optimization problems. The research objective of this paper was to enhance the searching ability and iteration speed of the basic FDA and its ability to escape local optima. To examine this objective, this paper first tested 16 different functions and analyzed the numerical results, the algorithm sub-sequence results, the Wilcoxon rank sum test, the iteration curves, the box plots, and the search paths. Then, this paper solved engineering optimization problems, including the three-bar truss problem, the tensile/compression spring problem, the speed reducer design problem, and the gear train problem. To better discuss the algorithms' performance, radar charts in Figure 23 show the ranking of the algorithms for each function: the closer an algorithm's points are to the center of the chart, and the smaller the polygon it forms, the higher its search accuracy and the better its performance. It can be seen that the LSRFDA surrounds the radar chart center for functions of all dimensions and ranks first among the compared algorithms for all the test functions. Regarding the LSRFDA's limitations and disadvantages, its computational complexity is higher than that of the original algorithm because of the Lévy flight strategy; since Lévy flight defines a scale-invariant walk that mixes long strides with short ones, engineering optimization problems that demand low computational cost may restrict its use. In some cases the LSRFDA search path is longer than that of the FDA, and for some engineering optimization problems the results of other algorithms are better than those of the proposed method. However, no single algorithm can solve all problems; the LSRFDA has its own advantages and disadvantages.

7. Conclusions

In this paper, the LSRFDA was proposed to solve optimization problems; it has strong exploitation ability. The proposed algorithm combines the Lévy flight strategy with the self-renewable method, which enhances the searching ability and iteration speed of the basic FDA and its ability to escape local optima. The combination of the two strategies enables the LSRFDA to effectively follow the correct direction based on the available information and increases the robustness and adaptability of the algorithm; it also generates a uniform random distribution of solutions across the search space at the beginning of the run. This paper focused on some difficulties encountered by the FDA in the iterative optimization process and applied the improved FDA to engineering optimization problems, providing new methods and ideas for the field of engineering optimization.
The mathematical benchmark experiments show that the proposed algorithm has better searching ability than the basic FDA on different benchmark functions, including low-dimensional and high-dimensional functions, demonstrating improved searching ability and iteration speed. Four engineering optimization problems were then selected to further test the performance of the proposed algorithm. For the benchmark experiments, iterative figures, box plots, and search paths were drawn to show the different behaviors of the LSRFDA; the results show that the LSRFDA can jump out of local optimal regions and explore a larger area of the search space. In general, the LSRFDA has better search ability than the basic FDA. In the future, the LSRFDA will be applied to practical industrial problems, and more LSRFDA variants will be developed. We plan to establish a fusion model based on the FDA and pattern recognition technology, deeply integrating the hybrid FDA's intelligent optimization with pattern recognition; this could not only achieve adaptive configuration of model parameters but also exploit its global convergence performance to further improve the training and learning algorithms in pattern recognition models, enhancing their computational accuracy and convergence speed. We will also research fault diagnosis methods based on the FDA, integrate new fault diagnosis methods, and carry out engineering application research, especially for large and complex mechanical systems, expanding the scope of fault diagnosis applications.

Author Contributions

Conceptualization, Y.W.; formal analysis, Q.Z.; investigation, Y.W.; resources, Y.W.; writing—original draft preparation, Y.F., S.Z. and D.X.; writing—review and editing, Q.Z.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by basic research business fee projects of provincial undergraduate universities in Heilongjiang Province, grant number: 2022-KYYWF-0144, and the National Natural Science Foundation of China, grant number: 52175502.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marseglia, G.; Mesa, J.A.; Ortega, F.A.; Piedra-de-la-Cuadra, R. A heuristic for the deployment of collecting routes for urban recycle stations (eco-points). Socio-Econ. Plan. Sci. 2022, 82, 101222. [Google Scholar] [CrossRef]
  2. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  3. Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702. [Google Scholar] [CrossRef]
  4. Halimu, Y.; Zhou, C.; You, Q.; Sun, J. A Quantum-Behaved Particle Swarm Optimization Algorithm on Riemannian Manifolds. Mathematics 2022, 10, 4168. [Google Scholar] [CrossRef]
  5. Hou, Y.; Zhang, Y.; Lu, J.; Hou, N.; Yang, D. Application of improved multi-strategy MPA-VMD in pipeline leakage detection. Syst. Sci. Control. Eng. 2023, 11, 2177771. [Google Scholar] [CrossRef]
  6. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  7. You, J.; Jia, H.; Wu, D.; Rao, H.; Wen, C.; Liu, Q.; Abualigah, L. Modified Artificial Gorilla Troop Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2023, 11, 1256. [Google Scholar] [CrossRef]
  8. Liang, X.; Cai, Z.; Wang, M.; Zhao, X.; Chen, H.; Li, C. Chaotic oppositional sine–cosine method for solving global optimization problems. Eng. Comput. 2020, 38, 1223–1239. [Google Scholar] [CrossRef]
  9. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  10. Goldanloo, M.J.; Gharehchopogh, F.S. A hybrid OBL-based firefly algorithm with symbiotic organisms search algorithm for solving continuous optimization problems. J. Supercomput. 2021, 78, 3998–4031. [Google Scholar] [CrossRef]
  11. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181. [Google Scholar] [CrossRef]
  12. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  13. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2015, 31, 1995–2014. [Google Scholar] [CrossRef]
  14. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804. [Google Scholar] [CrossRef]
  15. Chou, J.-S.; Truong, D.-N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  16. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  17. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2018, 23, 715–734. [Google Scholar] [CrossRef]
  18. Zainel, Q.M.; Darwish, S.M.; Khorsheed, M.B. Employing Quantum Fruit Fly Optimization Algorithm for Solving Three-Dimensional Chaotic Equations. Mathematics 2022, 10, 4147. [Google Scholar] [CrossRef]
  19. Cheng, M.-Y.; Prayogo, D. Symbiotic Organisms Search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112. [Google Scholar] [CrossRef]
  20. Liu, Z.; Peng, Y. Study on Denoising Method of Vibration Signal Induced by Tunnel Portal Blasting Based on WOA-VMD Algorithm. Appl. Sci. 2023, 13, 3322. [Google Scholar] [CrossRef]
  21. Karami, H.; Anaraki, M.V.; Farzin, S.; Mirjalili, S. Flow Direction Algorithm (FDA): A Novel Optimization Approach for Solving Optimization Problems. Comput. Ind. Eng. 2021, 156, 107224. [Google Scholar] [CrossRef]
  22. Abualigah, L.; Almotairi, K.H.; Elaziz, M.A.; Shehab, M.; Altalhi, M. Enhanced Flow Direction Arithmetic Optimization Algorithm for mathematical optimization problems with applications of data clustering. Eng. Anal. Bound. Elem. 2022, 138, 13–29. [Google Scholar] [CrossRef]
  23. Jourdain, B.; Méléard, S.; Woyczynski, W.A. Lévy flights in evolutionary ecology. J. Math. Biol. 2012, 65, 677–707. [Google Scholar] [CrossRef] [PubMed]
  24. Pavlyukevich, I. Lévy flights, non-local search and simulated annealing. J. Comput. Phys. 2007, 226, 1830–1844. [Google Scholar] [CrossRef]
  25. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  26. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  27. Osman, I.H. Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem. Ann. Oper. Res 1993, 41, 421–451. [Google Scholar] [CrossRef]
  28. Ray, T.; Liew, K.M. Society and civilization: An optimization algorithm based on the simulation of social behavior. IEEE Trans. Evol. Comput. 2003, 7, 386–396. [Google Scholar] [CrossRef]
  29. Liu, H.; Cai, Z.; Wang, Y. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl. Soft. Comput. 2010, 10, 629–640. [Google Scholar] [CrossRef]
  30. Zhang, M.; Luo, W.; Wang, X. Differential evolution with dynamic stochastic selection for constrained optimization. Inf. Sci. 2008, 178, 3043–3074. [Google Scholar] [CrossRef]
  31. Wang, Y.; Cai, Z.; Zhou, Y.; Fan, Z. Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Struct. Multidiscip. Optim. 2008, 37, 395–413. [Google Scholar] [CrossRef]
  32. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166. [Google Scholar] [CrossRef]
  33. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft. Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  34. Coello Coello, C.A. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
  35. Coello Coello, C.A.; Mezura Montes, E. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inf. 2002, 16, 193–203. [Google Scholar] [CrossRef]
  36. He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422. [Google Scholar] [CrossRef]
  37. Zahara, E.; Kao, Y.-T. Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Syst. Appl. 2009, 36, 3880–3886. [Google Scholar] [CrossRef]
  38. Coelho, L.d.S. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683. [Google Scholar] [CrossRef]
  39. Wang, L.; Li, L.-p. An effective differential evolution with level comparison for constrained engineering design. Struct. Multidiscip. Optim. 2009, 41, 947–963. [Google Scholar] [CrossRef]
  40. Parsopoulos, K.E.; Vrahatis, M.N. Unified Particle Swarm Optimization for Solving Constrained Engineering Optimization Problems. In Proceedings of the Advances in Natural Computation, Changsha, China, 27–29 August 2005; pp. 582–591. Available online: https://link.springer.com/chapter/10.1007/11539902_71 (accessed on 29 August 2005).
  41. Huang, F.-z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  42. Mezura-Montes, E.; Coello, C.A.C. Useful Infeasible Solutions in Engineering Optimization with Evolutionary Algorithms. In Proceedings of the MICAI 2005: Advances in Artificial Intelligence, Monterrey, Mexico, 14–18 November 2005; pp. 652–662. Available online: https://link.springer.com/chapter/10.1007/11579427_66 (accessed on 18 November 2005).
  43. Akay, B.; Karaboga, D. Artificial bee colony algorithm for large-scale problems and engineering design optimization. J. Intell. Manuf. 2010, 23, 1001–1014. [Google Scholar] [CrossRef]
  44. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  45. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  46. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  47. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2011, 29, 17–35. [Google Scholar] [CrossRef]
  48. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
Figure 1. Lévy flight path. (a) Two dimensional Lévy flight path; (b) Three dimensional Lévy flight path.
Figure 2. The flowchart for LSRFDA.
Figure 3. Sub-sequence run radar charts for low−dimensional functions. (a) f1.; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6; (g) f7; (h) f8; (i) f9; (j) f10.
Figure 4. Sub-sequence run radar charts for variable–dimensional functions (D = 2). (a) f11(D=2); (b) f12(D=2); (c) f13(D=2); (d) f14(D=2); (e) f15(D=2); (f) f16(D=2).
Figure 5. Sub-sequence run radar charts for variable–dimensional functions (D = 30). (a) f11(D=30); (b) f12(D=30); (c) f13(D=30); (d) f14(D=30); (e) f15(D=30); (f) f16(D=30).
Figure 6. Sub-sequence run radar charts for variable–dimensional functions (D = 60). (a) f11(D=60); (b) f12(D=60); (c) f13(D=60); (d) f14(D=60); (e) f15(D=60); (f) f16(D=60).
Figure 7. Sub-sequence run radar charts for variable–dimensional functions (D = 150). (a) f11(D=150); (b) f12(D=150); (c) f13(D=150); (d) f14(D=150); (e) f15(D=150); (f) f16(D=150).
Figure 8. Convergence curves for low−dimensional functions. (a) f1.; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6; (g) f7; (h) f8; (i) f9; (j) f10.
Figure 9. Convergence curves for variable–dimensional functions (D = 2). (a) f11(D=2); (b) f12(D=2); (c) f13(D=2); (d) f14(D=2); (e) f15(D=2); (f) f16(D=2).
Figure 10. Convergence curves for variable–dimensional functions (D = 30). (a) f11(D=30); (b) f12(D=30); (c) f13(D=30); (d) f14(D=30); (e) f15(D=30); (f) f16(D=30).
Figure 11. Convergence curves for variable–dimensional functions (D = 60). (a) f11(D=60); (b) f12(D=60); (c) f13(D=60); (d) f14(D=60); (e) f15(D=60); (f) f16(D=60).
Figure 12. Convergence curves for variable–dimensional functions (D = 150). (a) f11(D=150); (b) f12(D=150); (c) f13(D=150); (d) f14(D=150); (e) f15(D=150); (f) f16(D=150).
Figure 13. Box plot charts for low dimension functions. (a) f1.; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6; (g) f7; (h) f8; (i) f9; (j) f10.
Figure 14. Box plot charts for variable–dimensional functions (D = 2). (a) f11(D=2); (b) f12(D=2); (c) f13(D=2); (d) f14(D=2); (e) f15(D=2); (f) f16(D=2).
Figure 15. Box plot charts for variable–dimensional functions (D = 30). (a) f11(D=30); (b) f12(D=30); (c) f13(D=30); (d) f14(D=30); (e) f15(D=30); (f) f16(D=30).
Figure 16. Box plot charts for variable–dimensional functions (D = 60). (a) f11(D=60); (b) f12(D=60); (c) f13(D=60); (d) f14(D=60); (e) f15(D=60); (f) f16(D=60).
Figure 17. Box plot charts for variable–dimensional functions (D = 150). (a) f11(D=150); (b) f12(D=150); (c) f13(D=150); (d) f14(D=150); (e) f15(D=150); (f) f16(D=150).
Figure 18. Three-dimensional graphs of benchmark functions. (a) f1; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6; (g) f7; (h) f8; (i) f9; (j) f10; (k) f11(D=2); (l) f12(D=2); (m) f13(D=2); (n) f14(D=2); (o) f15(D=2); (p) f16(D=2).
Figure 19. Algorithm search paths. (a) f1; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6; (g) f7; (h) f8; (i) f9; (j) f10; (k) f11(D=2); (l) f12(D=2); (m) f13(D=2); (n) f14(D=2); (o) f15(D=2); (p) f16(D=2).
Figure 20. Schematic of the three-bar truss problem.
Figure 21. Schematic of the tensile/compression spring problem.
Figure 22. Schematic of the gear train problem.
Figure 23. Algorithm ranking. (a) Algorithm ranking in two-dimensional functions; (b) Algorithm ranking in 30-dimensional functions; (c) Algorithm ranking in 60-dimensional functions; (d) Algorithm ranking in 150-dimensional functions.
Table 1. Basic information on benchmark functions.
| Name | Function | D | Range | fmin |
|---|---|---|---|---|
| Beale | f1(x) = (1.5 − x1 + x1x2)^2 + (2.25 − x1 + x1x2^2)^2 + (2.625 − x1 + x1x2^3)^2 | 2 | [−100, 100] | 0 |
| Booth | f2(x) = (x1 + 2x2 − 7)^2 + (2x1 + x2 − 5)^2 | 2 | [−100, 100] | 0 |
| Cube | f3(x) = 100(x2 − x1^3)^2 + (1 − x1)^2 | 2 | [−100, 100] | 0 |
| Egg Crate | f4(x) = x1^2 + x2^2 + 25(sin^2(x1) + sin^2(x2)) | 2 | [−100, 100] | 0 |
| Himmelblau | f5(x) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2 | 2 | [−100, 100] | 0 |
| Leon | f6(x) = 100(x2 − x1^2)^2 + (1 − x1)^2 | 2 | [−100, 100] | 0 |
| Matyas | f7(x) = 0.26(x1^2 + x2^2) − 0.48x1x2 | 2 | [−100, 100] | 0 |
| Rotated Ellipse 02 | f8(x) = x1^2 − x1x2 + x2^2 | 2 | [−100, 100] | 0 |
| Three-Hump Camel | f9(x) = 2x1^2 − 1.05x1^4 + x1^6/6 + x1x2 + x2^2 | 2 | [−100, 100] | 0 |
| Wayburn Seader 01 | f10(x) = (x1^6 + x2^4 − 17)^2 + (2x1 + x2 − 4)^2 | 2 | [−100, 100] | 0 |
| Griewank | f11(x) = Σ_{i=1}^{D} x_i^2/4000 − Π_{i=1}^{D} cos(x_i/√i) + 1 | 2/30/60/150 | [−10, 10] | 0 |
| Rotated Hyper-Ellipsoid | f12(x) = Σ_{i=1}^{D} Σ_{j=1}^{i} x_j^2 | 2/30/60/150 | [−10, 10] | 0 |
| Sphere | f13(x) = Σ_{i=1}^{D} x_i^2 | 2/30/60/150 | [−10, 10] | 0 |
| Sum Squares | f14(x) = Σ_{i=1}^{D} i·x_i^2 | 2/30/60/150 | [−10, 10] | 0 |
| Sum of Different Powers | f15(x) = Σ_{i=1}^{D} abs(x_i)^(i+1) | 2/30/60/150 | [−10, 10] | 0 |
| Zakharov | f16(x) = Σ_{i=1}^{D} x_i^2 + (Σ_{i=1}^{D} 0.5·i·x_i)^2 + (Σ_{i=1}^{D} 0.5·i·x_i)^4 | 2/30/60/150 | [−10, 10] | 0 |
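As an illustration of how the benchmarks in Table 1 can be reproduced, the following Python sketch implements two of the variable-dimensional functions (Griewank f11 and Zakharov f16); the function names and the NumPy dependency are implementation choices, not part of the original test suite.

```python
import numpy as np

def griewank(x):
    """Griewank (f11): sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))) + 1."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def zakharov(x):
    """Zakharov (f16): sum(x_i^2) + (sum(0.5*i*x_i))^2 + (sum(0.5*i*x_i))^4."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    s = np.sum(0.5 * i * x)
    return np.sum(x**2) + s**2 + s**4

# Both functions attain their minimum value 0 at the origin for any dimension D.
print(griewank(np.zeros(30)), zakharov(np.zeros(30)))  # 0.0 0.0
```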
Table 2. Parameter settings of the compared algorithms.
| Algorithm | Key Feature Parameter | Max_Iter | N |
|---|---|---|---|
| LSRFDA | Lévy flight length: Sl. Power law index: β. | 200 | 50 |
| FDA | Random vector: rand̄. Random position: Xrand. Random number: rand. | 200 | 50 |
| MFO | Random number: t. Constant: b. | 200 | 50 |
| MVO | Random numbers: r1, r2, r3, r4. Coefficient numbers: TDR, WEP. Exploitation accuracy: p. Minimum and maximum numbers: min, max. | 200 | 50 |
| SA | Initial temperature: t0. Attenuation factor: k. | 200 | 50 |
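Table 2 lists the Lévy flight length Sl and the power-law index β as the key LSRFDA parameters. The sketch below shows one common way to draw Lévy-distributed steps (Mantegna's construction); it is an illustrative implementation, not necessarily the exact update rule used in the paper, and the step scale is chosen arbitrarily.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    # Mantegna's construction of a Lévy-stable step with power-law index beta;
    # beta corresponds to the index listed in Table 2, but the exact LSRFDA
    # update rule is not reproduced here.
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size=dim)
    v = np.random.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a candidate flow position; the scale factor plays the role of the
# flight length Sl (value chosen for illustration only).
position = np.random.uniform(-10, 10, size=30)
candidate = position + 0.01 * levy_step(30)
```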
Table 3. Comparison of results for two-dimensional functions.
| Function | Metric | LSRFDA | FDA | MFO | MVO | SA |
|---|---|---|---|---|---|---|
| f1 | Min | 0 | 0 | 0 | 2.6946 × 10^−5 | 0.0096 |
| f1 | Max | 0 | 7.3704 × 10^−7 | 1.4092 × 10^−16 | 0.4792 | 0.4769 |
| f1 | Ave | 0 | 7.3704 × 10^−8 | 1.4092 × 10^−17 | 0.1294 | 0.3301 |
| f2 | Min | 0 | 0 | 0 | 2.2357 × 10^−6 | 0.0017 |
| f2 | Max | 0 | 0 | 0 | 0.0005 | 0.0174 |
| f2 | Ave | 0 | 0 | 0 | 0.0002 | 0.0069 |
| f3 | Min | 0 | 0 | 9.5342 × 10^−5 | 0.0003 | 0.0132 |
| f3 | Max | 1.1093 × 10^−31 | 1.6575 × 10^−16 | 0.3662 | 8.4851 | 4.4566 |
| f3 | Ave | 1.6024 × 10^−32 | 1.6575 × 10^−17 | 0.0538 | 1.8117 | 1.3225 |
| f4 | Min | 0 | 1.4725 × 10^−43 | 6.9502 × 10^−45 | 1.8314 × 10^−5 | 0.3767 |
| f4 | Max | 0 | 1.9320 × 10^−34 | 4.0349 × 10^−38 | 0.0066 | 85.3771 |
| f4 | Ave | 0 | 3.6561 × 10^−35 | 4.0399 × 10^−39 | 0.0023 | 33.2411 |
| f5 | Min | 0 | 0 | 0 | 0.0005 | 0.0108 |
| f5 | Max | 3.1554 × 10^−30 | 8.8920 × 10^−26 | 9.3983 × 10^−26 | 0.0039 | 0.3501 |
| f5 | Ave | 7.0998 × 10^−31 | 8.8922 × 10^−27 | 1.3632 × 10^−26 | 0.0021 | 0.0934 |
| f6 | Min | 0 | 2.7794 × 10^−45 | 1.4127 × 10^−9 | 0.0006 | 0.0185 |
| f6 | Max | 0 | 1.3670 × 10^−42 | 2.6005 | 38.6167 | 0.3772 |
| f6 | Ave | 0 | 2.6631 × 10^−43 | 0.2607 | 7.7896 | 0.1339 |
| f7 | Min | 0 | 1.2152 × 10^−47 | 1.9600 × 10^−39 | 4.9588 × 10^−7 | 2.4612 × 10^−5 |
| f7 | Max | 0 | 1.7254 × 10^−40 | 1.8402 × 10^−18 | 3.2146 × 10^−5 | 0.0011 |
| f7 | Ave | 0 | 1.7603 × 10^−41 | 2.3675 × 10^−19 | 1.2289 × 10^−5 | 0.0005 |
| f8 | Min | 0 | 8.3261 × 10^−45 | 4.0001 × 10^−47 | 7.4928 × 10^−6 | 0.0002 |
| f8 | Max | 0 | 6.6272 × 10^−37 | 2.2733 × 10^−39 | 0.0001 | 0.0054 |
| f8 | Ave | 0 | 6.8364 × 10^−38 | 2.3244 × 10^−40 | 6.6311 × 10^−5 | 0.002970385 |
| f9 | Min | 0 | 7.3650 × 10^−46 | 4.5065 × 10^−47 | 5.4736 × 10^−5 | 0.0004 |
| f9 | Max | 0 | 1.6197 × 10^−38 | 4.4818 × 10^−42 | 0.0003 | 0.3006 |
| f9 | Ave | 0 | 1.9097 × 10^−39 | 8.7414 × 10^−43 | 0.0001 | 0.0416 |
| f10 | Min | 0 | 0 | 0 | 0.0009 | 0.0408 |
| f10 | Max | 1.3411 × 10^−29 | 7.8886 × 10^−31 | 2.6225 × 10^−21 | 0.0202 | 4.5708 |
| f10 | Ave | 1.9722 × 10^−30 | 3.1554 × 10^−31 | 3.5098 × 10^−22 | 0.0086 | 0.7425 |
| f11(D=2) | Min | 0 | 0 | 0 | 7.2263 × 10^−7 | 0.0006 |
| f11(D=2) | Max | 0 | 3.6366 × 10^−11 | 0.0099 | 0.0296 | 0.0050 |
| f11(D=2) | Ave | 0 | 5.5746 × 10^−12 | 0.0064 | 0.0150 | 0.0019 |
| f12(D=2) | Min | 0 | 2.4682 × 10^−41 | 2.1456 × 10^−49 | 8.3964 × 10^−8 | 4.3981 × 10^−5 |
| f12(D=2) | Max | 0 | 6.8949 × 10^−35 | 3.3578 × 10^−44 | 3.8819 × 10^−6 | 0.0011 |
| f12(D=2) | Ave | 0 | 6.9221 × 10^−36 | 3.4030 × 10^−45 | 1.3765 × 10^−6 | 0.0005 |
| f13(D=2) | Min | 0 | 1.3668 × 10^−39 | 2.1177 × 10^−49 | 2.0121 × 10^−7 | 2.5717 × 10^−5 |
| f13(D=2) | Max | 0 | 4.4792 × 10^−36 | 1.2284 × 10^−44 | 2.0760 × 10^−6 | 0.0005 |
| f13(D=2) | Ave | 0 | 9.0174 × 10^−37 | 2.8524 × 10^−45 | 1.3137 × 10^−6 | 0.0002 |
| f14(D=2) | Min | 0 | 3.3512 × 10^−41 | 4.8850 × 10^−53 | 4.4806 × 10^−8 | 2.2747 × 10^−7 |
| f14(D=2) | Max | 0 | 3.6176 × 10^−35 | 2.8707 × 10^−45 | 4.4295 × 10^−6 | 0.0008 |
| f14(D=2) | Ave | 0 | 6.0365 × 10^−36 | 3.0331 × 10^−46 | 1.1654 × 10^−6 | 0.0003 |
| f15(D=2) | Min | 0 | 9.3840 × 10^−61 | 4.2891 × 10^−58 | 7.1028 × 10^−10 | 1.4138 × 10^−6 |
| f15(D=2) | Max | 0 | 2.2466 × 10^−49 | 1.2316 × 10^−52 | 1.4890 × 10^−7 | 0.0001 |
| f15(D=2) | Ave | 0 | 2.2469 × 10^−50 | 1.3073 × 10^−53 | 2.7849 × 10^−8 | 2.8356 × 10^−5 |
| f16(D=2) | Min | 0 | 0 | 1.5809 × 10^−49 | 1.3379 × 10^−8 | 1.5844 × 10^−5 |
| f16(D=2) | Max | 0 | 1.0366 × 10^−29 | 8.0164 × 10^−45 | 4.5850 × 10^−6 | 0.0004 |
| f16(D=2) | Ave | 0 | 1.0366 × 10^−30 | 1.9613 × 10^−45 | 1.4956 × 10^−6 | 0.0002 |
Table 4. Comparison of results for 30-, 60-, and 150-dimensional functions.
| Function | Metric | LSRFDA | FDA | MFO | MVO | SA |
|---|---|---|---|---|---|---|
| f11(D=30) | Min | 0 | 0.0090 | 0.1578 | 0.0031 | 1.0736 |
| f11(D=30) | Max | 0 | 0.0407 | 0.5812 | 0.0283 | 1.1207 |
| f11(D=30) | Ave | 0 | 0.0215 | 0.3867 | 0.0142 | 1.1042 |
| f12(D=30) | Min | 0 | 0.0664 | 94.4089 | 0.9747 | 3.3371 × 10^2 |
| f12(D=30) | Max | 0 | 0.5313 | 3.2261 × 10^3 | 6.0745 | 6.1136 × 10^2 |
| f12(D=30) | Ave | 0 | 0.2051 | 7.9935 × 10^2 | 2.4544 | 4.8891 × 10^2 |
| f13(D=30) | Min | 0 | 0.0090 | 6.24125 | 0.0323 | 23.3850 |
| f13(D=30) | Max | 0 | 0.0617 | 1.1021 × 10^2 | 0.0653 | 45.6998 |
| f13(D=30) | Ave | 0 | 0.0321 | 37.2880 | 0.0459 | 35.8305 |
| f14(D=30) | Min | 0 | 0.0225 | 88.5232 | 1.4932 | 3.2128 × 10^2 |
| f14(D=30) | Max | 0 | 0.2746 | 7.0043 × 10^2 | 21.2344 | 5.6142 × 10^2 |
| f14(D=30) | Ave | 0 | 0.1501 | 3.5094 × 10^2 | 5.5779 | 4.7041 × 10^2 |
| f15(D=30) | Min | 0 | 0.0002 | 1.6312 × 10^10 | 8.3515 × 10^−6 | 5.3366 × 10^5 |
| f15(D=30) | Max | 0 | 0.0143 | 1.0000 × 10^19 | 0.0004 | 1.0801 × 10^9 |
| f15(D=30) | Ave | 0 | 0.0030 | 1.0021 × 10^18 | 9.7982 × 10^−5 | 2.2005 × 10^8 |
| f16(D=30) | Min | 0 | 1.9756 | 4.2004 × 10^2 | 2.0136 | 4.3655 × 10^2 |
| f16(D=30) | Max | 0 | 21.7968 | 1.0012 × 10^3 | 7.4879 | 5.8471 × 10^2 |
| f16(D=30) | Ave | 0 | 6.3128 | 6.0961 × 10^2 | 5.1488 | 5.2551 × 10^2 |
| f11(D=60) | Min | 0 | 0.1052 | 1.0575 | 0.0465 | 1.2566 |
| f11(D=60) | Max | 0 | 0.3525 | 1.1369 | 0.1060 | 1.3302 |
| f11(D=60) | Ave | 0 | 0.2363 | 1.0920 | 0.0798 | 1.2894 |
| f12(D=60) | Min | 0 | 14.2737 | 4.3071 × 10^3 | 45.4171 | 6.3446 × 10^3 |
| f12(D=60) | Max | 0 | 34.1611 | 1.2711 × 10^4 | 1.4704 × 10^2 | 8.0107 × 10^3 |
| f12(D=60) | Ave | 0 | 24.7841 | 7.6325 × 10^3 | 1.0367 × 10^2 | 7.2494 × 10^3 |
| f13(D=60) | Min | 0 | 0.8442 | 1.7026 × 10^2 | 0.6110 | 2.3061 × 10^2 |
| f13(D=60) | Max | 0 | 2.4553 | 4.5365 × 10^2 | 0.9425 | 3.4922 × 10^2 |
| f13(D=60) | Ave | 0 | 1.4675 | 3.5684 × 10^2 | 0.7346 | 2.9886 × 10^2 |
| f14(D=60) | Min | 0 | 17.5099 | 6.6668 × 10^3 | 63.1508 | 4.9899 × 10^3 |
| f14(D=60) | Max | 0 | 43.0130 | 1.3593 × 10^4 | 1.5904 × 10^2 | 7.8820 × 10^3 |
| f14(D=60) | Ave | 0 | 30.8744 | 9.2256 × 10^3 | 1.1490 × 10^2 | 6.5872 × 10^3 |
| f15(D=60) | Min | 0 | 1.7966 × 10^2 | 1.1223 × 10^31 | 7.3723 × 10^7 | 3.4568 × 10^24 |
| f15(D=60) | Max | 0 | 9.7602 × 10^8 | 1.0011 × 10^45 | 1.1733 × 10^12 | 1.5364 × 10^28 |
| f15(D=60) | Ave | 0 | 9.7971 × 10^7 | 1.1012 × 10^44 | 2.5344 × 10^11 | 2.4717 × 10^27 |
| f16(D=60) | Min | 0 | 43.9151 | 1.4475 × 10^3 | 4.5691 × 10^2 | 1.3422 × 10^3 |
| f16(D=60) | Max | 0 | 1.4094 × 10^2 | 2.5997 × 10^3 | 7.3221 × 10^2 | 1.5686 × 10^3 |
| f16(D=60) | Ave | 0 | 75.8951 | 2.0931 × 10^3 | 6.1012 × 10^2 | 1.4595 × 10^3 |
| f11(D=150) | Min | 0 | 0.5745 | 1.6283 | 0.4738 | 1.8267 |
| f11(D=150) | Max | 0 | 0.8067 | 1.6862 | 0.5651 | 1.9380 |
| f11(D=150) | Ave | 0 | 0.6626 | 1.6565 | 0.5266 | 1.8827 |
| f12(D=150) | Min | 0 | 1.1559 × 10^3 | 1.3637 × 10^5 | 45.9185 | 1.1103 × 10^5 |
| f12(D=150) | Max | 0 | 2.8335 × 10^3 | 1.9966 × 10^5 | 1.8621 × 10^2 | 1.3772 × 10^5 |
| f12(D=150) | Ave | 0 | 1.9815 × 10^3 | 1.7309 × 10^5 | 97.3272 | 1.2739 × 10^6 |
| f13(D=150) | Min | 0 | 26.1026 | 2.3455 × 10^3 | 33.6426 | 1.8881 × 10^3 |
| f13(D=150) | Max | 0 | 66.3179 | 2.9093 × 10^3 | 40.1903 | 2.1316 × 10^3 |
| f13(D=150) | Ave | 0 | 39.6319 | 2.6249 × 10^5 | 36.4699 | 2.0106 × 10^3 |
| f14(D=150) | Min | 0 | 1.2931 × 10^3 | 1.4680 × 10^5 | 4.4758 × 10^3 | 1.1027 × 10^5 |
| f14(D=150) | Max | 0 | 2.4220 × 10^3 | 2.0119 × 10^5 | 6.8221 × 10^3 | 1.3683 × 10^5 |
| f14(D=150) | Ave | 0 | 1.7784 × 10^3 | 1.7194 × 10^5 | 5.3490 × 10^3 | 1.2501 × 10^5 |
| f15(D=150) | Min | 0 | 9.7063 × 10^28 | 1.0102 × 10^111 | 1.5854 × 10^65 | 4.0170 × 10^85 |
| f15(D=150) | Max | 0 | 1.1564 × 10^45 | 1.0000 × 10^126 | 2.1031 × 10^76 | 2.1882 × 10^94 |
| f15(D=150) | Ave | 0 | 1.1564 × 10^44 | 1.2000 × 10^125 | 2.2457 × 10^75 | 2.9493 × 10^93 |
| f16(D=150) | Min | 0 | 3.5503 × 10^2 | 3.9665 × 10^3 | 3.0960 × 10^3 | 4.1307 × 10^3 |
| f16(D=150) | Max | 0 | 8.3240 × 10^2 | 7.3505 × 10^3 | 4.7742 × 10^3 | 4.5396 × 10^3 |
| f16(D=150) | Ave | 0 | 6.3080 × 10^2 | 5.8527 × 10^3 | 3.9971 × 10^3 | 4.3493 × 10^3 |
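The Min/Max/Ave entries in Tables 3 and 4 summarize repeated independent runs of each algorithm. A minimal sketch of this kind of aggregation is given below; the stand-in optimizer, the number of runs, and all numerical settings are placeholders, not the experimental setup of the paper.

```python
import numpy as np

def summarize_runs(optimizer, objective, runs=10):
    # Best value of each independent run, then the Min/Max/Ave statistics
    # reported in Tables 3 and 4 (the number of runs here is illustrative).
    best = np.array([optimizer(objective) for _ in range(runs)])
    return best.min(), best.max(), best.mean()

# Stand-in optimizer: pure random search on the Sphere function f13.
sphere = lambda x: float(np.sum(x**2))
random_search = lambda f: min(f(np.random.uniform(-10, 10, 30)) for _ in range(1000))
print(summarize_runs(random_search, sphere))
```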
Table 5. Comparison of the Wilcoxon rank sum test results.
| Function | FDA | MFO | MVO | SA |
|---|---|---|---|---|
| f1 | 0.0002 | 0.0022 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f2 | NO | NO | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f3 | 0.2838 | 0.0001 | 0.0001 | 0.0001 |
| f4 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f5 | 0.2885 | 0.0416 | 0.0002 | 0.0002 |
| f6 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f7 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f8 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f9 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f10 | 0.2577 | 0.0447 | 0.0002 | 0.0002 |
| f11(D=2) | 0.0779 | 0.0007 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f12(D=2) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f13(D=2) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f14(D=2) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f15(D=2) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f16(D=2) | 0.3681 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f11(D=30) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f12(D=30) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f13(D=30) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f14(D=30) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f15(D=30) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f16(D=30) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f11(D=60) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f12(D=60) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f13(D=60) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f14(D=60) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f15(D=60) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f16(D=60) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f11(D=150) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f12(D=150) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f13(D=150) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f14(D=150) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f15(D=150) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
| f16(D=150) | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 | 6.39 × 10^−5 |
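The p-values in Table 5 come from the Wilcoxon rank sum test applied to the per-run results of two algorithms. The following sketch shows how such a comparison can be carried out with SciPy; the sample data are invented for illustration only.

```python
import numpy as np
from scipy.stats import ranksums

# Invented per-run best values for two algorithms on one benchmark function;
# in the paper each algorithm's repeated runs are compared in the same way.
lsrfda_runs = np.zeros(10)                           # LSRFDA reaches 0 on many functions
fda_runs = np.abs(np.random.normal(1e-8, 1e-8, 10))

stat, p_value = ranksums(lsrfda_runs, fda_runs)
print(p_value)  # p < 0.05 suggests a statistically significant difference
```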
Table 6. Results of the three-bar truss problem.
| Algorithm | Highest | Lowest | Mean | Std |
|---|---|---|---|---|
| SC | 263.895846 | 263.969756 | 263.903356 | 1.3 × 10^−2 |
| PSO–DE | 263.895843 | 263.895843 | 263.895843 | 4.5 × 10^−10 |
| DSS-MDE | 263.895843 | 263.895849 | 263.895843 | 9.7 × 10^−7 |
| HEA-ACT | 263.895843 | 263.896099 | 263.895865 | 4.9 × 10^−5 |
| WCA | 263.895843 | 263.896201 | 263.895903 | 8.71 × 10^−5 |
| MBA | 263.895852 | 263.915983 | 263.897996 | 3.93 × 10^−3 |
| FDA | 263.895843 | 263.906102 | 263.896416 | 0.019 |
| LSRFDA | 263.89584341 | 263.89588457 | 263.89585579 | 1.521133 × 10^−5 |
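For reference, the sketch below encodes the widely used formulation of the three-bar truss benchmark (weight minimization under three stress constraints) whose optimum of roughly 263.8958 appears in Table 6. The constants and the quoted near-optimal design are standard literature values for this benchmark, not results taken from the paper.

```python
import numpy as np

P, SIGMA, L = 2.0, 2.0, 100.0  # load, allowable stress, bar length (standard benchmark constants)

def three_bar_truss(x):
    # Structural weight plus three stress constraints; g_i <= 0 means feasible.
    a1, a2 = x
    weight = (2.0 * np.sqrt(2.0) * a1 + a2) * L
    g1 = (np.sqrt(2.0) * a1 + a2) / (np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P - SIGMA
    g2 = a2 / (np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P - SIGMA
    g3 = 1.0 / (a1 + np.sqrt(2.0) * a2) * P - SIGMA
    return weight, (g1, g2, g3)

# Near-optimal design commonly quoted in the literature: A1 ≈ 0.7887, A2 ≈ 0.4082.
print(three_bar_truss([0.7887, 0.4082]))  # weight ≈ 263.9
```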
Table 7. Results of the tensile/compression spring problem.
| Algorithm | Highest | Lowest | Mean | Std |
|---|---|---|---|---|
| GA3 | 0.0127048 | 0.0128220 | 0.0127690 | 3.94 × 10^−5 |
| GA4 | 0.0126810 | 0.0129730 | 0.0127420 | 5.90 × 10^−5 |
| HPSO | 0.0126652 | 0.0127190 | 0.0127072 | 1.58 × 10^−5 |
| NM-PSO | 0.0126302 | 0.0126330 | 0.0126314 | 8.47 × 10^−7 |
| G-QPSO | 0.012665 | 0.017759 | 0.013524 | 0.001268 |
| QPSO | 0.012669 | 0.018127 | 0.013854 | 0.001341 |
| PSO | 0.012857 | 0.071802 | 0.019555 | 0.011662 |
| DELC | 0.012665233 | 0.012665575 | 0.012665267 | 1.3 × 10^−7 |
| DSS-MDE | 0.012665233 | 0.012738262 | 0.012669366 | 1.3 × 10^−5 |
| HEA-ACT | 0.012665233 | 0.012665240 | 0.012665234 | 1.4 × 10^−9 |
| PSO-DE | 0.012665233 | 0.012665304 | 0.012665244 | 1.2 × 10^−8 |
| SC | 0.012669249 | 0.016717272 | 0.012922669 | 5.9 × 10^−4 |
| UPSO | 0.01312 | 0.0503651 | 0.02294 | 7.20 × 10^−3 |
| CDE | 0.01267 | NO | 0.012703 | NO |
| (λ + µ)-ES | 0.012689 | NO | 0.013165 | 3.9 × 10^−4 |
| ABC | 0.012665 | NO | 0.012709 | 0.012813 |
| TLBO | 0.012665 | NO | 0.01266576 | NO |
| MBA | 0.012665 | 0.012900 | 0.012713 | 6.30 × 10^−5 |
| WCA | 0.012665 | 0.012665 | 0.012665 | 8.06 × 10^−5 |
| CSA | 0.0126652328 | 0.0126701816 | 0.0126659984 | 1.357079 × 10^−6 |
| FDA | 0.0126652761 | 0.0177770845 | 0.0127895914 | 2.0881 × 10^−4 |
| LSRFDA | 0.012665351461 | 0.013588874352 | 0.012834281713 | 2.873818 × 10^−4 |
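Similarly, the commonly used formulation of the tension/compression spring benchmark behind the ≈0.012665 values in Table 7 can be sketched as follows. The constraint constants and the quoted near-optimal design are standard literature values, not results of the paper.

```python
def spring_design(x):
    # x = [d, D, N] = wire diameter, mean coil diameter, number of active coils;
    # spring weight objective plus four constraints, g_i <= 0 means feasible.
    d, D, N = x
    weight = (N + 2.0) * D * d**2
    g1 = 1.0 - D**3 * N / (71785.0 * d**4)
    g2 = (4.0 * D**2 - d * D) / (12566.0 * (D**3 * d - d**4)) + 1.0 / (5108.0 * d**2) - 1.0
    g3 = 1.0 - 140.45 * d / (D**2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return weight, (g1, g2, g3, g4)

# A near-optimal design frequently reported in the literature.
print(spring_design([0.051690, 0.356750, 11.287126]))  # weight ≈ 0.012665
```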
Table 8. Results of the speed reducer design problem.
| Algorithm | Highest | Lowest | Mean | Std |
|---|---|---|---|---|
| SC | 2994.744241 | 3009.964736 | 3001.758264 | 4.0000 |
| PSO-DE | 2996.348167 | 2996.348204 | 2996.348174 | 6.4 × 10^−6 |
| DELC | 2994.471066 | 2994.471066 | 2994.471066 | 1.9 × 10^−12 |
| DSS-MDE | 2994.471066 | 2994.471066 | 2994.471066 | 3.6 × 10^−12 |
| HEA-ACT | 2994.499107 | 2994.752311 | 2994.613368 | 7.0 × 10^−2 |
| (λ + µ)-ES | 2996.348 | NO | 2996.348 | 0 |
| ABC | 2997.058 | NO | 2997.058 | 0 |
| TLBO | 2996.34817 | NO | 2996.34817 | 0 |
| MBA | 2994.482453 | 2999.652444 | 2996.769019 | 1.56 |
| MRFO | 2994.4800 | 2994.5248 | 2994.4928 | 0.0146 |
| FDA | 2749.5830 | 2749.5830 | 2749.5830 | 5.6753 × 10^−6 |
| LSRFDA | 2996.05139942 | 3014.17940440 | 3005.20935624 | 5.821036 |
Table 9. Results of the gear train design problem.
| Algorithm | Highest | Lowest | Mean | Std |
|---|---|---|---|---|
| UPSO | 2.700857 × 10^−12 | NO | 3.80562 × 10^−8 | 1.09 × 10^−7 |
| ABC | 2.700857 × 10^−12 | NO | 3.641339 × 10^−10 | 5.52 × 10^−10 |
| MBA | 2.700857 × 10^−12 | 2.062904 × 10^−8 | 2.471635 × 10^−9 | 3.94 × 10^−9 |
| CSA | 2.70 × 10^−12 | 3.18 × 10^−8 | 2.06 × 10^−9 | 5.06 × 10^−9 |
| CS | 2.7009 × 10^−12 | 2.3576 × 10^−9 | 1.9841 × 10^−9 | 3.5546 × 10^−9 |
| ALO | 2.7009 × 10^−12 | NO | NO | NO |
| FDA | 2.700857 × 10^−12 | 3.2999 × 10^−9 | 7.5614 × 10^−10 | 8.0465 × 10^−10 |
| LSRFDA | 2.70085715 × 10^−12 | 6.19334585 × 10^−9 | 1.20815960 × 10^−9 | 2.052122 × 10^−9 |
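Finally, the gear train design problem of Table 9 is usually stated as minimizing the squared error between a target gear ratio of 1/6.931 and the ratio realized by four integer tooth counts. A minimal sketch of this standard formulation is given below, with a tooth-count combination known from the literature; it is an illustration of the benchmark, not a reproduction of the paper's experiments.

```python
def gear_train(teeth):
    # Squared error between the target ratio 1/6.931 and the ratio produced by
    # the four integer tooth counts (each in [12, 60] in the usual formulation).
    ta, tb, td, tf = teeth
    return (1.0 / 6.931 - (tb * td) / (ta * tf)) ** 2

# A tooth-count combination known from the literature to attain the reported optimum.
print(gear_train([49, 16, 19, 43]))  # ≈ 2.7009e-12
```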