1. Introduction
Within the concept of a “smart” digital environment, methods of mathematical modeling and machine learning are actively used to design and implement digital twins of complex technical, technological, and organizational systems. In this setting, it is usually necessary to solve difficult global optimization problems in order to automate the selection of effective structures and parameters of the corresponding digital twin models. The effectiveness of global optimization methods depends significantly on the choice of the initial set of solutions, which are subsequently used to find the global optimum or a good local optimum that approximates it. This is especially important when applying global optimization methods to continuously differentiable functions of real variables, because in this case optimal solutions can be guaranteed by the rigorous mathematical apparatus of applied mathematics.
Let a differentiable function
and a convex compact set
with a nonempty interior,
, be given. The problem considered in this paper consists in finding a good local minimum using the multistart strategy. In order to achieve this, it is necessary to allocate
p starting points
in
X, such that they cover
X “more or less uniformly”. The proposed multistart strategy is based on the CONOPT solver [
1].
Various uniform sampling procedures can be used for this purpose. A survey of special methods for allocating points on spheres is presented in [
2]. If
X is a polytope, sampling based on simplicial decomposition of
X is applied, as given in [
3]. In [
4], a class of Markov chain Monte Carlo (MCMC) algorithms for distributing points on polytopes is described. In a more general case, when
X is a convex body, a random walk strategy [
5] based on the MCMC technique is successfully applied. A brief review of different kinds of random walk can be found in [
4]. However, uniform random sampling algorithms are of exponential complexity [
6]. Uniform sampling is usually used for the approximate calculation of an integral or volume of
X. We are interested in finding a good local solution in global optimization problems. The most attractive feature of uniform sampling is the following: a global minimum can be found with probability one as the number of sampled points tends to infinity. However, due to the specifics of high-dimensional spaces [
7], random sampling is not efficient from a practical point of view. Nevertheless, uniform sampling continues to draw attention, and investigations on this topic are of serious interest [
8]. Approaches based on the
p-location problem [
9] and
p-center methodology [
10] can also be used for solving the problems considered in our paper. However, we aimed to check the efficiency of a global optimization approach.
In our paper, we propose a procedure for the good allocation of points on a convex compact set
X. The idea is to use a special global optimization problem as an auxiliary one for allocation. The special global optimization problem consists in maximizing the Euclidean norm plus a linear term over a convex compact set. Because of the particular form of the problem, it can be solved to global optimality for a sufficiently large number of variables, for example, for
. In doing so, we achieve a better covering of set
X by a family of points. We believe that a combination of the proposed approach and advanced metaheuristics [
11] will be of serious practical importance.
The first approach. The most attractive statement of the problem can be formalized as follows:
Problem (
1) means that it is necessary to allocate
p points such that the distance between any two points is the same and is as maximal as possible. In this case, the set
is called the set of equidistant points. However, it is well known that Problem (
1) is solvable only if
. When
, then points
are vertices of a regular simplex. If
, all points
belong to the sphere of radius
centered at
However, in many applications, it is necessary to allocate more than n + 1 points.
The second approach. We move to another problem of the following form:
We want to allocate
p points such that the minimum distance between any two of them is as maximal as possible. Problem (
3) always has a solution since the objective function is continuous and the feasible set is nonempty and compact. The objective function is nonsmooth, but this can be avoided by the standard reduction of Problem (
3) to the following one:
Two main difficulties are unavoidable when solving Problem (
4). Firstly, the number of variables is equal to
. Secondly, the feasible domain is nonconvex. Hence, we have to overcome the nonconvexity of the feasible domain, but we are seriously restricted in dimension
n.
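As a small numeric illustration of the objective in Problems (3) and (4) (the candidate points below are hypothetical), the nonsmooth objective is simply the minimum pairwise distance of the family; reduction (4) replaces this minimum by an auxiliary variable bounded above by every pairwise distance:

```python
import itertools
import math

def min_pairwise_distance(points):
    """Nonsmooth objective of Problem (3): the smallest Euclidean
    distance between any two of the p candidate points."""
    return min(math.dist(a, b)
               for a, b in itertools.combinations(points, 2))

# Four corners of the unit square: the minimum pairwise distance is 1.
corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
print(min_pairwise_distance(corners))  # 1.0
```
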
The third approach. Given
points
, find point
as a solution to the problem
As a result, set
X is covered by
p balls centered at
with radius
equal to
. We start from an arbitrary point
and sequentially determine points
and functions
according to (
5). Let
be the identically zero function on
X. The theoretical foundation of the approach based on solving Problem (
5) is given by the following theorem.
Theorem 1. The sequence of functions uniformly converges to function θ over X.
Proof. Functions
are Lipschitz functions with the same Lipschitz constant. Therefore,
is an equicontinuous sequence of functions. Since
X is a compact set, then
, where
is the diameter of
X, and functions
are uniformly bounded. By construction
. Hence, due to the Arzelà–Ascoli theorem,
is a sequence of functions uniformly convergent to a continuous function
. By construction
; hence,
Assume that
. Let
be a subsequence convergent to a point
such that
. From (
6), due to the continuity of
, we have
, a contradiction, which proves the theorem. □
Hence, we can theoretically achieve a covering of X by a number of balls of sufficiently small radius. In practice, especially in high dimensions, we restrict ourselves to a reasonable value of p.
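On a finite discretization of X, the third approach reduces to classical farthest-point sampling; the following is a sketch under that discretization assumption (the paper solves the continuous Problem (5) instead, so the grid here is purely illustrative):

```python
import numpy as np

def sequentially_distant_points(candidates, p, start_index=0):
    """Greedy farthest-point sketch: each new point maximizes the
    distance to the nearest previously chosen point."""
    chosen = [start_index]
    # dist_to_chosen[i] = distance from candidate i to the closest chosen point
    dist_to_chosen = np.linalg.norm(candidates - candidates[start_index], axis=1)
    for _ in range(p - 1):
        nxt = int(np.argmax(dist_to_chosen))
        chosen.append(nxt)
        d_new = np.linalg.norm(candidates - candidates[nxt], axis=1)
        dist_to_chosen = np.minimum(dist_to_chosen, d_new)
    return candidates[chosen]

# Uniform grid on the unit square as a stand-in for X; start at a corner.
g = np.linspace(0.0, 1.0, 11)
grid = np.array([(x, y) for x in g for y in g])
pts = sequentially_distant_points(grid, 4)
print(pts)  # the four corners of the square
```

The radius of the covering balls in Theorem 1 corresponds here to the final value of the largest entry of `dist_to_chosen`, which decreases to zero as p grows, mirroring the uniform convergence argument.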
Let us rewrite Problem (
5) in a more computationally tractable form. Point
is the maximum distant point from points
. Since
and
, we can rewrite Problem (
5) in the form
The feasible domain in (
7) is convex, and the objective function is convex. Therefore, we have a convex maximization problem, and special advanced methods [
12] can be used for solving (
7).
In our paper, we develop the iterative scheme of the third approach based on solving problems of type (
7). The scheme is as follows. Take an arbitrary first point
. The other points are determined according to the solutions to problem (
7) for
. Points are found sequentially: the new point is determined after finding the previous ones. This is why we call points
obtained on the basis of the iterative solution of problem (
7) sequentially maximum distant points, or simply sequentially distant points.
Notation:
are unit vectors with 1 in the j-th position and 0 elsewhere;
is the j-th component of vector ;
is the i-th vector in a sequence of n-dimensional vectors ;
is the dot (inner) product of vectors .
2. Allocation of Points in the Unit Ball
Assume that
X is the unit ball, that is,
In this case, Problem (
5) can be solved analytically. The obtained points are called ball sequentially distant points. We start with the problem of setting the
equidistant points in
B that is equivalent to inscribing a regular simplex in
B. The distance between points can be determined from (
2) with
,
Since the points are equidistant:
Due to the symmetry of
B, we can set
. Then, from (
2),
Since points
belong to the intersection of a plane orthogonal to
and a boundary of
B, we also can choose the point
as a point with the maximal number of zero components. Therefore, we set
. The distance
, and
. From these two equations and (
9), we obtain
. Now, let us repeat the same consideration for the
-dimensional ball centered at
and obtained as an intersection of the plane
and
B. Then, we determine
. After repeating this consideration similarly for the remaining cases, we obtain the final description of the equidistant points in the unit ball:
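As a cross-check, the same equidistant configuration can be produced by a standard embedding (an alternative to the coordinate-wise recursion above, included only as a sketch): take the n + 1 standard basis vectors of R^(n+1), center them at their centroid, and rescale each to unit norm. The points then lie in an n-dimensional hyperplane and are pairwise equidistant at sqrt(2(n+1)/n):

```python
import numpy as np

def simplex_on_unit_sphere(n):
    """n + 1 equidistant points on the unit sphere: center the standard
    basis of R^(n+1) at its centroid and rescale each point to unit norm.
    The points lie in an n-dimensional hyperplane (coordinate sum zero)."""
    e = np.eye(n + 1)
    centered = e - e.mean(axis=0)
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)

v = simplex_on_unit_sphere(3)   # 4 vertices of a regular tetrahedron
dists = [np.linalg.norm(v[i] - v[j])
         for i in range(4) for j in range(i + 1, 4)]
print(np.round(dists, 6))       # all equal to sqrt(2 * 4 / 3)
```

An orthonormal change of basis of the hyperplane carries these points into R^n without changing any distances.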
Let us switch now to the construction of the sequentially maximum distant points. Again, due to the symmetry of
B, the starting point
. The next point, which is denoted by
, is determined as
. Point
is a solution to the problem
Let us introduce the sets
Then, solving Problem (
11) is reduced to solving the following two problems:
and
Since
, the upper bound for the maximum value in (
12) is given by
and is achieved, for example, at point
. The value
. Therefore,
is a solution to problem (
12). Similarly,
, the upper bound
is also achieved at
and
. Hence, point
is a solution to problem (
13). The latter means that
is a solution to Problem (
11), and we can set
.
Consider now the problem
Determine sets
Problem (
14) is reduced to finding solutions to the three auxiliary problems
Again,
and
. In both cases, the maximum value 2 is attained at the point
. For the last auxiliary problem, we have
, that is, the corresponding maximum value cannot be greater than 2. Therefore, point
is a solution to Problem (
14).
So far, four points
are obtained. We are going to prove by induction that the same principle is true for
points:
. The basis of induction: the hypothesis is true for
. The induction step: let us prove that the hypothesis is true for the case
. Consider the problem
Define for
the following sets
Then, Problem (
15) decomposes into
problems
As above,
,
and
. Similarly,
. Therefore, we can take
as a solution to Problem (
15) and set
.
Let us consider now the next problem:
Using the same arguments as earlier, it is easy now to see that
and
. Hence, we can accept
as a solution to (
18) and set
.
Therefore, the first
points are determined as
The maximum distance between any two points in (
19) is equal to 2, and the minimum distance between any two points is equal to
.
Let us now determine point
. In order to do this, we have to solve the problem
Rewrite
f as follows:
The maximal value of the expression in (
21) over
B is obviously equal to 1 and is achieved at the origin
. From (
19) and (
20), we have
; hence,
. The maximum distance between any two points in the set
is equal to
, and the minimum distance is equal to 1.
The solution to the problem
is given by the point
, since
and
. Due to the symmetry of
B, the next
points are other vertices of cube
.
Finally, sequentially distant
points for the unit ball are given by
The maximum distance between any two points is obviously equal to 1. Due to the symmetry of the ball, the minimum distance can be determined as the distance between
and any point
. For example,
. Points in (
22) and (
23) are calculated without solving the corresponding optimization problems.
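The full configuration described above, the 2n signed unit vectors, the origin, and the 2^n cube vertices, can be generated directly; a sketch (the ordering of the points is ignored here, only the set matters):

```python
import itertools
import numpy as np

def ball_sequentially_distant_points(n):
    """The 2n + 1 + 2^n configuration from the text: the signed unit
    vectors, the origin, and the vertices of the cube with coordinates
    +-1/sqrt(n), which all lie on the unit sphere."""
    pts = [np.zeros(n)]
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        pts += [e, -e]
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        pts.append(np.array(signs) / np.sqrt(n))
    return np.array(pts)

pts3 = ball_sequentially_distant_points(3)
print(len(pts3))  # 2*3 + 1 + 2**3 = 15
# Every point except the origin lies on the unit sphere.
print(np.allclose(np.linalg.norm(pts3[1:], axis=1), 1.0))  # True
```
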
The above procedures can be generalized for the allocation of points in a general ball .
Case A. Generalization of the equidistant points. We add the center
to the set of points and obtain the following
ball sequentially distant points
with (
10)
The obtained points are not equidistant. The maximum distance between any two points is equal to
R, and the minimum distance is equal to
(see (
8)).
Case B. Ball sequentially distant points. These points are just a direct generalization of (
22),
The maximum distance is equal to
R, and the minimum distance is equal to
.
Case C. Ball sequentially distant points. Introduce cube
. Then, the points are determined as follows:
The maximum distance between any two points is equal to
R, and the minimum distance is equal to
.
Let us compare the allocation of the ball sequentially distant points
from (
26) without the center
and with a uniform distribution over the unit sphere. We take the minimum distance between two points as the measure of allocation efficiency: the greater the minimum distance, the better the allocation. The uniform distribution over the unit sphere is obtained by normalizing samples from the normal distribution with mean 0 and standard deviation 1. The minimum distance between two ball sequentially distant points is
for any
n. If we uniformly distribute 200 points over the unit sphere in a 100-dimensional case, then the minimum distance is on average 1.098 (after 10 repetitions). Therefore, the ball sequentially distant point allocation is about 29% better than the uniform allocation.
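The comparison can be reproduced in miniature (a sketch with a fixed random seed; the random value varies with the seed, and the figures quoted in the text come from the original experiment):

```python
import numpy as np

def min_pairwise_distance(pts):
    """Minimum Euclidean distance over all pairs of points."""
    gram = pts @ pts.T
    sq = np.diag(gram)
    d2 = sq[:, None] + sq[None, :] - 2.0 * gram
    iu = np.triu_indices(len(pts), k=1)
    return float(np.sqrt(np.maximum(d2[iu], 0.0).min()))

n = 100
# 2n ball sequentially distant points without the center: +-e_j.
seq = np.concatenate([np.eye(n), -np.eye(n)])
# 2n points uniform on the unit sphere: normalized standard normal draws.
rng = np.random.default_rng(0)
g = rng.standard_normal((2 * n, n))
uni = g / np.linalg.norm(g, axis=1, keepdims=True)

print(min_pairwise_distance(seq))  # sqrt(2) ~ 1.4142 for any n
print(min_pairwise_distance(uni))  # random; below sqrt(2) in practice
```
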
3. Mapping the Ball Sequentially Distant Points onto a Compact Convex Set
Let
X be a convex compact set defined by a system of inequalities
are convex and twice continuously differentiable functions, and
. We use the concept of an analytical center
[
13]. The point
is the solution to the convex optimization problem
, and
F is a twice continuously differentiable concave function. Since
, we have
, so the following ellipsoid can be defined:
Then,
. The Hessian
H can be represented as
,
U is an
orthonormal matrix with eigenvectors of
H as columns, and
is an
diagonal matrix with eigenvalues
on the main diagonal. Let us introduce new variables
. Then, in variables
y, ellipsoid
E in (
30) is the unit ball
. Let
be ball sequentially distant points in
y-space constructed in correspondence to the cases A (
), B (
) or C (
) from the previous section. In the
x-space, we define points
Images
of the ball equidistant points (
) are solutions to the problem
Images
of the ball sequentially distant points (cases B or C,
,
or
) are solutions to the problem
Example 1. Consider the following problem: f is the shifted drop-wave function [14], and the global minimum is . After solving the corresponding problem (
29)
, we determine the analytical center and the matrices. We use Case C from the previous section, so
for
. Points
are determined in (
27) and (
28) with
, points
, points
are stationary points determined by the CONOPT solver [
1] starting from points
, and
are the corresponding objective function values (see
Table 1).
We can see from
Table 1 that the global minimum point was determined three times. In the other six cases, different stationary points were found with two points
and
with the same value −0.656, and two points
and
with the value −0.885.
Geometrical interpretation of points
and the ellipsoid as a dashed curve are given in
Figure 1.
The advantage of the proposed approach consists in the following: well-allocated points in “narrow and arbitrary oriented” convex compact sets can be determined since the ellipsoid (
30) provides a good inner approximation of
X.
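A minimal sketch of the mapping for a case where the analytic center is available in closed form: for a box, the center of (29) is the midpoint, and the negative Hessian of F there is diagonal (the particular box, the sign convention for H, and all variable names are assumptions for illustration, not the paper's numerical procedure):

```python
import numpy as np

# X is the box [0, 2] x [0, 6] (a hypothetical example). For a box, the
# analytic center of F(x) = sum_j [log(x_j - l_j) + log(u_j - x_j)] is
# the midpoint, and -F''(xc) is diagonal with entries 8/(u_j - l_j)^2.
l = np.array([0.0, 0.0])
u = np.array([2.0, 6.0])
xc = (l + u) / 2.0
H = np.diag(8.0 / (u - l) ** 2)        # -F''(xc), positive definite

lam, U = np.linalg.eigh(H)             # H = U diag(lam) U^T

def to_x(y):
    """Map a point of the unit ball in y-space into the ellipsoid of (30)."""
    return xc + U @ (y / np.sqrt(lam))

# Images of the 2n points +-e_j (Case B without the center).
ys = np.concatenate([np.eye(2), -np.eye(2)])
xs = np.array([to_x(y) for y in ys])
# Each image satisfies (x - xc)^T H (x - xc) = 1, i.e., lies on the
# boundary of the inscribed ellipsoid, inside the box.
q = np.array([(x - xc) @ H @ (x - xc) for x in xs])
print(np.round(q, 6))  # all 1.0
```

The mapping stretches the unit-ball points by the inverse square roots of the eigenvalues, which is what adapts the allocation to a “narrow and arbitrarily oriented” set.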
Example 2. We extend the proposed approach to solve the following problem [15]: Set X is determined by the following system: Points
were determined according to Case B (
26). Points
were computed by (
31), and
is the analytical center of
X. Since the objective function is nonconvex and quadratic, the global minimum is achieved on the boundary of
X. Points
were obtained as intersections of rays
with the boundary of
X. Then, the multistart procedure started from points
was applied, and the global minimum
was found.
4. Allocation of an Arbitrary Given Number of Points
In the previous section, the number of allocated points was equal to n + 2, 2n + 1, or 2^n + 1. The allocation procedure was based on setting the points in a ball. In this section, we assume that the number of allocated points is p, which may differ from these values, and, more importantly, the allocation procedure is not connected to the ball. The price for such an approach is the sequential solution of a special global optimization problem.
Problem (
7) is to be iteratively solved as was announced in
Section 1. This problem is a problem of the global maximization of a convex quadratic function over a bounded polyhedral set. Hence, special methods can be used for the solution.
Let the number
p of allocated points be given. The first point
can be chosen arbitrarily. The remaining points are found by solving the global optimization problem
In solving the examples below, we used the solver SCIP [
16] for finding the global maximum in Problem (
32).
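The convexity exploited by the solver rests on a simple identity, which can be checked numerically (the points below are hypothetical):

```python
import numpy as np

# Identity behind the convex reformulation of Problem (32):
#   min_i ||x - x_i||^2 = ||x||^2 - max_i (2 x.x_i - ||x_i||^2),
# so the squared distance to the nearest allocated point is a convex
# quadratic minus a pointwise maximum of linear functions.
prev = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # hypothetical points
x = np.array([1.0, 0.0])

lhs = float(((x - prev) ** 2).sum(axis=1).min())
rhs = float(x @ x) - max(float(2.0 * (x @ xi) - xi @ xi) for xi in prev)
print(lhs, rhs)  # both 1.0
```

Introducing an auxiliary variable for the pointwise maximum turns the feasible region into a polyhedral set and the objective into a convex quadratic plus a linear term, exactly the structure handled by the global solver.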
Example 3. The number of allocated points , set . Since the feasible set is a polytope, it was decided to start from the vertex . In Figure 2, a geometrical interpretation of the allocated points is given. In
Table 2, the coordinates of vectors
are given, and
is the squared maximum distance from the current point to the previous ones.
Example 4. The number of allocated points , set . The starting vertex . The determined vertices are shown in Figure 3. Table 3 contains the coordinates of
and again the squared maximum distances (
) from the current point to the previously found ones.
Example 3 shows that vertices of the given polytope are not necessarily covered by points . The vertex is not covered.
In practice, it is enough to find a new point that is sufficiently far from the previous points. Hence, a good local solver can be used for finding the solution to Problem (
32). In the testing below, we used the IPOPT solver [
17] for this purpose. In the testing problems, the feasible set
X was a bounded polyhedral set
with
matrix
A. Vectors
were determined randomly in such a way that
. The first two points
and
are approximate solutions to the problem
where
. For solving Problem (
33), the SCIP solver was used with the solution time limit set to 30 s. The number of points was equal to 100. The solutions to the corresponding problems (
32) for
were obtained by the IPOPT solver. The last point,
, was obtained by the SCIP solver with the time limitation increased to 300 s. In
Table 4,
n is the number of variables,
m is the number of rows in matrix
A,
,
is the obtained maximum distance from the last point
to the previous ones, and T is the solving time in seconds. Testing was performed on an Intel Core i7-3610QM (2.3 GHz, 8 GB DDR3 memory).
In problems with five and ten variables, globally optimal solutions were found. For example, when , the diameter of X was equal to 1931.523, and the exact maximum distance from the 99 previous points to the point was equal to 608.201. In higher-dimensional problems, approximate solutions were determined.
5. Two Kinds of Multistart Strategy
We know that the feasible set
X can be covered by
p balls with centers at
and with radius
(see Problem (
5)). Consider the
p optimization problems
where
Let
,
be points obtained as a result of the application of the CONOPT solver to Problem (
34) using
as the starting points. Compare Problem (
34) with the following one:
Let
,
be solutions of (
35) obtained by the CONOPT solver applied
p times from points
as the starting points. Points
have a “local nature” because of constraints
. Therefore, we can make the following assumption: the set
contains more different local minima than the set
. It is not difficult to construct an example, in which all points
as well as points
are points of different local minima. The first multistart strategy is connected to the construction of the sets
. The second multistart strategy is connected to the construction of the sets
. However, in practice there can be a significant difference between these sets of points for particular cases. Let us consider the following examples.
Example 5. Consider the Bird problem: This problem has many local minima and two global minimum points, and with . For , sets and do not contain global minimum points. When , the set contains five different local minima, and one of them is a global minimum. The set contains four different local minima, and one of them is a global minimum. In total, the set contains six different points of minimum, and one of them is a global minimum. The set contains five local minima, and two of them are global minima. The set contains the same local minima as . In total, the set contains seven different local minima, and two of them are global minima.

Example 6. Consider the Branin problem: The global minimum is unique, , . When , the sets and do not contain the global minimum point. The set contains nine different local minima, and one of them is the global minimum. The set also contains nine different local minima, and one of them is the global minimum. Sets and do not coincide, and their union contains ten different local minima, one of which is the global minimum.

Example 7. Consider the egg crate problem: The global minimum is unique, , . The set contains five different local minima, and one of them is the global minimum. When , the sets do not contain the global minimum. As for the sets , they contain the global minimum for . The set contains twenty-five different local minima, and one of them is the global minimum. In comparison, the set contains eighteen different local minima, and one of them is the global minimum.

Example 8. Consider the Mishra problem: The problem has the unique global minimum , . The sets contain the global minimum for , while the set contains five different local minima, and one of them is global. The sets do not contain the global minima for . The corresponding radius of each covering ball for is equal to . Hence, the Mishra problem has very many “narrow” points of local minima.

Example 9. Consider the Price problem: The global minimum is unique, , .
The sets contain the global minimum for . The set contains the global minimum for .

Example 10. Consider the Shubert problem: There are many global minima, one of them being , . The sets start to contain a global minimum from . The set contains only two different local minima, and one of them is global. The sets contain a global minimum when , and all twenty-nine local minima of the set are different.

Example 11. Consider the Trefethen problem: The global minimum is unique, , . This problem has very many local minima. For example, the set consists of thirty different local minima, with no global minimum among them. The set contains twenty-eight new different local minima in addition to the set , again with no global minimum among them. Therefore, the union contains fifty-eight different local minima and no global minimum. Only for do the sets contain the global minimum. The set contains five hundred seventy different local minima and no global minimum. Each radius of the five hundred seventy balls, which cover the feasible set, is equal to .
In all considered examples, the following properties should be mentioned. As a rule, the sets need fewer points to detect a global minimum. Example 8 with the Mishra function provides a remarkable confirmation of this: only six points were used in the set to cover the global minimum, whereas even six hundred points were not enough to detect the global minimum in the case of the set . The price for such behaviour is that many points in the sets are found several times, in contrast to the sets . We also have to keep in mind that in Example 9 with the Price function, the situation is the opposite: thirteen points detect the global minimum for the set , and twenty-six points detect it for the set . The number of different local minimum points in the sets is usually larger than in the sets . Nevertheless, the local minimum points in the sets , though fewer in number, usually (not always) have lower objective function values.
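The mechanical difference between the two strategies can be sketched in one dimension, with an interval standing in for the covering ball and plain gradient descent standing in for CONOPT (all of this is an illustration under those assumptions, not the original implementation); in this toy case both variants find the same minima, because each "ball" already contains one:

```python
def local_min_projected(grad, x0, center=None, radius=None,
                        lr=0.01, iters=2000):
    """Gradient-descent stand-in for one local search; when center and
    radius are given, iterates are clipped back into the covering
    interval, mimicking the constrained searches of type (34)."""
    x = float(x0)
    for _ in range(iters):
        x -= lr * grad(x)
        if center is not None:
            x = min(max(x, center - radius), center + radius)
    return x

# f(x) = (x^2 - 1)^2 has two local (and global) minima, at x = -1 and 1.
grad = lambda x: 4.0 * x * (x * x - 1.0)

starts = [-2.0, 2.0]
unconstrained = [local_min_projected(grad, s) for s in starts]
constrained = [local_min_projected(grad, s, center=s, radius=1.5)
               for s in starts]
print([round(v, 4) for v in unconstrained])  # [-1.0, 1.0]
print([round(v, 4) for v in constrained])    # [-1.0, 1.0]
```

When a covering ball contains no stationary point, the constrained search stops on the ball's boundary, which is what gives the points of the first strategy their "local nature".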
Let us compare the sets
and
for all tested problems and for the same number of points
, that is, we compare the sets
and
. The results of the comparison are given in
Table 5. Column
shows the number
of the different local minima in the corresponding sets
, with
being the number of global minima among them. Similarly, column
shows the number
of different local minima in the sets
with the number
of global minima among them. Column New
shows the number of global minima in the sets
(new global minima). Column New
shows the number of local minima in the sets
(new local minima). Column
shows the total number
of different local minima and the total number
of different global minima obtained by determining both sets
and
. For example, for the Shubert problem, we have 13(3) in the column
, which means that the corresponding set
contains thirteen different local minimum points and three of them are global minimum points. In the column
, we have 18(3), which means that the corresponding set
contains eighteen different local minima and three of them are global minimum points. Column New
shows that one new global minimum point is contained in the set
in comparison to the set
, and column New
shows that the set
contains fourteen new local minimum points in comparison to the set
. Finally, in column
, we have 27(4), which means that twenty-seven different local minimum points were determined, and four of them are global minimum points.
Even assuming the differentiability of the objective function and the finiteness of the set of local minima, it is not possible to assess the number of local minima in advance. Therefore, we propose the following approach. Estimate the number p of local minima from additional practical considerations. Then, construct the set containing a good local minimum point or even a global minimum point. After that, construct the set to enlarge the number of local minima and to catch situations similar to the Price function. Due to the very high efficiency of the CONOPT solver, finding the sets and is not too computationally demanding. By using such a mixture of the two kinds of multistart strategy, we can obtain a practical assessment of the number of minima of the objective function. If the total number of determined local minima is not very large (for example, many of them are found several times), then we can conclude that we performed a good exploration of the objective function. Otherwise, we conclude that the objective function has a very complicated structure.
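One practical detail implicit in all the counts above: deciding when two numerically computed minimizers are "the same" requires a tolerance. A minimal bookkeeping sketch (the tolerance value and the sample points are assumptions):

```python
def distinct_points(points, tol=1e-4):
    """Count distinct minimizers up to a tolerance by rounding each
    coordinate to the nearest multiple of tol."""
    seen = set()
    for p in points:
        seen.add(tuple(round(c / tol) for c in p))
    return len(seen)

# Three runs of a local solver; the first two coincide up to 1e-4.
runs = [(1.00001, 2.0), (1.0, 2.00002), (-0.5, 0.5)]
print(distinct_points(runs))             # 2
print(distinct_points(runs, tol=1e-6))   # 3
```
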
6. Testing Sequentially Distant Points in Optimization Problems
We present the results of testing the comparative efficiency of using sequentially distant and randomly generated points in solving optimization problems. Three strategies, A, B, and C, based on the cases from
Section 2, are tested. The optimization problems consist of minimizing highly nonlinear functions over a box (parallelepiped). Firstly, the ball of maximum radius centered at the center of the parallelepiped is constructed. Secondly, for strategy A,
ball sequentially distant points corresponding to (
24)–(
25) are determined. For strategy B,
points based on (
26) are determined. For strategy C, we use the points (
28) plus the center of the parallelepiped, in total
points.
We used the multistart strategy with the generated points as the starting points. Strategies A, B, and C are compared with random strategies , , and of the corresponding sizes. In strategy , uniformly distributed points are generated; in strategy , the number of uniformly distributed points is ; and in strategy , the number of uniformly distributed points is . In all strategies, a parallel local search process based on the CONOPT solver was started.
In
Table 6,
Table 7,
Table 8 and
Table 9, the column “Duplicated Solutions” shows the number of points, which were found several times; the column “Different Solutions” shows the number of different found points; the column “Different Minimum Values” shows the number of different local minimum values among different solutions (i.e., there could be different local minimum points with the same objective value); the column “Record Value” shows the value of the objective function at the best point; in the column “Global Minimum,” the sign “+” means that the global minimum was found, otherwise the sign “−” is used; and the column “Time” shows the total solution time in seconds. Testing was performed on an Intel Core i7-3610QM computer (2.3 GHz, 8 GB DDR3 memory). All computations were done in GAMS Demo version.
Strategy C and its random counterpart were used only for dimensions n = 5 and n = 10, since they are of exponential complexity.
Griewank function. Consider the optimization problem
Global minimum
. Testing results are given in
Table 6. Properties of the Griewank function are studied in [
18].
Table 6.
Testing results for the Griewank function.
Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
---|---|---|---|---|---|---|---|
|
A | 7 | 0 | 7 | 7 | 0.473 | − | 0.531 |
| 7 | 0 | 7 | 7 | 0.418 | − | 0.500 |
B | 11 | 0 | 11 | 4 | 0.118 | − | 0.764 |
| 11 | 0 | 11 | 11 | 0.024 | − | 0.843 |
C | 33 | 0 | 33 | 33 | 0.000 | + | 2.482 |
| 33 | 0 | 33 | 27 | 0.000 | + | 2.559 |
|
A | 12 | 1 | 11 | 11 | 0.000 | + | 0.858 |
| 12 | 4 | 8 | 8 | 0.000 | + | 0.889 |
B | 21 | 0 | 21 | 21 | 0.000 | + | 1.388 |
| 21 | 5 | 16 | 13 | 0.000 | + | 1.373 |
C | 1025 | 512 | 513 | 208 | 0.000 | + | 98.109 |
| 1025 | 499 | 526 | 202 | 0.000 | + | 92.259 |
|
A | 52 | 38 | 14 | 14 | 0.000 | + | 4.680 |
| 52 | 51 | 1 | 1 | 0.000 | + | 3.573 |
B | 101 | 11 | 90 | 84 | 0.000 | + | 8.548 |
| 101 | 100 | 1 | 1 | 0.000 | + | 8.596 |
|
A | 102 | 91 | 11 | 11 | 0.000 | + | 8.938 |
| 102 | 101 | 1 | 1 | 0.000 | + | 9.142 |
B | 201 | 42 | 159 | 149 | 0.000 | + | 22.089 |
| 201 | 200 | 1 | 1 | 0.000 | + | 19.968 |
|
A | 302 | 184 | 118 | 76 | 0.000 | + | 38.923 |
| 302 | 301 | 1 | 1 | 0.000 | + | 37.768 |
B | 601 | 269 | 332 | 286 | 0.000 | + | 83.617 |
| 601 | 600 | 1 | 1 | 0.000 | + | 76.893 |
|
A | 502 | 398 | 104 | 71 | 0.000 | + | 76.877 |
| 502 | 501 | 1 | 1 | 0.000 | + | 76.581 |
B | 1001 | 595 | 406 | 332 | 0.000 | + | 169.198 |
| 1001 | 1000 | 1 | 1 | 0.000 | + | 138.054 |
Rastrigin function. Consider the optimization problem
Global minimum
. Testing results are given in
Table 7.
Let us make some comments on the results in
Table 7. A uniform distribution of the starting points happened to be very inefficient: the best solution is very far from the optimum. Take, for example, the case
. Strategy A found 302 different local minima with 11 different objective function values. Checking the list of local minimum points shows that there are 78 different local minimum points, with the best value being 0.995. Therefore, strategy A shows that there are quite a number of different local minima with objective value close to the optimal one. Formally, the same can be said about strategies
and
. These random strategies also found a large number of different local minima; however, the objective function values are very far from the optimal value.
Schwefel function. Consider the optimization problem
Global minimum
. Testing results are given in
Table 8.
Table 7.
Testing results for the Rastrigin function.
Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
---|---|---|---|---|---|---|---|
|
A | 7 | 0 | 7 | 7 | 0.995 | − | 0.780 |
| 7 | 0 | 7 | 7 | 18.904 | − | 0.515 |
B | 11 | 1 | 10 | 4 | 0.995 | − | 0.781 |
| 11 | 0 | 11 | 11 | 3.979 | − | 0.765 |
C | 33 | 1 | 32 | 23 | 0.000 | + | 2.527 |
| 33 | 0 | 33 | 27 | 17.909 | − | 2.480 |
|
A | 12 | 0 | 12 | 9 | 0.995 | − | 0.858 |
| 12 | 0 | 12 | 12 | 39.798 | − | 0.890 |
B | 21 | 2 | 19 | 5 | 0.000 | + | 1.576 |
| 21 | 0 | 21 | 21 | 39.798 | − | 1.638 |
C | 1025 | 53 | 972 | 162 | 0.000 | + | 101.790 |
| 1025 | 0 | 1025 | 680 | 21.889 | − | 96.971 |
|
A | 52 | 0 | 52 | 9 | 0.000 | + | 3.354 |
| 52 | 0 | 52 | 52 | 198.992 | − | 3.604 |
B | 101 | 16 | 85 | 7 | 0.000 | + | 8.549 |
| 101 | 0 | 101 | 101 | 198.992 | − | 8.347 |
|
A | 102 | 0 | 102 | 11 | 0.995 | − | 12.620 |
| 102 | 0 | 102 | 102 | 397.983 | − | 10.188 |
B | 201 | 43 | 158 | 6 | 0.000 | + | 21.013 |
| 201 | 0 | 201 | 200 | 397.983 | − | 20.124 |
|
A | 302 | 0 | 302 | 11 | 0.995 | − | 37.378 |
| 302 | 0 | 302 | 302 | 1193.949 | − | 39.503 |
B | 601 | 87 | 514 | 7 | 0.000 | + | 79.701 |
| 601 | 0 | 601 | 601 | 1193.949 | − | 74.911 |
|
A | 502 | 3 | 499 | 11 | 0.000 | + | 76.995 |
| 502 | 0 | 502 | 501 | 1989.915 | − | 36.331 |
B | 1001 | 153 | 842 | 7 | 0.000 | + | 171.975 |
| 1001 | 0 | 1001 | 1000 | 1989.915 | − | 168.044 |
Table 8.
Testing results for the Schwefel function.
Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
---|---|---|---|---|---|---|---|
|
A | 7 | 0 | 7 | 7 | 929.319 | − | 0.515 |
| 7 | 0 | 7 | 7 | 475.270 | − | 0.531 |
B | 11 | 1 | 10 | 7 | 238.915 | − | 0.764 |
| 11 | 0 | 11 | 11 | 455.533 | − | 0.749 |
C | 33 | 1 | 32 | 22 | 118.438 | − | 2.199 |
| 33 | 0 | 33 | 31 | 455.533 | − | 2.480 |
|
A | 12 | 0 | 12 | 12 | 1562.522 | − | 0.905 |
| 12 | 0 | 12 | 12 | 1383.337 | − | 0.904 |
B | 21 | 4 | 17 | 7 | 0.000 | + | 1.388 |
| 21 | 0 | 21 | 21 | 1223.898 | − | 1.591 |
C | 1025 | 165 | 860 | 105 | 0.000 | + | 99.997 |
| 1025 | 5 | 1020 | 872 | 651.829 | − | 100.121 |
|
A | 52 | 0 | 52 | 52 | 2349.118 | − | 3.900 |
| 52 | 0 | 52 | 52 | 8164.279 | − | 4.040 |
B | 101 | 25 | 76 | 23 | 0.000 | + | 8.502 |
| 101 | 0 | 101 | 101 | 7859.295 | − | 7.878 |
|
A | 102 | 0 | 102 | 102 | 296.108 | − | 9.016 |
| 102 | 0 | 102 | 102 | 16,993.912 | − | 8.611 |
B | 201 | 55 | 146 | 31 | 0.000 | + | 21.231 |
| 201 | 0 | 201 | 201 | 16,948.095 | − | 18.441 |
|
A | 302 | 0 | 302 | 294 | 5909.961 | − | 33.119 |
| 302 | 0 | 302 | 302 | 53,468.437 | − | 32.479 |
B | 601 | 215 | 386 | 39 | 0.000 | + | 68.984 |
| 601 | 0 | 601 | 601 | 50,650.766 | − | 69.732 |
|
A | 502 | 3 | 499 | 483 | 214.513 | − | 70.747 |
| 502 | 0 | 502 | 502 | 89,847.498 | − | 74.974 |
B | 1001 | 328 | 673 | 43 | 0.000 | + | 168.498 |
| 1001 | 0 | 1001 | 1000 | 89,104.515 | − | 167.998 |
Again, pure random strategies show the worst results.
Levy function. Consider the optimization problem
Global minimum
. Testing results are given in
Table 9.
Table 9.
Testing results for the Levy function.
Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Value | Record Value | Global Minimum (+/−) | Time (s) |
---|---|---|---|---|---|---|---|
|
A | 7 | 0 | 7 | 5 | 0.001 | − | 0.546 |
| 7 | 0 | 7 | 7 | 0.337 | − | 0.515 |
B | 11 | 1 | 10 | 6 | 1.064 | − | 0.858 |
| 11 | 0 | 11 | 10 | 1.064 | − | 0.764 |
C | 33 | 0 | 33 | 28 | 0.0005 | − | 2.465 |
| 33 | 0 | 33 | 31 | 0.0004 | − | 2.446 |
|
A | 12 | 1 | 11 | 4 | 1.064 | − | 0.858 |
| 12 | 0 | 12 | 12 | 0.937 | − | 0.983 |
B | 21 | 1 | 20 | 5 | 1.064 | − | 1.575 |
| 21 | 0 | 21 | 19 | 0.0005 | − | 1.388 |
C | 1025 | 0 | 1025 | 252 | 0.0004 | − | 94.849 |
| 1025 | 1 | 1024 | 568 | 0.0004 | − | 94.817 |
|
A | 52 | 1 | 51 | 7 | 0.0005 | − | 4.197 |
| 52 | 0 | 52 | 51 | 0.0005 | − | 3.728 |
B | 101 | 1 | 100 | 4 | 1.064 | − | 8.502 |
| 101 | 0 | 101 | 98 | 0.001 | − | 8.377 |
|
A | 102 | 1 | 101 | 4 | 0.0005 | − | 8.486 |
| 102 | 0 | 102 | 101 | 0.0005 | − | 8.361 |
B | 201 | 1 | 200 | 5 | 1.064 | − | 18.939 |
| 201 | 0 | 201 | 199 | 0.002 | − | 19.355 |
|
A | 302 | 1 | 301 | 6 | 0.937 | − | 33.665 |
| 302 | 0 | 302 | 302 | 1.064 | − | 40.778 |
B | 601 | 1 | 600 | 12 | 1.064 | − | 76.332 |
| 601 | 0 | 601 | 601 | 1.064 | − | 75.629 |
|
A | 502 | 1 | 501 | 7 | 1.064 | − | 74.210 |
| 502 | 0 | 502 | 502 | 1.064 | − | 94.257 |
B | 1001 | 1 | 1000 | 20 | 0.0005 | − | 168.590 |
| 1001 | 0 | 1001 | 1000 | 1.064 | − | 171.942 |
The Levy function was the most difficult test case for all strategies: none of them could determine the global minimum. Nevertheless, strategies A and B are relatively efficient in high-dimensional cases.
Overall, the testing showed that strategy B was the most effective, in terms of both finding the best solution and computational effort. This effect can be explained as follows: strategy B explores the whole feasible set more efficiently than the others.