Article

Allocation of Starting Points in Global Optimization Problems

Oleg Khamisov, Eugene Semenkin and Vladimir Nelyub
1. Department of Applied Mathematics, Melentiev Energy Systems Institute, Lermontov St. 130, 664033 Irkutsk, Russia
2. Scientific and Educational Center "Artificial Intelligence Technologies", Bauman Moscow State Technical University, 2nd Baumanskaya St., 5, 105005 Moscow, Russia
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(4), 606; https://doi.org/10.3390/math12040606
Submission received: 18 December 2023 / Revised: 11 February 2024 / Accepted: 12 February 2024 / Published: 18 February 2024

Abstract:
We propose new multistart techniques for finding good local solutions in global optimization problems. The objective function is assumed to be differentiable, and the feasible set is a convex compact set. The techniques are based on finding maximum distant points on the feasible set. A special global optimization problem is used to determine the maximum distant points. Preliminary computational results are given.
MSC:
49M37; 90C30; 65K05

1. Introduction

Within the concept of a "smart" digital environment, methods of mathematical modeling and machine learning are actively used to design and implement digital twins of complex technical, technological, and organizational systems. Automating the selection of effective structures and parameters for the corresponding models of these digital twins usually requires solving hard global optimization problems. The effectiveness of global optimization methods depends significantly on the choice of the initial set of solutions, which are subsequently used to find the global optimum or a good local optimum that approximates the global one. This is especially important when applying global optimization methods to continuously differentiable functions of real variables, because in this case solutions with optimality guarantees can be obtained by the rigorous apparatus of applied mathematics.
Let a differentiable function $f : \mathbb{R}^n \to \mathbb{R}$ and a convex compact set $X \subset \mathbb{R}^n$ with a nonempty interior, $\operatorname{int}(X) \neq \emptyset$, be given. The problem considered in this paper consists in finding a good local minimum of $f$ over $X$ using the multistart strategy. In order to achieve this, it is necessary to allocate $p$ starting points $x^1, \ldots, x^p$ in $X$ such that they cover $X$ "more or less uniformly". The proposed multistart strategy is based on the CONOPT solver [1].
Various uniform sampling procedures can be used for this purpose. A survey of special methods for allocating points on spheres is presented in [2]. If $X$ is a polytope, sampling based on a simplicial decomposition of $X$ can be applied, as described in [3]. In [4], a class of Markov chain Monte Carlo (MCMC) algorithms for distributing points on polytopes is described. In the more general case, when $X$ is a convex body, a random walk strategy [5] based on the MCMC technique is successfully applied; a brief review of different kinds of random walks can be found in [4]. However, uniform random sampling algorithms are of exponential complexity [6], and uniform sampling is usually employed for the approximate calculation of an integral or of the volume of $X$, whereas we are interested in finding a good local solution in global optimization problems. The most attractive feature of uniform sampling is that a global minimum can be found with probability one as the sample size tends to infinity. However, due to the specifics of high-dimensional spaces [7], random sampling is not efficient from a practical point of view. Nevertheless, uniform sampling continues to draw attention, and investigations on this topic are of serious interest [8]. Approaches based on the $p$-location problem [9] and the $p$-center methodology [10] can also be used for solving the problems considered in our paper. In this paper, however, we aim to check the efficiency of a global optimization approach.
In our paper, we propose a procedure for a good allocation of points on a convex compact set $X$. The idea is to use a special auxiliary global optimization problem for the allocation: maximizing the squared Euclidean norm plus a linear term over a convex compact set. Because of the particular form of this problem, it can be solved to global optimality for a reasonably large number of variables, for example, for $n \approx 30$–$50$. In doing so, we achieve a better covering of the set $X$ by a family of points. We believe that a combination of the proposed approach with advanced metaheuristics [11] will be of serious practical importance.
The first approach. The most attractive statement of the problem can be formalized as follows:
$$t \to \max, \quad t = \|x^i - x^j\|^2, \quad x^i, x^j \in X, \quad 1 \le i < j \le p. \tag{1}$$
Problem (1) means that it is necessary to allocate $p$ points such that the distance between any two of them is the same and as large as possible. In this case, the set $\{x^1, \ldots, x^p\}$ is called a set of equidistant points. However, it is well known that Problem (1) is solvable only if $p \le n+1$. When $p = n+1$, the points $\{x^1, \ldots, x^{n+1}\}$ are the vertices of a regular simplex. If $\|x^i - x^j\| = \delta$, $1 \le i < j \le n+1$, then all points $x^i$ belong to the sphere of radius
$$R = \delta \sqrt{\frac{n}{2(n+1)}} \tag{2}$$
centered at
$$x^c = \frac{1}{n+1} \sum_{j=1}^{n+1} x^j.$$
However, in many applications, it is necessary to allocate more than $n+1$ points.
The second approach. We move to another problem of the following form:
$$\min_{1 \le i < j \le p} \{\|x^i - x^j\|^2\} \to \max, \quad x^i, x^j \in X. \tag{3}$$
We want to allocate $p$ points such that the minimum distance between any two of them is as large as possible. Problem (3) always has a solution, since the objective function is continuous and the feasible set is nonempty and compact. The objective function is nonsmooth, but this can be avoided by the standard reduction of Problem (3) to the following one:
$$t \to \max, \quad t \le \|x^i - x^j\|^2, \quad x^i, x^j \in X, \quad i, j = 1, \ldots, p, \; j > i. \tag{4}$$
Two main difficulties are unavoidable when solving Problem (4). Firstly, the problem is large: it has $pn$ variables (plus the auxiliary variable $t$) and $\frac{p(p-1)}{2}$ distance constraints. Secondly, the feasible domain is nonconvex. Hence, we have to overcome the nonconvexity of the feasible domain, and we are seriously restricted in the dimension $n$.
The third approach. Given $p-1$ points $v^1, \ldots, v^{p-1} \in X$, find the point $v^p$ as a solution to the problem
$$\varphi_p(x) = \min_{1 \le j \le p-1} \{\|x - v^j\|^2\} \to \max, \quad x \in X. \tag{5}$$
As a result, the set $X$ is covered by $p$ balls centered at $v^1, \ldots, v^p$ with radius $r_p = \sqrt{\varphi_p(v^p)}$. We start from an arbitrary point $v^1 \in X$ and sequentially determine the points $v^2, v^3, \ldots$ and the functions $\varphi_2, \varphi_3, \ldots$ according to (5). Let $\theta(x) = 0 \; \forall x \in X$ be the identically zero function on $X$. The theoretical foundation of the approach based on solving Problem (5) is given by the following theorem.
Theorem 1.
The sequence of functions $\varphi_p$, $p = 2, 3, \ldots$ uniformly converges to the function $\theta$ over $X$.
Proof. 
The functions $\varphi_p$, $p = 2, \ldots$ are Lipschitz functions with the same Lipschitz constant; therefore, $\varphi_p$, $p = 2, \ldots$ is an equicontinuous sequence of functions. Since $X$ is a compact set, $\varphi_p(x) \le D(X)^2 < +\infty$, where $D(X)$ is the diameter of $X$, so the functions $\varphi_p$, $p = 2, \ldots$ are uniformly bounded. By construction, $\varphi_p(x) \le \varphi_{p-1}(x) \; \forall x \in X$. Hence, due to the Arzelà–Ascoli theorem, $\varphi_p$, $p = 2, \ldots$ is a sequence of functions uniformly convergent to a continuous function $\eta$ with $\eta(x) \le \varphi_p(x) \; \forall x \in X$, $p = 2, \ldots$. By construction, $\varphi_p(v^i) = 0 \; \forall i < p$; hence,
$$\eta(v^p) = 0 \quad \forall p. \tag{6}$$
Assume that $\lim_{p \to \infty} \varphi_p(v^p) = \rho > 0$. Let $v^{p_j}$, $j = 1, 2, \ldots$ be a subsequence convergent to a point $v$ such that $\eta(v) = \rho$. From (6), due to the continuity of $\eta$, we have $\lim_{j \to \infty} \eta(v^{p_j}) = \eta(v) = 0$, a contradiction, which proves the theorem. □
Hence, we can theoretically achieve a covering of $X$ by balls of arbitrarily small radius. In practice, especially in high dimensions, we restrict ourselves to a reasonable value of $p$.
Let us rewrite Problem (5) in a more computationally tractable form. The point $v^p$ is the maximum distant point from the points $v^j$, $j = 1, \ldots, p-1$. Since $\|x - v^j\|^2 = \|x\|^2 - 2x^\top v^j + \|v^j\|^2$ and
$$\min_{1 \le j \le p-1} \{\|x\|^2 - 2x^\top v^j + \|v^j\|^2\} = \|x\|^2 + \min_{1 \le j \le p-1} \{\|v^j\|^2 - 2x^\top v^j\},$$
we can rewrite Problem (5) in the form
$$\|x\|^2 + t \to \max, \quad t \le \|v^j\|^2 - 2x^\top v^j, \; j = 1, \ldots, p-1, \quad x \in X. \tag{7}$$
The feasible domain in (7) is convex, and the objective function is convex. Therefore, we have a convex maximization problem, and special advanced methods [12] can be used for solving (7).
In our paper, we develop the iterative scheme of the third approach based on solving problems of type (7). The description is as follows. Take an arbitrary first point $v^1$. The other points are determined as solutions to Problem (7) for $p = 2, 3, \ldots$. The points are found sequentially: a new point is determined after all previous ones have been found. This is why we call the points $v^1, v^2, \ldots, v^p$ obtained on the basis of the iterative solution of Problem (7) sequentially maximum distant points, or simply sequentially distant points.
Notation:
$e^j$, $j = 1, \ldots, n$ are the unit vectors with 1 in the $j$-th position and 0 elsewhere;
$x_j$ is the $j$-th component of a vector $x \in \mathbb{R}^n$;
$x^i$ is the $i$-th vector in a sequence of $n$-dimensional vectors $x^1, \ldots, x^i, \ldots$;
$x^\top y$ is the dot (inner) product of vectors $x, y \in \mathbb{R}^n$.

2. Allocation of Points in the Unit Ball

Assume that $X$ is the unit ball, that is,
$$X = B = \{x \in \mathbb{R}^n : \|x\| \le 1\}.$$
In this case, Problem (5) can be solved analytically. The obtained points are called ball sequentially distant points. We start with the problem of setting $n+1$ equidistant points in $B$, which is equivalent to inscribing a regular simplex in $B$. The distance between the points can be determined from (2) with $R = 1$:
$$\delta = \sqrt{\frac{2(n+1)}{n}} = \sqrt{2}\sqrt{1 + \frac{1}{n}}. \tag{8}$$
Since the points are equidistant and lie on the unit sphere:
$$\|x^i - x^j\|^2 = \|x^i - x^k\|^2 \;\Rightarrow\; (x^k - x^j)^\top x^i = 0, \quad 1 \le i < j < k \le n+1. \tag{9}$$
Due to the symmetry of $B$, we can set $x^1 = e^1 = (1, 0, \ldots, 0)$. Then, from (9),
$$x^j_1 = x^k_1, \quad 2 \le j < k \le n+1.$$
Since the points $x^j$, $j = 2, \ldots, n+1$ belong to the intersection of a hyperplane orthogonal to $x^1$ with the boundary of $B$, we can choose $x^2$ as a point with the maximal number of zero components; therefore, we set $x^2_l = 0$, $l = 3, \ldots, n$. The distance condition gives $\|x^1 - x^2\|^2 = (1 - x^2_1)^2 + (x^2_2)^2 = \delta^2$, together with $(x^2_1)^2 + (x^2_2)^2 = 1$. From these two equations and (9), we obtain $x^j_1 = -\frac{1}{n}$, $j = 2, \ldots, n+1$, and $x^2_2 = \frac{\sqrt{(n-1)(n+1)}}{n}$. Now, let us repeat the same consideration for the $(n-1)$-dimensional ball obtained as the intersection of the plane $\{x \in \mathbb{R}^n : x_1 = -\frac{1}{n}\}$ with $B$. Then, we determine $x^3 = \left(-\frac{1}{n}, \; -\sqrt{\frac{n+1}{n}} \cdot \frac{1}{\sqrt{n(n-1)}}, \; \sqrt{\frac{n+1}{n}} \cdot \sqrt{\frac{n-2}{n-1}}, \; 0, \ldots, 0\right)$. After repeating this consideration for the remaining cases, we obtain the final description of the equidistant points in the unit ball:
$$x^k_j = \begin{cases} -\sqrt{\dfrac{n+1}{n}} \cdot \dfrac{1}{\sqrt{(n-j+2)(n-j+1)}}, & 1 \le j < k, \\[6pt] \sqrt{\dfrac{n+1}{n}} \cdot \sqrt{\dfrac{n-k+1}{n-k+2}}, & j = k, \\[6pt] 0, & k < j \le n, \end{cases} \qquad k = 1, \ldots, n+1. \tag{10}$$
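Formula (10) is straightforward to implement. The following sketch (ours, not part of the paper; NumPy, with $n = 5$ chosen arbitrarily) builds the vertices and checks that they lie on the unit sphere at the pairwise distance $\delta$ from (8):

```python
import numpy as np

def simplex_in_unit_ball(n):
    # Vertices x^1, ..., x^{n+1} of a regular simplex inscribed in the
    # unit ball, following formula (10); rows are vertices.
    c = np.sqrt((n + 1) / n)
    X = np.zeros((n + 1, n))
    for k in range(1, n + 2):          # vertex index k = 1, ..., n+1
        for j in range(1, n + 1):      # coordinate index j = 1, ..., n
            if j < k:
                X[k - 1, j - 1] = -c / np.sqrt((n - j + 2) * (n - j + 1))
            elif j == k:
                X[k - 1, j - 1] = c * np.sqrt((n - k + 1) / (n - k + 2))
    return X

n = 5
X = simplex_in_unit_ball(n)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
print(np.allclose(np.linalg.norm(X, axis=1), 1.0))                          # on the sphere
print(np.allclose(D[np.triu_indices(n + 1, 1)], np.sqrt(2 * (n + 1) / n)))  # delta from (8)
```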
Let us now switch to the construction of the sequentially maximum distant points. Again, due to the symmetry of $B$, the starting point is $v^1 = e^1$. The next point, denoted by $v^{n+1}$, is determined as $v^{n+1} = \arg\max\{\|x - v^1\|^2 : x \in B\} = -e^1$. The point $v^2$ is a solution to the problem
$$\min\{\|x - v^1\|^2, \|x - v^{n+1}\|^2\} \to \max, \quad x \in B. \tag{11}$$
Let us introduce the sets
$$X_{21} = \{x \in B : \|x - v^1\| \le \|x - v^{n+1}\|\} = \{x \in B : x_1 \ge 0\},$$
$$X_{22} = \{x \in B : \|x - v^{n+1}\| \le \|x - v^1\|\} = \{x \in B : x_1 \le 0\}.$$
Then, solving Problem (11) reduces to solving the following two problems:
$$f_{21}(x) = \|x - v^1\|^2 \to \max, \quad x \in X_{21} \tag{12}$$
and
$$f_{22}(x) = \|x - v^{n+1}\|^2 \to \max, \quad x \in X_{22}. \tag{13}$$
Since $f_{21}(x) = \|x\|^2 - 2x_1 + 1 \le 2 - 2x_1$ $\forall x \in B$, the upper bound for the maximum value in (12) is given by $\max\{2 - 2x_1 : x \in X_{21}\} = 2$ and is achieved, for example, at the point $e^2$. The value $f_{21}(e^2) = 2$; therefore, $e^2$ is a solution to Problem (12). Similarly, $f_{22}(x) = \|x\|^2 + 2x_1 + 1 \le 2 + 2x_1$ $\forall x \in B$; the upper bound $\max\{2 + 2x_1 : x \in X_{22}\} = 2$ is also achieved at $e^2$, and $f_{22}(e^2) = 2$. Hence, the point $e^2$ is a solution to Problem (13). The latter means that $e^2$ is a solution to Problem (11), and we can set $v^2 = e^2$.
Consider now the problem
$$\min\{\|x - v^1\|^2, \|x - v^2\|^2, \|x - v^{n+1}\|^2\} \to \max, \quad x \in B. \tag{14}$$
Determine the sets
$$X_{31} = \{x \in B : \|x - v^1\|^2 \le \|x - v^2\|^2, \; \|x - v^1\|^2 \le \|x - v^{n+1}\|^2\} = \{x \in B : x_1 - x_2 \ge 0, \; x_1 \ge 0\},$$
$$X_{32} = \{x \in B : \|x - v^2\|^2 \le \|x - v^1\|^2, \; \|x - v^2\|^2 \le \|x - v^{n+1}\|^2\} = \{x \in B : x_2 \ge |x_1|\},$$
$$X_{33} = \{x \in B : \|x - v^{n+1}\|^2 \le \|x - v^1\|^2, \; \|x - v^{n+1}\|^2 \le \|x - v^2\|^2\} = \{x \in B : x_1 + x_2 \le 0, \; x_1 \le 0\}.$$
Problem (14) reduces to finding solutions to the three auxiliary problems
$$f_{3i}(x) = \|x - v^i\|^2 \to \max, \quad x \in X_{3i}, \quad i = 1, 2,$$
$$f_{33}(x) = \|x - v^{n+1}\|^2 \to \max, \quad x \in X_{33}.$$
Again, $f_{31}(x) \le 2 - 2x_1$ $\forall x \in X_{31}$ and $f_{33}(x) \le 2 + 2x_1$ $\forall x \in X_{33}$. In both cases, the maximum value 2 is attained at the point $-e^2$. For the remaining auxiliary problem, we have $f_{32}(x) \le 2 - 2x_2$ $\forall x \in X_{32}$; that is, the corresponding maximum value cannot be greater than 2. Therefore, the point $v^{n+2} = -e^2$ is a solution to Problem (14).
So far, four points $v^i = e^i$, $v^{n+i} = -e^i$, $i = 1, 2$ have been obtained. We are going to prove by induction that the same principle holds for $2n$ points: $v^i = e^i$, $v^{n+i} = -e^i$, $i = 1, \ldots, n$. The basis of induction: the hypothesis is true for $k = 2$. The induction step: let us prove that the hypothesis is true for the case $k+1$. Consider the problem
$$\min_{1 \le i \le k} \{\|x - v^i\|^2, \|x - v^{n+i}\|^2\} \to \max, \quad x \in B. \tag{15}$$
Define, for $i \in K = \{1, \ldots, k\}$, the following sets:
$$X_{k+1,i} = \{x \in B : \|x - v^i\|^2 \le \|x - v^j\|^2, \; j \in K \setminus \{i\}, \;\; \|x - v^i\|^2 \le \|x - v^{n+j}\|^2, \; j \in K\},$$
$$X_{k+1,n+i} = \{x \in B : \|x - v^{n+i}\|^2 \le \|x - v^j\|^2, \; j \in K, \;\; \|x - v^{n+i}\|^2 \le \|x - v^{n+j}\|^2, \; j \in K \setminus \{i\}\}.$$
Then, Problem (15) disintegrates into $2k$ problems
$$f_{k+1,i}(x) = \|x - v^i\|^2 \to \max, \quad x \in X_{k+1,i} = \{x \in B : x_i \ge 0, \; x_i \ge |x_j|, \; j \in K \setminus \{i\}\}, \tag{16}$$
$$f_{k+1,n+i}(x) = \|x - v^{n+i}\|^2 \to \max, \quad x \in X_{k+1,n+i} = \{x \in B : x_i \le 0, \; -x_i \ge |x_j|, \; j \in K \setminus \{i\}\}. \tag{17}$$
As above, $f_{k+1,i}(x) \le 2 - 2x_i \le 2$ $\forall x \in X_{k+1,i}$, $i \in K$, with $f_{k+1,i}(e^{k+1}) = 2$ and $e^{k+1} \in X_{k+1,i}$ $\forall i \in K$. Similarly, $f_{k+1,n+i}(x) \le 2 + 2x_i \le 2$ $\forall x \in X_{k+1,n+i}$, $i \in K$. Therefore, we can take $e^{k+1}$ as a solution to Problem (15) and set $v^{k+1} = e^{k+1}$.
Let us now consider the next problem:
$$f_{k+2}(x) = \min\left\{\min_{1 \le j \le k+1} \|x - v^j\|^2, \; \min_{1 \le j \le k} \|x - v^{n+j}\|^2\right\} \to \max, \quad x \in B. \tag{18}$$
Using the same arguments as earlier, it is easy to see that $f_{k+2}(x) \le 2$ $\forall x \in B$ and $f_{k+2}(-e^{k+1}) = 2$. Hence, we can accept $-e^{k+1}$ as a solution to (18) and set $v^{n+k+1} = -e^{k+1}$.
Therefore, the first $2n$ points are determined as
$$v^i = e^i, \quad v^{n+i} = -e^i, \quad i = 1, \ldots, n. \tag{19}$$
The maximum distance between any two points in (19) is equal to 2, and the minimum distance between any two points is equal to $\sqrt{2}$.
Let us now determine the point $v^{2n+1}$. In order to do this, we have to solve the problem
$$f_{2n+1}(x) = \min\left\{\min_{1 \le j \le n} \|x - v^j\|^2, \; \min_{1 \le j \le n} \|x - v^{n+j}\|^2\right\} \to \max, \quad x \in B. \tag{20}$$
Rewrite $f_{2n+1}$ as follows, using $\|x\|^2 \le \|x\|_\infty \|x\|_1$:
$$f_{2n+1}(x) = \min_{1 \le j \le n} \min\{\|x - v^j\|^2, \|x - v^{n+j}\|^2\} = \min_{1 \le j \le n} \left(\|x\|^2 + 1 - 2|x_j|\right) = \|x\|^2 + 1 - 2\max_{1 \le j \le n}|x_j| = \|x\|^2 + 1 - 2\|x\|_\infty \le \|x\|_\infty \|x\|_1 + 1 - 2\|x\|_\infty = (\|x\|_1 - 2)\|x\|_\infty + 1. \tag{21}$$
The maximal value of the expression in (21) over $B$ is equal to 1 and is achieved at the origin $0 = (0, \ldots, 0)$. From (19) and (20), we have $f_{2n+1}(0) = 1$; hence, $v^{2n+1} = 0$. The maximum distance between any two points in the set $\{v^i, v^{n+i}, \; i = 1, \ldots, n, \; v^{2n+1}\}$ is equal to 2, and the minimum distance is equal to 1.
The solution to the problem
$$f_{2n+2}(x) = \min\left\{\min_{1 \le j \le n} \|x - v^j\|^2, \; \min_{1 \le j \le n} \|x - v^{n+j}\|^2, \; \|x\|^2\right\} \to \max, \quad x \in B$$
is given by the point $v^{2n+2} = \left(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}\right)$, since $f_{2n+2}(v^{2n+2}) = 1$ and $f_{2n+2}(x) \le 1$ $\forall x \in B$. Due to the symmetry of $B$, the next $2^n - 1$ points are the other vertices of the cube $\tilde{C} = \{x \in \mathbb{R}^n : -\frac{1}{\sqrt{n}} \le x_j \le \frac{1}{\sqrt{n}}, \; j = 1, \ldots, n\}$.
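As a quick numeric illustration (ours; $n = 10$ is an arbitrary choice), the squared distance from the cube vertex $v^{2n+2}$ to the nearest of the points in (22) indeed equals 1 and is attained at the origin:

```python
import numpy as np

n = 10
prev = np.vstack([np.eye(n), -np.eye(n), np.zeros(n)])   # points (22): +/- e^i and 0
v = np.ones(n) / np.sqrt(n)                              # cube vertex v^{2n+2}
print(min(np.linalg.norm(v - q)**2 for q in prev))       # -> 1.0 (attained at the origin)
```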
Finally, the $2n + 1 + 2^n$ sequentially distant points for the unit ball are given by
$$v^i = e^i, \quad v^{n+i} = -e^i, \quad i = 1, \ldots, n, \quad v^{2n+1} = (0, \ldots, 0), \tag{22}$$
$$v^{2n+1+i}, \; i = 1, \ldots, 2^n, \; \text{are the vertices of the cube } \tilde{C}. \tag{23}$$
The maximum distance between any two points is equal to 2. Due to the symmetry of the ball, the minimum distance can be determined as the distance between $v^{2n+2}$ and any point $v^j$, $j = 1, \ldots, n$; for example, $\|v^{2n+2} - v^1\| = \sqrt{2}\sqrt{1 - \frac{1}{\sqrt{n}}}$. The points in (22) and (23) are calculated without solving the corresponding optimization problems.
The above procedures can be generalized for the allocation of points in a general ball $B(x^c, R) = \{x \in \mathbb{R}^n : \|x - x^c\| \le R\}$.
Case A. Generalization of the $n+1$ equidistant points. We add the center $x^c$ to the set of points and obtain, using (10), the following $n+2$ ball sequentially distant points $v^1, \ldots, v^{n+2}$:
$$v^k_j = \begin{cases} x^c_j - R\sqrt{\dfrac{n+1}{n}} \cdot \dfrac{1}{\sqrt{(n-j+2)(n-j+1)}}, & 1 \le j < k, \\[6pt] x^c_j + R\sqrt{\dfrac{n+1}{n}} \cdot \sqrt{\dfrac{n-k+1}{n-k+2}}, & j = k, \\[6pt] x^c_j, & k < j \le n, \end{cases} \qquad k = 1, \ldots, n+1, \tag{24}$$
$$v^{n+2} = x^c. \tag{25}$$
The obtained points are not equidistant. The maximum distance between any two points is equal to $R\sqrt{2}\sqrt{1 + \frac{1}{n}}$ (see (8)), and the minimum distance is equal to $R$ (the distance from the center to the simplex vertices).
Case B. Ball sequentially distant $2n+1$ points. These points are a direct generalization of (22):
$$v^i = x^c + R e^i, \quad v^{n+i} = x^c - R e^i, \quad i = 1, \ldots, n, \quad v^{2n+1} = x^c. \tag{26}$$
The maximum distance is equal to R, and the minimum distance is equal to R 2 .
Case C. Ball sequentially distant $2n + 2^n + 1$ points. Introduce the cube $\hat{C} = \{x \in \mathbb{R}^n : x^c_j - \frac{R}{\sqrt{n}} \le x_j \le x^c_j + \frac{R}{\sqrt{n}}, \; j = 1, \ldots, n\}$. Then, the points are determined as follows:
$$v^i = x^c + R e^i, \quad v^{n+i} = x^c - R e^i, \quad i = 1, \ldots, n, \quad v^{2n+1} = x^c, \tag{27}$$
$$v^{2n+1+i}, \; i = 1, \ldots, 2^n, \; \text{are the vertices of the cube } \hat{C}. \tag{28}$$
The maximum distance between any two points is equal to R, and the minimum distance is equal to R 2 1 1 n .
Let us compare the allocation of the $2n$ ball sequentially distant points from (26) without the center $v^{2n+1}$ with a uniform distribution over the unit sphere. We take the minimum distance between two points as a measure of allocation efficiency: the greater the minimum distance, the better the allocation. The uniform distribution over the unit sphere is obtained by normalizing vectors with independent normal components (mean 0, standard deviation 1). The minimum distance between two ball sequentially distant points is $\sqrt{2} \approx 1.414$ for any $n$. If we uniformly distribute 200 points over the unit sphere in the 100-dimensional case, then the minimum distance is on average 1.098 (over 10 repetitions). Therefore, the sequentially distant allocation is almost 30% better than the uniform allocation.
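This experiment is easy to reproduce. A minimal sketch (ours; the seed is arbitrary, and a single repetition is shown) of the uniform-sphere side of the comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 200
U = rng.standard_normal((p, n))
U /= np.linalg.norm(U, axis=1, keepdims=True)             # uniform points on the unit sphere
D = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
D[np.diag_indices(p)] = np.inf
print(D.min())   # typically around 1.1, below sqrt(2) ~ 1.414 for the points (26)
```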

3. Mapping the Ball Sequentially Distant Points on a Compact Convex Set

Let $X$ be a convex compact set defined by a system of inequalities
$$X = \{x \in \mathbb{R}^n : g_i(x) \le 0, \; i = 1, \ldots, m\},$$
where $g_i$, $i = 1, \ldots, m$ are convex and twice continuously differentiable functions and $\operatorname{int}(X) \neq \emptyset$. We use the concept of an analytic center $x^a$ [13]. The point $x^a$ is the solution to the convex optimization problem
$$F(x) \to \max, \quad x \in X, \tag{29}$$
where $F(x) = \sum_{i=1}^{m} \ln(-g_i(x))$ is a twice continuously differentiable concave function. Since $\operatorname{int}(X) \neq \emptyset$, we have $g_i(x^a) < 0$, $i = 1, \ldots, m$, so the following ellipsoid can be defined:
$$E = \{x \in \mathbb{R}^n : (x - x^a)^\top H (x - x^a) \le 1\}, \tag{30}$$
$$H = -\nabla^2 F(x^a) = \sum_{i=1}^{m} \left[\frac{1}{g_i^2(x^a)} \nabla g_i(x^a) \nabla g_i(x^a)^\top - \frac{1}{g_i(x^a)} \nabla^2 g_i(x^a)\right].$$
Then, $E \subseteq X$. The matrix $H$ can be represented as $H = U \Lambda U^\top$, where $U$ is an $n \times n$ orthonormal matrix whose columns are eigenvectors of $H$, and $\Lambda$ is an $n \times n$ diagonal matrix with the eigenvalues $\lambda_i > 0$, $i = 1, \ldots, n$ on the main diagonal. Let us introduce new variables $y = \Lambda^{\frac{1}{2}} U^\top (x - x^a)$. Then, in the variables $y$, the ellipsoid $E$ in (30) becomes the unit ball $B = \{y \in \mathbb{R}^n : y^\top y \le 1\}$. Let $\{v^i, \; i = 1, \ldots, N\}$ be ball sequentially distant points in the $y$-space constructed according to Case A ($N = n+2$), B ($N = 2n+1$), or C ($N = 2n + 2^n + 1$) from the previous section. In the $x$-space, we define the points
$$w^i = x^a + U \Lambda^{-\frac{1}{2}} v^i, \quad i = 1, \ldots, N. \tag{31}$$
The images $w^i$ of the ball equidistant points ($i = 1, \ldots, n+1$) are solutions to the problem
$$t \to \max, \quad t = (x^i - x^j)^\top H (x^i - x^j), \quad x^i, x^j \in E, \quad 1 \le i < j \le n+1.$$
The images $w^i$ of the ball sequentially distant points (Cases B or C, $i = 1, \ldots, N$, $N = 2n+1$ or $N = 2n + 2^n + 1$) are solutions to the problem
$$\min_{1 \le j \le i-1} \{(x - w^j)^\top H (x - w^j)\} \to \max, \quad x \in E.$$
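The whole construction of this section fits in a few lines of code. The sketch below (ours; scipy's Nelder–Mead stands in for a proper barrier Newton method, and the constraint functions are our reconstructed reading of Example 1 below) computes the analytic center (29), the matrix $H$, and the mapping (31); the sign and order of the eigenvectors returned by the solver may differ from those printed in the paper:

```python
import numpy as np
from scipy.optimize import minimize

gs = [lambda x: x[0]**2 - x[1],                 # g_1
      lambda x: -x[0] + 3 * x[1] - 10,          # g_2
      lambda x: -7 * x[0] + x[1]]               # g_3
grads = [lambda x: np.array([2 * x[0], -1.0]),
         lambda x: np.array([-1.0, 3.0]),
         lambda x: np.array([-7.0, 1.0])]
hessians = [np.array([[2.0, 0.0], [0.0, 0.0]]),
            np.zeros((2, 2)), np.zeros((2, 2))]

def neg_F(x):
    # -F(x) from (29); +inf outside int(X) keeps the search inside.
    vals = np.array([g(x) for g in gs])
    return np.inf if vals.max() >= 0 else -np.log(-vals).sum()

xa = minimize(neg_F, np.array([1.0, 2.0]), method='Nelder-Mead').x  # ~(0.982, 2.125)

# H = -grad^2 F(xa), assembled from the formula below (30).
H = sum(np.outer(dg(xa), dg(xa)) / g(xa)**2 - d2g / g(xa)
        for g, dg, d2g in zip(gs, grads, hessians))
lam, U = np.linalg.eigh(H)                       # H = U diag(lam) U^T

def w(v):                                        # mapping (31)
    return xa + U @ (np.asarray(v, float) / np.sqrt(lam))

print(np.round(w([0.0, 0.0]), 3))                # the ball center maps to xa
```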
Example 1.
Consider the following problem:
$$f(x_1, x_2) = -\frac{1 + \cos\left(12\sqrt{(x_1 - 0.7)^2 + (x_2 - 3)^2}\right)}{0.5\left((x_1 - 0.7)^2 + (x_2 - 3)^2\right) + 2} \to \min, \quad x \in X,$$
$$X = \{x \in \mathbb{R}^2 : g_1(x) = x_1^2 - x_2 \le 0, \; g_2(x) = -x_1 + 3x_2 - 10 \le 0, \; g_3(x) = -7x_1 + x_2 \le 0\},$$
where $f$ is the shifted drop-wave function [14], with global minimum $x^* = (0.7, 3.0)$, $f(x^*) = -1$. After solving the corresponding Problem (29), we determine the analytic center $x^a = (0.982, 2.125)$ and the matrices
$$H = -\nabla^2 F(x^a) = \begin{pmatrix} 6.806 & -1.909 \\ -1.909 & 1.210 \end{pmatrix}, \quad U = \begin{pmatrix} -0.956 & -0.296 \\ 0.296 & -0.956 \end{pmatrix}, \quad \Lambda = \begin{pmatrix} 7.395 & 0 \\ 0 & 0.621 \end{pmatrix}.$$
We use Case C from the previous section, so $N = 2n + 2^n + 1 = 9$ for $n = 2$. The points $v^i$, $i = 1, \ldots, 9$ are determined by (27) and (28) with $R = 1$; the points $w^i = x^a + U \Lambda^{-\frac{1}{2}} v^i$, $i = 1, \ldots, 9$; $x^{*,i}$ are the stationary points determined by the CONOPT solver [1] starting from the points $w^i$; and $f^{*,i} = f(x^{*,i})$ are the corresponding objective function values (see Table 1).
We can see from Table 1 that the global minimum point was determined three times. In the other six cases, different stationary points were found, including two points $x^{*,2}$ and $x^{*,3}$ with the same value $-0.656$ and two points $x^{*,5}$ and $x^{*,8}$ with the value $-0.885$.
A geometrical interpretation of the points $w^i$, $i = 1, \ldots, 9$ and of the ellipsoid (dashed curve) is given in Figure 1.
The advantage of the proposed approach consists in the following: well-allocated points can be determined even in "narrow", arbitrarily oriented convex compact sets, since the ellipsoid (30) provides a good inner approximation of $X$.
Example 2.
We extend the proposed approach to solve the following problem [15]:
$$f(x) = 5\sum_{j=1}^{4} x_j - 5\sum_{j=1}^{4} x_j^2 - \sum_{j=5}^{13} x_j \to \min.$$
The set $X$ is determined by the following system (this is a well-known test problem from [15]; the right-hand sides and the constraint $-8x_3 + x_{12} \le 0$, lost in extraction, are restored from that source):
$$2x_1 + 2x_2 + x_{10} + x_{11} \le 10,$$
$$2x_1 + 2x_3 + x_{10} + x_{12} \le 10,$$
$$2x_2 + 2x_3 + x_{11} + x_{12} \le 10,$$
$$-2x_4 - x_5 + x_{10} \le 0,$$
$$-2x_6 - x_7 + x_{11} \le 0,$$
$$-2x_8 - x_9 + x_{12} \le 0,$$
$$-8x_1 + x_{10} \le 0,$$
$$-8x_2 + x_{11} \le 0,$$
$$-8x_3 + x_{12} \le 0,$$
$$0 \le x_j \le 1, \; j = 1, \ldots, 9, \quad 0 \le x_j \le 100, \; j = 10, 11, 12, \quad 0 \le x_{13} \le 1.$$
The points $v^i$, $i = 1, \ldots, 2n+1 = 27$ were determined according to Case B (26). The points $w^i$, $i = 1, \ldots, 27$ were computed by (31), where $x^a$ is the analytic center of $X$. Since the objective function is a concave (hence nonconvex) quadratic, the global minimum is achieved on the boundary of $X$. The points $u^i$ were obtained as the intersections of the rays $x^a + \tau(w^i - x^a)$, $\tau \ge 0$, $i = 1, \ldots, 27$ with the boundary of $X$. Then, the multistart procedure started from the points $u^i$ was applied, and the global minimum $x^* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1)$, $f(x^*) = -15$ was found.
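For a polyhedral set, the ray–boundary intersection used here has a simple closed form. A minimal sketch (ours), assuming $X = \{x : Ax \le b\}$ and an interior point $x^a$:

```python
import numpy as np

def ray_boundary(xa, d, A, b):
    # Largest tau >= 0 with A(xa + tau*d) <= b for an interior point xa:
    # tau* = min over rows with A_i d > 0 of (b_i - A_i xa) / (A_i d).
    Ad = A @ d
    slack = b - A @ xa            # strictly positive at an interior point
    pos = Ad > 1e-12
    return xa + (slack[pos] / Ad[pos]).min() * d

# Example on the triangle of Example 3: {x1 + 2*x2 <= 2, -x1 <= 0, -x2 <= 0}.
A = np.array([[1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])
print(ray_boundary(np.array([0.5, 0.25]), np.array([1.0, 0.0]), A, b))  # -> [1.5, 0.25]
```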

4. Allocation of an Arbitrary Given Number of Points

In the previous section, the number of allocated points was equal to $n+2$, $2n+1$, or $2n + 2^n + 1$, and the allocation procedure was based on setting the points in a ball. In this section, we assume that the given number $p$ of allocated points differs from these values and, more importantly, the allocation procedure is not connected to the ball. The price for such an approach is the sequential solution of a special global optimization problem.
Problem (7) is to be solved iteratively, as was announced in Section 1. This is a problem of the global maximization of a convex quadratic function; over a bounded polyhedral set, special methods can be used for its solution.
Let the number p of allocated points be given. The first point v 1 can be chosen arbitrarily. The remaining points are found by solving the global optimization problem
$$v^{k+1} \in \operatorname{Arg\,max}\{\|x\|^2 + t : \; 2x^\top v^j + t \le \|v^j\|^2, \; j = 1, \ldots, k, \; x \in X\}, \quad k = 1, \ldots, p-1. \tag{32}$$
In solving the examples below, we used the SCIP solver [16] for finding the global maximum in Problem (32).
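When a global solver is unavailable, the iteration (32) can at least be sketched with a local NLP solver and a crude multistart, which, unlike SCIP, does not certify global optimality. The sketch below (ours) allocates points on the triangle of Example 3:

```python
import numpy as np
from scipy.optimize import minimize

def next_point(V, trials=20, rng=np.random.default_rng(0)):
    # z = (x1, x2, t); X = {x >= 0, x1 + 2*x2 <= 2} from Example 3.
    cons = [{'type': 'ineq', 'fun': lambda z: 2 - z[0] - 2 * z[1]},
            {'type': 'ineq', 'fun': lambda z: z[0]},
            {'type': 'ineq', 'fun': lambda z: z[1]}]
    for v in V:   # constraints of (32): 2 x.v + t <= ||v||^2
        cons.append({'type': 'ineq',
                     'fun': lambda z, v=v: v @ v - 2 * (z[:2] @ v) - z[2]})
    best, best_val = None, -np.inf
    for _ in range(trials):                        # crude multistart
        z0 = np.append(rng.random(2), 0.0)
        res = minimize(lambda z: -(z[0]**2 + z[1]**2 + z[2]), z0,
                       constraints=cons, method='SLSQP')
        if res.success and -res.fun > best_val:
            best, best_val = res.x[:2], -res.fun
    return best

V = [np.zeros(2)]                                  # start from the vertex (0, 0)
for _ in range(5):
    V.append(next_point(V))
print(np.round(V, 3))   # should resemble the first rows of Table 2
```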
Example 3.
The number of allocated points is $p = 16$, and the set is $X = \{(x_1, x_2) : x_1 + 2x_2 \le 2, \; x_1 \ge 0, \; x_2 \ge 0\}$. Since the feasible set is a polytope, it was decided to start from the vertex $v^1 = (0, 0)$. A geometrical interpretation of the allocated points is given in Figure 2.
Table 2 gives the coordinates of the points $v^i$ together with $r^2$, the squared distance from the current point to the nearest of the previously allocated points (the optimal value of Problem (32)).
Example 4.
The number of allocated points is $p = 16$, and the set is $X = \{(x_1, x_2) : -x_1 + x_2 \le 3, \; x_1 + 2x_2 \le 15, \; 2x_1 - x_2 \le 10, \; 3x_1 + 5x_2 \ge 15\}$. The starting vertex is $v^1 = (0, 3)$. The determined points are shown in Figure 3.
Table 3 contains the coordinates of the points $v^i$ and, again, the squared distances $r^2$ from the current point to the nearest previously found one.
Example 4 shows that the vertices of the given polytope are not necessarily covered by the points $v^i$: the vertex $(3, 6)$ is not covered.
In practice, it is enough to find a new point that is sufficiently far from the previous points. Hence, a good local solver can be used for finding a solution to Problem (32); in the testing below, we used the IPOPT solver [17] for this purpose. In the testing problems, the feasible set $X$ was a bounded polyhedral set
$$X = \{x \in \mathbb{R}^n : Ax \le b, \; \underline{x} \le x \le \bar{x}\}$$
with an $m \times n$ matrix $A$. The vectors $b \in \mathbb{R}^m$ and $\underline{x}, \bar{x} \in \mathbb{R}^n$ were determined randomly in such a way that $\operatorname{int}(X) \neq \emptyset$. The first two points $v^1$ and $v^2$ are approximate solutions to the problem
$$\|x - y\|^2 \to \max, \quad x \in X, \; y \in X, \tag{33}$$
where $v^1 = x^*$, $v^2 = y^*$. For solving Problem (33), the SCIP solver was used with a solution time limit of 30 s. The number of points was equal to 100. The solutions to the corresponding Problems (32) for $k = 3, \ldots, 99$ were obtained by the IPOPT solver. The last point, $v^{100}$, was obtained by the SCIP solver with the time limit increased to 300 s. In Table 4, $n$ is the number of variables, $m$ is the number of rows of the matrix $A$, $\Delta_{12} = \|v^1 - v^2\|$, $\delta$ is the obtained maximum distance from the last point $v^{100}$ to the previous ones, and $T$ is the solving time in seconds. Testing was performed on an Intel Core i7-3610QM (2.3 GHz, 8 GB DDR3 memory).
In the problems with five and ten variables, globally optimal solutions were found. For example, when $n = 10$, the diameter of $X$ was equal to 1931.523, and the exact maximum distance from the 99 previous points to the point $v^{100}$ was equal to 608.201. In the higher-dimensional problems, approximate solutions were determined.

5. Two Kinds of Multistart Strategy

We know that the feasible set $X$ can be covered by $p$ balls with centers at $v^1, \ldots, v^p$ and radius $r_p$, $r_p^2 = \varphi_p(v^p)$ (see Problem (5)). Consider the $p$ optimization problems
$$f(x) \to \min, \quad \|x - v^j\|^2 \le r_p^2, \quad x \in X, \tag{34}$$
one for each $j = 1, \ldots, p$. Let $x^{**,j}$, $j = 1, \ldots, p$ be the points obtained as a result of the application of the CONOPT solver to Problem (34) using $v^j$, $j = 1, \ldots, p$ as the starting points. Compare Problem (34) with the following one:
$$f(x) \to \min, \quad x \in X. \tag{35}$$
Let $x^{*,j}$, $j = 1, \ldots, p$ be the solutions of (35) obtained by the CONOPT solver applied $p$ times from the same starting points $v^j$, $j = 1, \ldots, p$. The points $x^{**,j}$, $j = 1, \ldots, p$ have a "local nature" because of the constraints $\|x - v^j\|^2 \le r_p^2$, $j = 1, \ldots, p$. Therefore, we can make the following assumption: the set $\Omega_p^{**} = \{x^{**,j} : j = 1, \ldots, p\}$ contains more different local minima than the set $\Omega_p^{*} = \{x^{*,j} : j = 1, \ldots, p\}$. It is not difficult to construct an example in which all points $x^{**,j}$, $j = 1, \ldots, p$, as well as all points $x^{*,j}$, $j = 1, \ldots, p$, are points of different local minima. The first multistart strategy consists in constructing the sets $\Omega_p^{**}$; the second consists in constructing the sets $\Omega_p^{*}$. In practice, however, there can be a significant difference between these sets of points. Let us consider the following examples.
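A minimal sketch of the two strategies (ours; scipy's SLSQP stands in for CONOPT, and the covering centers `V` with radius `r` are assumed to be available from the procedure of Section 1), using the Bird function of Example 5 below:

```python
import numpy as np
from scipy.optimize import minimize

def bird(x):
    return ((x[0] - x[1])**2 + np.exp((1 - np.sin(x[0]))**2) * np.cos(x[1])
            + np.exp((1 - np.cos(x[1]))**2) * np.sin(x[0]))

box = [(-2 * np.pi, 2 * np.pi)] * 2

def multistart(V, r, constrained):
    sols = []
    for v in V:
        cons = ([{'type': 'ineq',                     # ||x - v||^2 <= r^2 as in (34)
                  'fun': lambda x, v=v: r**2 - (x - v) @ (x - v)}]
                if constrained else [])               # plain Problem (35) otherwise
        res = minimize(bird, v, bounds=box, constraints=cons, method='SLSQP')
        sols.append(res.x)
    return np.array(sols)

# Omega_p** = multistart(V, r, True)  keeps every run near its own v^j;
# Omega_p*  = multistart(V, r, False) is the ordinary multistart.
```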
Example 5.
Consider the Bird problem:
$$f(x_1, x_2) = (x_1 - x_2)^2 + e^{(1 - \sin(x_1))^2} \cos(x_2) + e^{(1 - \cos(x_2))^2} \sin(x_1),$$
$$x_i \in [-2\pi, 2\pi], \; i = 1, 2.$$
This problem has many local minima and two global minimum points, $x^{g,1} = (4.701, 3.152)$ and $x^{g,2} = (-1.582, -3.130)$, with $f(x^{g,1}) = f(x^{g,2}) = -106.765$. For $p \le 5$, the sets $\Omega_p^{*}$ and $\Omega_p^{**}$ do not contain global minimum points. When $p = 6$, the set $\Omega_6^{*}$ contains five different local minima, one of which is a global minimum, and the set $\Omega_6^{**}$ contains four different local minima, one of which is a global minimum; in total, the set $\Omega_6^{*} \cup \Omega_6^{**}$ contains six different minimum points, one of which is a global minimum. The set $\Omega_7^{*}$ contains five local minima, two of which are global minima, while the set $\Omega_7^{**}$ contains the same local minima as $\Omega_6^{**}$. In total, the set $\Omega_7^{*} \cup \Omega_7^{**}$ contains seven different local minima, two of which are global minima.
Example 6.
Consider the Branin problem:
$$f(x_1, x_2) = \left(-\frac{1.275\, x_1^2}{\pi^2} + \frac{5 x_1}{\pi} + x_2 - 6\right)^2 + \left(10 - \frac{5}{4\pi}\right) \cos(x_1) \cos(x_2) + \log(x_1^2 + x_2^2 + 1) + 10,$$
$$x_i \in [-5, 15], \; i = 1, 2.$$
The global minimum is unique: $x^g = (-3.2, 12.53)$, $f(x^g) = 5.559$. When $p \le 17$, the sets $\Omega_p^{*}$ and $\Omega_p^{**}$ do not contain the global minimum point. The set $\Omega_{18}^{*}$ contains nine different local minima, one of which is the global minimum. The set $\Omega_{18}^{**}$ also contains nine different local minima, one of which is the global minimum. The sets $\Omega_{18}^{*}$ and $\Omega_{18}^{**}$ do not coincide, and their union $\Omega_{18}^{*} \cup \Omega_{18}^{**}$ contains 10 different local minima, one of which is the global minimum.
Example 7.
Consider the egg crate problem:
$$f(x_1, x_2) = x_1^2 + x_2^2 + 25\left(\sin^2(x_1) + \sin^2(x_2)\right),$$
$$x_i \in [-5, 10], \; i = 1, 2.$$
The global minimum is unique: $x^g = (0, 0)$, $f(x^g) = 0$. The set $\Omega_5^{*}$ contains five different local minima, one of which is the global minimum; for $p \le 4$, the sets $\Omega_p^{*}$ do not contain the global minimum. As for the sets $\Omega_p^{**}$, they contain the global minimum for $p \ge 26$: the set $\Omega_{26}^{**}$ contains twenty-five different local minima, one of which is the global minimum. In comparison, the set $\Omega_{26}^{*}$ contains eighteen different local minima, one of which is the global minimum.
Example 8.
Consider the Mishra problem:
$$f(x_1, x_2) = \left[\sin^2\left((\cos(x_1) + \cos(x_2))^2\right) + \cos^2\left((\sin(x_1) + \sin(x_2))^2\right) + x_1\right]^2 + 0.01(x_1 + x_2),$$
$$x_i \in [-10, 10], \; i = 1, 2.$$
The problem has the unique global minimum $x^g = (-1.987, -10)$, $f(x^g) = -0.1198$. The sets $\Omega_p^{*}$ contain the global minimum for $p \ge 6$; the set $\Omega_6^{*}$ contains five different local minima, one of which is global. The sets $\Omega_p^{**}$ do not contain the global minimum even for $p = 600$. The corresponding radius of each covering ball for $p = 600$ is equal to 0.625. Hence, the Mishra problem has very many "narrow" local minima.
Example 9.
Consider the Price problem:
$$f(x_1, x_2) = 1 + \sin^2(x_1) + \sin^2(x_2) - 0.1\, e^{-x_1^2 - x_2^2},$$
$$x_i \in [-5, 10], \; i = 1, 2.$$
The global minimum is unique: $x^g = (0, 0)$, $f(x^g) = 0.9$. The sets $\Omega_p^{*}$ contain the global minimum for $p \ge 26$, whereas the sets $\Omega_p^{**}$ contain it already for $p \ge 13$.
Example 10.
Consider the Shubert problem:
$$f(x_1, x_2) = \left(\sum_{i=1}^{5} i \cos\left((i+1)x_1 + i\right)\right)\left(\sum_{i=1}^{5} i \cos\left((i+1)x_2 + i\right)\right),$$
$$x_i \in [-10, 10], \; i = 1, 2.$$
There are many global minima, one of them being $x^g = (-7.084, 4.858)$, $f(x^g) = -186.7309$. The sets $\Omega_p^{*}$ start to contain a global minimum from $p = 5$; the set $\Omega_5^{*}$ contains only two different local minima, one of which is global. For $p = 29$, the set $\Omega_{29}^{**}$ contains a global minimum, and all twenty-nine local minima of the set $\Omega_{29}^{**}$ are different.
Example 11.
Consider the Trefethen problem:
$$f(x_1, x_2) = 0.25 x_1^2 + 0.25 x_2^2 + e^{\sin(50 x_1)} - \sin(10 x_1 + 10 x_2) + \sin(60 e^{x_2}) + \sin(70 \sin(x_1)) + \sin(\sin(80 x_2)),$$
$$x_i \in [-10, 10], \; i = 1, 2.$$
The global minimum is unique: $x^g = (-0.0244, 0.2106)$, $f(x^g) = -3.3069$. This problem has very many local minima. For example, the set $\Omega_{30}^{*}$ consists of thirty different local minima, with no global minimum among them. The set $\Omega_{30}^{**}$ contains twenty-eight new different local minima in addition to the set $\Omega_{30}^{*}$, again with no global minimum among them; therefore, the union $\Omega_{30}^{*} \cup \Omega_{30}^{**}$ contains fifty-eight different local minima and no global minimum. Only for $p \ge 570$ do the sets $\Omega_p^{*}$ contain the global minimum. The set $\Omega_{570}^{**}$ contains five hundred seventy different local minima and no global minimum. The radius of each of the five hundred seventy balls, which cover the feasible set, is equal to 0.625.
The following properties were observed across the considered examples. As a rule, the sets $\Omega_p^{*}$ need fewer points to detect a global minimum. Example 8 with the Mishra function provides a very remarkable confirmation: only six points were needed in the set $\Omega_6^{*}$ to capture the global minimum, whereas even six hundred points were not enough in the case of the set $\Omega_{600}^{**}$. The price for such behaviour is that many points in the sets $\Omega_p^{*}$ are found several times, in contrast to the sets $\Omega_p^{**}$. We also have to keep in mind that in Example 9 with the Price function the situation is the opposite: thirteen points sufficed to detect the global minimum with the set $\Omega_{13}^{**}$, against twenty-six points with the set $\Omega_{26}^{*}$. The number of different local minimum points in the sets $\Omega_p^{**}$ is usually larger than in the sets $\Omega_p^{*}$. Nevertheless, the local minimum points in the sets $\Omega_p^{*}$, being smaller in number, usually (though not always) have lower objective function values.
Let us compare the sets $\Omega_p^{*}$ and $\Omega_p^{**}$ for all tested problems and for the same number of points $p = 20$; that is, we compare the sets $\Omega_{20}^{*}$ and $\Omega_{20}^{**}$. The results of the comparison are given in Table 5. Column $N_L^{*}(N_G^{*})$ shows the number $N_L^{*}$ of different local minima in the corresponding set $\Omega_{20}^{*}$, with $N_G^{*}$ being the number of global minima among them. Similarly, column $N_L^{**}(N_G^{**})$ shows the number $N_L^{**}$ of different local minima in the set $\Omega_{20}^{**}$, with $N_G^{**}$ global minima among them. Column "New $N_G$" shows the number of global minima contained in $\Omega_{20}^{**}$ but not in $\Omega_{20}^{*}$ (new global minima), and column "New $N_L$" shows the number of such new local minima. Column $N_L^T(N_G^T)$ shows the total number $N_L^T$ of different local minima and the total number $N_G^T$ of different global minima obtained by determining both sets. For example, for the Shubert problem, the entry 13(3) in column $N_L^{*}(N_G^{*})$ means that the set $\Omega_{20}^{*}$ contains thirteen different local minimum points, three of which are global; the entry 18(3) in column $N_L^{**}(N_G^{**})$ means that the set $\Omega_{20}^{**}$ contains eighteen different local minima, three of which are global. Column "New $N_G$" shows that one new global minimum point is contained in the set $\Omega_{20}^{**}$ in comparison with the set $\Omega_{20}^{*}$, and column "New $N_L$" shows that the set $\Omega_{20}^{**}$ contains fourteen new local minimum points in comparison with the set $\Omega_{20}^{*}$. Finally, the entry 27(4) in column $N_L^T(N_G^T)$ means that twenty-seven different local minimum points were determined in total, four of which are global.
Even assuming the differentiability of the objective function and the finiteness of the set of local minima, it is not possible to assess the number of local minima a priori. Therefore, we propose the following approach. Assess the number $p$ of local minima from additional practical considerations. Then, construct the set $\Omega_p^{*}$, which tends to contain a good local minimum point or even a global minimum point. After that, construct the set $\Omega_p^{**}$ to enlarge the number of local minima found and to catch situations similar to the Price function. Due to the very high efficiency of the CONOPT solver, finding the sets $\Omega_p^{*}$ and $\Omega_p^{**}$ is not too computationally demanding. By using such a mixture of the two kinds of multistart strategy, we can obtain a practical assessment of the number of minima of the objective function. If the total number of determined local minima is not very large (for example, many of them are found several times), then we can conclude that we have performed a good exploration of the objective function; otherwise, we conclude that the objective function has a very complicated structure.

6. Testing Sequentially Distant Points in Optimization Problems

We present the results of testing the comparative efficiency of sequentially distant points versus randomly generated points in solving optimization problems. Three strategies, A, B, and C, based on the cases from Section 2, are tested. The optimization problems are problems of minimizing highly nonlinear functions over a box (parallelepiped). Firstly, the maximum-radius ball centered at the center of the parallelepiped is constructed. Secondly, for strategy A, the $n+2$ ball sequentially distant points corresponding to (24) and (25) are determined; for strategy B, the $2n+1$ points given by (26) are determined; and for strategy C, we use the cube-vertex points (28) plus the center of the parallelepiped, $2^n + 1$ points in total.
We used the multistart strategy with the generated points as the starting points. Strategies A, B, and C are compared with random strategies Rnd$_A$, Rnd$_B$, and Rnd$_C$ of the corresponding sizes: in strategy Rnd$_A$, $n+2$ uniformly distributed points are generated; in strategy Rnd$_B$, the number of uniformly distributed points is $2n+1$; and in strategy Rnd$_C$, it is $2^n + 1$. In all strategies, a parallel local search process based on the CONOPT solver was started from the generated points.
In Table 6, Table 7, Table 8 and Table 9, the column "Duplicated Solutions" shows the number of points that were found several times; the column "Different Solutions" shows the number of different points found; the column "Different Minimum Values" shows the number of different local minimum values among the different solutions (there can be different local minimum points with the same objective value); the column "Record Value" shows the value of the objective function at the best point found; in the column "Global Minimum", the sign "+" means that the global minimum was found, and the sign "−" is used otherwise; and the column "Time" shows the total solution time in seconds. Testing was performed on an Intel Core i7-3610QM computer (2.3 GHz, 8 GB DDR3 memory). All computations were done in the GAMS Demo version.
Strategies C and Rnd$_C$ were used only for dimensions $n = 5$ and $n = 10$, since they are of exponential complexity.
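For a box, the strategy B starting points are trivial to generate. A sketch (ours): the largest inscribed ball has radius equal to half the smallest side, and the points are the center and $x^c \pm R e^i$ from (26):

```python
import numpy as np

def strategy_b_points(lo, hi):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    xc = (lo + hi) / 2
    R = (hi - lo).min() / 2              # radius of the largest inscribed ball
    eye = np.eye(len(lo))
    return np.vstack([xc, xc + R * eye, xc - R * eye])   # 2n + 1 points, as in (26)

pts = strategy_b_points([-600] * 5, [900] * 5)   # the Griewank box for n = 5
print(pts.shape)                                  # (11, 5)
```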
Griewank function. Consider the optimization problem
$$f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 \to \min,$$
$$x \in \Pi = \{x \in \mathbb{R}^n : -600 \le x_i \le 900, \; i = 1, \ldots, n\}.$$
The global minimum is $x^* = (0, \ldots, 0)$, $f(x^*) = 0$. The testing results are given in Table 6. The properties of the Griewank function are studied in [18].
Table 6. Testing results for the Griewank function.

| Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
|---|---|---|---|---|---|---|---|
| n = 5 |  |  |  |  |  |  |  |
| A | 7 | 0 | 7 | 7 | 0.473 | − | 0.531 |
| Rnd_A | 7 | 0 | 7 | 7 | 0.418 | − | 0.500 |
| B | 11 | 0 | 11 | 4 | 0.118 | − | 0.764 |
| Rnd_B | 11 | 0 | 11 | 11 | 0.024 | − | 0.843 |
| C | 33 | 0 | 33 | 33 | 0.000 | + | 2.482 |
| Rnd_C | 33 | 0 | 33 | 27 | 0.000 | + | 2.559 |
| n = 10 |  |  |  |  |  |  |  |
| A | 12 | 1 | 11 | 11 | 0.000 | + | 0.858 |
| Rnd_A | 12 | 4 | 8 | 8 | 0.000 | + | 0.889 |
| B | 21 | 0 | 21 | 21 | 0.000 | + | 1.388 |
| Rnd_B | 21 | 5 | 16 | 13 | 0.000 | + | 1.373 |
| C | 1025 | 512 | 513 | 208 | 0.000 | + | 98.109 |
| Rnd_C | 1025 | 499 | 526 | 202 | 0.000 | + | 92.259 |
| n = 50 |  |  |  |  |  |  |  |
| A | 52 | 38 | 14 | 14 | 0.000 | + | 4.680 |
| Rnd_A | 52 | 51 | 1 | 1 | 0.000 | + | 3.573 |
| B | 101 | 11 | 90 | 84 | 0.000 | + | 8.548 |
| Rnd_B | 101 | 100 | 1 | 1 | 0.000 | + | 8.596 |
| n = 100 |  |  |  |  |  |  |  |
| A | 102 | 91 | 11 | 11 | 0.000 | + | 8.938 |
| Rnd_A | 102 | 101 | 1 | 1 | 0.000 | + | 9.142 |
| B | 201 | 42 | 159 | 149 | 0.000 | + | 22.089 |
| Rnd_B | 201 | 200 | 1 | 1 | 0.000 | + | 19.968 |
| n = 300 |  |  |  |  |  |  |  |
| A | 302 | 184 | 118 | 76 | 0.000 | + | 38.923 |
| Rnd_A | 302 | 301 | 1 | 1 | 0.000 | + | 37.768 |
| B | 601 | 269 | 332 | 286 | 0.000 | + | 83.617 |
| Rnd_B | 601 | 600 | 1 | 1 | 0.000 | + | 76.893 |
| n = 500 |  |  |  |  |  |  |  |
| A | 502 | 398 | 104 | 71 | 0.000 | + | 76.877 |
| Rnd_A | 502 | 501 | 1 | 1 | 0.000 | + | 76.581 |
| B | 1001 | 595 | 406 | 332 | 0.000 | + | 169.198 |
| Rnd_B | 1001 | 1000 | 1 | 1 | 0.000 | + | 138.054 |
Rastrigin function. Consider the optimization problem
$$f(x) = 10n + \sum_{i=1}^{n} \left(x_i^2 - 10\cos(2\pi x_i)\right) \to \min,$$
$$x \in \Pi = \{x \in \mathbb{R}^n : -5.12 \le x_i \le 7.68, \; i = 1, \ldots, n\}.$$
The global minimum is $x^* = (0, \ldots, 0)$, $f(x^*) = 0$. The testing results are given in Table 7.
Let us make some comments on the results in Table 7. A uniform distribution of the starting points happened to be very inefficient: the best found solutions are very far from the optimum. Take, for example, the case $n = 300$. Strategy A found 302 different local minima with 11 different objective function values; checking the list of local minimum points shows that 78 of them attain the best value, 0.995. Therefore, strategy A shows that there are quite a number of different local minima with objective values close to the optimal one. Formally, the same can be said about strategies Rnd$_A$ and Rnd$_B$: these random strategies also found a large number of different local minima; however, the corresponding objective function values are very far from the optimal value.
Schwefel function. Consider the optimization problem
$$f(x) = 418.9829\, n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right) \to \min,$$
$$x \in \Pi = \{x \in \mathbb{R}^n : -500 \le x_i \le 500, \; i = 1, \ldots, n\}.$$
The global minimum is $x^* = (420.9687, \ldots, 420.9687)$, $f(x^*) = 0$. The testing results are given in Table 8.
Table 7. Testing results for the Rastrigin function.

| Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
|---|---|---|---|---|---|---|---|
| n = 5 |  |  |  |  |  |  |  |
| A | 7 | 0 | 7 | 7 | 0.995 | − | 0.780 |
| Rnd_A | 7 | 0 | 7 | 7 | 18.904 | − | 0.515 |
| B | 11 | 1 | 10 | 4 | 0.995 | − | 0.781 |
| Rnd_B | 11 | 0 | 11 | 11 | 3.979 | − | 0.765 |
| C | 33 | 1 | 32 | 23 | 0.000 | + | 2.527 |
| Rnd_C | 33 | 0 | 33 | 27 | 17.909 | − | 2.480 |
| n = 10 |  |  |  |  |  |  |  |
| A | 12 | 0 | 12 | 9 | 0.995 | − | 0.858 |
| Rnd_A | 12 | 0 | 12 | 12 | 39.798 | − | 0.890 |
| B | 21 | 2 | 19 | 5 | 0.000 | + | 1.576 |
| Rnd_B | 21 | 0 | 21 | 21 | 39.798 | − | 1.638 |
| C | 1025 | 53 | 972 | 162 | 0.000 | + | 101.790 |
| Rnd_C | 1025 | 0 | 1025 | 680 | 21.889 | − | 96.971 |
| n = 50 |  |  |  |  |  |  |  |
| A | 52 | 0 | 52 | 9 | 0.000 | + | 3.354 |
| Rnd_A | 52 | 0 | 52 | 52 | 198.992 | − | 3.604 |
| B | 101 | 16 | 85 | 7 | 0.000 | + | 8.549 |
| Rnd_B | 101 | 0 | 101 | 101 | 198.992 | − | 8.347 |
| n = 100 |  |  |  |  |  |  |  |
| A | 102 | 0 | 102 | 11 | 0.995 | − | 12.620 |
| Rnd_A | 102 | 0 | 102 | 102 | 397.983 | − | 10.188 |
| B | 201 | 43 | 158 | 6 | 0.000 | + | 21.013 |
| Rnd_B | 201 | 0 | 201 | 200 | 397.983 | − | 20.124 |
| n = 300 |  |  |  |  |  |  |  |
| A | 302 | 0 | 302 | 11 | 0.995 | − | 37.378 |
| Rnd_A | 302 | 0 | 302 | 302 | 1193.949 | − | 39.503 |
| B | 601 | 87 | 514 | 7 | 0.000 | + | 79.701 |
| Rnd_B | 601 | 0 | 601 | 601 | 1193.949 | − | 74.911 |
| n = 500 |  |  |  |  |  |  |  |
| A | 502 | 3 | 499 | 11 | 0.000 | + | 76.995 |
| Rnd_A | 502 | 0 | 502 | 501 | 1989.915 | − | 36.331 |
| B | 1001 | 153 | 842 | 7 | 0.000 | + | 171.975 |
| Rnd_B | 1001 | 0 | 1001 | 1000 | 1989.915 | − | 168.044 |
Table 8. Testing results for the Schwefel function.

| Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
|---|---|---|---|---|---|---|---|
| n = 5 |  |  |  |  |  |  |  |
| A | 7 | 0 | 7 | 7 | 929.319 | − | 0.515 |
| Rnd_A | 7 | 0 | 7 | 7 | 475.270 | − | 0.531 |
| B | 11 | 1 | 10 | 7 | 238.915 | − | 0.764 |
| Rnd_B | 11 | 0 | 11 | 11 | 455.533 | − | 0.749 |
| C | 33 | 1 | 32 | 22 | 118.438 | − | 2.199 |
| Rnd_C | 33 | 0 | 33 | 31 | 455.533 | − | 2.480 |
| n = 10 |  |  |  |  |  |  |  |
| A | 12 | 0 | 12 | 12 | 1562.522 | − | 0.905 |
| Rnd_A | 12 | 0 | 12 | 12 | 1383.337 | − | 0.904 |
| B | 21 | 4 | 17 | 7 | 0.000 | + | 1.388 |
| Rnd_B | 21 | 0 | 21 | 21 | 1223.898 | − | 1.591 |
| C | 1025 | 165 | 860 | 105 | 0.000 | + | 99.997 |
| Rnd_C | 1025 | 5 | 1020 | 87 | 2651.829 | − | 100.121 |
| n = 50 |  |  |  |  |  |  |  |
| A | 52 | 0 | 52 | 52 | 2349.118 | − | 3.900 |
| Rnd_A | 52 | 0 | 52 | 52 | 8164.279 | − | 4.040 |
| B | 101 | 25 | 76 | 23 | 0.000 | + | 8.502 |
| Rnd_B | 101 | 0 | 101 | 101 | 7859.295 | − | 7.878 |
| n = 100 |  |  |  |  |  |  |  |
| A | 102 | 0 | 102 | 102 | 296.108 | − | 9.016 |
| Rnd_A | 102 | 0 | 102 | 102 | 16,993.912 | − | 8.611 |
| B | 201 | 55 | 146 | 31 | 0.000 | + | 21.231 |
| Rnd_B | 201 | 0 | 201 | 201 | 16,948.095 | − | 18.441 |
| n = 300 |  |  |  |  |  |  |  |
| A | 302 | 0 | 302 | 294 | 5909.961 | − | 33.119 |
| Rnd_A | 302 | 0 | 302 | 302 | 53,468.437 | − | 32.479 |
| B | 601 | 215 | 386 | 39 | 0.000 | + | 68.984 |
| Rnd_B | 601 | 0 | 601 | 601 | 50,650.766 | − | 69.732 |
| n = 500 |  |  |  |  |  |  |  |
| A | 502 | 3 | 502 | 483 | 214.513 | − | 70.747 |
| Rnd_A | 502 | 0 | 502 | 502 | 89,847.498 | − | 74.974 |
| B | 1001 | 328 | 673 | 43 | 0.000 | + | 168.498 |
| Rnd_B | 1001 | 0 | 1001 | 1000 | 89,104.515 | − | 167.998 |
Again, pure random strategies show the worst results.
Levy function. Consider the optimization problem
$$f(x) = 10\sin^2(\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left(1 + 10\sin^2(\pi x_{i+1})\right) + (x_n - 1)^2 \to \min,$$
$$x \in \Pi = \{x \in \mathbb{R}^n : -10 \le x_i \le 10, \; i = 1, \ldots, n\}.$$
The global minimum is $x^* = (1, \ldots, 1)$, $f(x^*) = 0$. The testing results are given in Table 9.
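For reference, the four benchmark objectives used in this section are one-liners in NumPy (our transcription of the formulas above, not code from the paper):

```python
import numpy as np

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def schwefel(x):
    return 418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def levy(x):
    return (10 * np.sin(np.pi * x[0])**2 + (x[-1] - 1)**2
            + np.sum((x[:-1] - 1)**2 * (1 + 10 * np.sin(np.pi * x[1:])**2)))

print(griewank(np.zeros(5)), rastrigin(np.zeros(5)),
      schwefel(np.full(5, 420.9687)), levy(np.ones(5)))   # all approximately 0
```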
Table 9. Testing results for the Levy function.

| Strategy | Number of Starting Points | Duplicated Solutions | Different Solutions | Different Minimum Values | Record Value | Global Minimum (+/−) | Time (s) |
|---|---|---|---|---|---|---|---|
| n = 5 |  |  |  |  |  |  |  |
| A | 7 | 0 | 7 | 5 | 0.001 | − | 0.546 |
| Rnd_A | 7 | 0 | 7 | 7 | 0.337 | − | 0.515 |
| B | 11 | 1 | 10 | 6 | 1.064 | − | 0.858 |
| Rnd_B | 11 | 0 | 11 | 10 | 1.064 | − | 0.764 |
| C | 33 | 0 | 33 | 28 | 0.0005 | − | 2.465 |
| Rnd_C | 33 | 0 | 33 | 31 | 0.0004 | − | 2.446 |
| n = 10 |  |  |  |  |  |  |  |
| A | 12 | 1 | 11 | 4 | 1.064 | − | 0.858 |
| Rnd_A | 12 | 0 | 12 | 12 | 0.937 | − | 0.983 |
| B | 21 | 1 | 20 | 5 | 1.064 | − | 1.575 |
| Rnd_B | 21 | 0 | 21 | 19 | 0.0005 | − | 1.388 |
| C | 1025 | 0 | 1025 | 252 | 0.0004 | − | 94.849 |
| Rnd_C | 1025 | 1 | 1024 | 568 | 0.0004 | − | 94.817 |
| n = 50 |  |  |  |  |  |  |  |
| A | 52 | 1 | 51 | 7 | 0.0005 | − | 4.197 |
| Rnd_A | 52 | 0 | 52 | 51 | 0.0005 | − | 3.728 |
| B | 101 | 1 | 100 | 4 | 1.064 | − | 8.502 |
| Rnd_B | 101 | 0 | 101 | 98 | 0.001 | − | 8.377 |
| n = 100 |  |  |  |  |  |  |  |
| A | 102 | 1 | 101 | 4 | 0.0005 | − | 8.486 |
| Rnd_A | 102 | 0 | 102 | 101 | 0.0005 | − | 8.361 |
| B | 201 | 1 | 200 | 5 | 1.064 | − | 18.939 |
| Rnd_B | 201 | 0 | 201 | 199 | 0.002 | − | 19.355 |
| n = 300 |  |  |  |  |  |  |  |
| A | 302 | 1 | 301 | 6 | 0.937 | − | 33.665 |
| Rnd_A | 302 | 0 | 302 | 302 | 1.064 | − | 40.778 |
| B | 601 | 1 | 600 | 12 | 1.064 | − | 76.332 |
| Rnd_B | 601 | 0 | 601 | 601 | 1.064 | − | 75.629 |
| n = 500 |  |  |  |  |  |  |  |
| A | 502 | 1 | 501 | 7 | 1.064 | − | 74.210 |
| Rnd_A | 502 | 0 | 502 | 502 | 1.064 | − | 94.257 |
| B | 1001 | 1 | 1000 | 20 | 0.0005 | − | 168.590 |
| Rnd_B | 1001 | 0 | 1001 | 1000 | 1.064 | − | 171.942 |
The Levy function was the most difficult test case for all strategies: none of them could determine the global minimum. Nevertheless, strategies A and B are relatively efficient in the high-dimensional cases.
Overall, the testing showed that strategy B was the most effective, in terms of both the quality of the best solution found and the computational effort. This effect can be explained as follows: strategy B explores the whole feasible set more efficiently than the other strategies.

7. Conclusions and Future Work

Sequentially most distant point techniques were suggested for determining good starting points in multistart strategies for global optimization problems. Preliminary testing showed that the new strategies find good local minima very quickly. The sequentially distant points can be obtained either by using an inscribed ellipsoid centered at the analytic center of the feasible set or by approximately solving auxiliary global optimization problems of a special type.
Our future work will be devoted to an extension of the suggested techniques to solving global optimization problems with nonconvex feasible sets and to solving special highly nonlinear problems from practical applications.

Author Contributions

Conceptualization, O.K. and E.S.; software, O.K.; validation, O.K.; investigation, O.K.; methodology, E.S.; formal analysis, E.S. and V.N.; resources, V.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. CONOPT. Available online: https://www.conopt.com/ (accessed on 17 December 2023).
  2. Brauchart, J.S.; Grabner, P.J. Distributing many points on spheres: Minimal energy and designs. J. Complex. 2015, 31, 293–326.
  3. Trikalinos, T.A.; van Valkenhoef, G. Efficient Sampling from Uniform Density n-Polytopes; Technical Report; Brown University: Providence, RI, USA, 2014; 5p.
  4. Chen, Y.; Dwivedi, R.; Wainwright, M.J.; Yu, B. Fast MCMC Sampling Algorithms on Polytopes. J. Mach. Learn. Res. 2018, 19, 1–86.
  5. Diaconis, P. The Markov Chain Monte Carlo Revolution. Bull. AMS 2009, 46, 179–205.
  6. Zhigljavsky, A.A. Theory of Global Random Search; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991; 341p.
  7. Polyak, B.; Shcherbakov, P. Why Does Monte Carlo Fail to Work Properly in High-Dimensional Optimization Problems? J. Optim. Theory Appl. 2017, 173, 612–627.
  8. Ugray, Z.; Lasdon, L.; Plummer, J.C.; Glover, F.; Kelly, J.; Martí, R. A Multistart Scatter Search Heuristic for Smooth NLP and MINLP Problems. In Metaheuristic Optimization via Memory and Evolution; Sharda, R., Voß, S., Rego, C., Alidaee, B., Eds.; Operations Research/Computer Science Interfaces Series; Springer: Boston, MA, USA, 2005; Volume 30.
  9. Janáček, J.; Kvet, M.; Czimmermann, P. Kit of Uniformly Deployed Sets for p-Location Problems. Mathematics 2023, 11, 2418.
  10. Dupin, N.; Nielsen, F.; Talbi, E.-G. Unified Polynomial Dynamic Programming Algorithms for P-Center Variants in a 2D Pareto Front. Mathematics 2021, 9, 453.
  11. Sarhani, M.; Voß, S.; Jovanovic, R. Initialization of metaheuristics: Comprehensive review, critical analysis, and research directions. Int. Trans. Oper. Res. 2022, 30, 3361–3397.
  12. Horst, R.; Tuy, H. Global Optimization: Deterministic Approaches; Springer: Berlin/Heidelberg, Germany, 1996; 730p.
  13. Jarre, F. Interior Point Methods for Classes of Convex Programs. In Interior Point Methods in Mathematical Programming; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 255–296.
  14. Drop-Wave Function. Available online: http://www.sfu.ca/~ssurjano/drop.html (accessed on 17 December 2023).
  15. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.S.; Esposito, W.R.; Gümüş, Z.H.; Harding, S.T.; Klepeis, J.L.; Meyer, C.A.; Schweiger, C.A. Handbook of Test Problems in Local and Global Optimization; Springer: Dordrecht, The Netherlands, 1999; 441p.
  16. SCIP. Available online: https://www.scipopt.org/ (accessed on 17 December 2023).
  17. IPOPT. Available online: https://coin-or.github.io/Ipopt/ (accessed on 17 December 2023).
  18. Locatelli, M. A Note on the Griewank Test Function. J. Glob. Optim. 2003, 25, 169–174.
Figure 1. Starting points $w^i$ in the feasible domain and the inscribed ellipsoid in Example 1.
Figure 2. Allocation of the starting points in Example 3.
Figure 3. Allocation of the starting points in Example 4.
Table 1. Starting and stationary points in Example 1.

| $i$ | $v^i$ | $w^i$ | $x^{*,i}$ | $f^{*,i}$ |
|---|---|---|---|---|
| 1 | $(1, 0)$ | $(0.631, 2.233)$ | $(0.700, 3.000)$ | −1.000 |
| 2 | $(-1, 0)$ | $(1.333, 2.016)$ | $(1.256, 1.665)$ | −0.656 |
| 3 | $(0, 1)$ | $(0.607, 0.912)$ | $(1.804, 3.935)$ | −0.656 |
| 4 | $(0, -1)$ | $(1.356, 3.337)$ | $(0.231, 1.452)$ | −0.605 |
| 5 | $(0, 0)$ | $(0.982, 2.125)$ | $(1.227, 2.560)$ | −0.885 |
| 6 | $(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}})$ | $(0.469, 1.344)$ | $(1.358, 2.218)$ | −0.793 |
| 7 | $(-\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}})$ | $(1.495, 2.905)$ | $(0.700, 3.000)$ | −1.000 |
| 8 | $(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}})$ | $(0.965, 1.190)$ | $(1.357, 2.702)$ | −0.885 |
| 9 | $(\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}})$ | $(0.998, 3.058)$ | $(0.700, 3.000)$ | −1.000 |
Table 2. Points and distances in Example 3.

| $i$ | $v^i_1$ | $v^i_2$ | $r^2$ | $i$ | $v^i_1$ | $v^i_2$ | $r^2$ |
|---|---|---|---|---|---|---|---|
| 1 | 0 | 0 | — | 9 | 1.031 | 0.133 | 0.136 |
| 2 | 2 | 0 | 4 | 10 | 0.344 | 0.133 | 0.136 |
| 3 | 1 | 0.5 | 1.25 | 11 | 1.313 | 0.344 | 0.122 |
| 4 | 0 | 1 | 1 | 12 | 0.687 | 0.656 | 0.122 |
| 5 | 0.375 | 0.5 | 0.391 | 13 | 1.688 | 0.156 | 0.122 |
| 6 | 1.375 | 0 | 0.391 | 14 | 0.313 | 0.844 | 0.122 |
| 7 | 0.867 | 0 | 0.348 | 15 | 0.719 | 0.328 | 0.109 |
| 8 | 0 | 0.609 | 0.153 | 16 | 0.080 | 0.305 | 0.099 |
Table 3. Points and distances in Example 4.

| $i$ | $v^i_1$ | $v^i_2$ | $r^2$ | $i$ | $v^i_1$ | $v^i_2$ | $r^2$ |
|---|---|---|---|---|---|---|---|
| 1 | 0 | 3 | — | 9 | 1.535 | 2.079 | 3.303 |
| 2 | 7 | 4 | 50 | 10 | 3.465 | 0.921 | 3.203 |
| 3 | 5 | 0 | 20 | 11 | 3.273 | 4.221 | 2.337 |
| 4 | 3.154 | 5.923 | 18.491 | 12 | 4.471 | 3.108 | 1.748 |
| 5 | 3.216 | 2.693 | 10.436 | 13 | 5.779 | 3.532 | 1.711 |
| 6 | 5.484 | 2.258 | 5.333 | 14 | 4.703 | 1.245 | 1.637 |
| 7 | 4.792 | 4.391 | 5.029 | 15 | 1.178 | 3.224 | 1.438 |
| 8 | 1.745 | 4.280 | 4.684 | 16 | 4.200 | 5.400 | 1.368 |
Table 4. Initial and final distances for testing Problem (33).

| $n$ | $m$ | $\Delta_{12}$ | $\delta$ | $T$ |
|---|---|---|---|---|
| 5 | 10 | 1043.004 | 243.887 | 29.125 |
| 10 | 20 | 1931.523 | 608.201 | 116.189 |
| 20 | 30 | 2972.218 | 1272.148 | 414.603 |
| 30 | 45 | 3162.046 | 1430.453 | 461.576 |
| 40 | 60 | 4074.319 | 2166.210 | 551.885 |
| 50 | 75 | 4274.107 | 2145.411 | 630.431 |
Table 5. Comparison of the two multistart strategies ($p = 20$).

| Problem | $N_L^{*}(N_G^{*})$ | $N_L^{**}(N_G^{**})$ | New $N_G$ | New $N_L$ | $N_L^T(N_G^T)$ |
|---|---|---|---|---|---|
| Bird | 7(2) | 7(2) | 0 | 0 | 7(2) |
| Branin | 10(1) | 11(1) | 0 | 2 | 12(1) |
| Egg Crate | 15(1) | 13(0) | 0 | 8 | 23(1) |
| Mishra | 12(1) | 11(0) | 0 | 6 | 18(1) |
| Price | 6(0) | 17(1) | 1 | 12 | 18(1) |
| Shubert | 13(3) | 18(3) | 1 | 14 | 27(4) |
| Trefethen | 19(0) | 15(0) | 0 | 12 | 31(0) |
