Article

Guided Hybrid Modified Simulated Annealing Algorithm for Solving Constrained Global Optimization Problems

by Khalid Abdulaziz Alnowibet 1, Salem Mahdi 2, Mahmoud El-Alem 3, Mohamed Abdelawwad 4 and Ali Wagdy Mohamed 5,6,*
1 Statistics and Operations Research Department, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
2 Educational Research and Development Center Sanaa, Sanaa 31220, Yemen
3 Department of Mathematics & Computer Science, Faculty of Science, Alexandria University, Alexandria 21544, Egypt
4 Institute for Computer Architecture and System Programming, University of Kassel, 34127 Kassel, Germany
5 Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
6 Department of Mathematics and Actuarial Science, School of Sciences Engineering, The American University in Cairo, Cairo 11835, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(8), 1312; https://doi.org/10.3390/math10081312
Submission received: 3 March 2022 / Revised: 5 April 2022 / Accepted: 11 April 2022 / Published: 14 April 2022
(This article belongs to the Special Issue Variational Problems and Applications)

Abstract: In this paper, a hybrid gradient simulated annealing algorithm is guided to solve constrained optimization problems. When solving constrained optimization problems with deterministic methods, stochastic methods, or hybrids of the two, penalty function methods are the most popular approach for handling the constraints due to their simplicity and ease of implementation. The simulated annealing algorithm (SA) is one of the most successful meta-heuristic strategies, while the gradient method is the least expensive of the deterministic methods. In previous literature, the hybrid gradient simulated annealing algorithm (GLMSA) has demonstrated its efficiency and effectiveness in solving unconstrained optimization problems. In this paper, therefore, the GLMSA algorithm is generalized to solve constrained optimization problems. A new penalty function approach is proposed to handle the constraints, and it is used to guide the hybrid gradient simulated annealing algorithm (GLMSA), yielding a new algorithm (GHMSA) that solves the constrained optimization problem. The performance of the proposed algorithm is tested on several benchmark optimization test problems and some well-known engineering design problems with varying dimensions. Comprehensive comparisons against other methods in the literature are also presented. The results indicate that the proposed method is promising and competitive. The comparison between the GHMSA and four state-of-the-art meta-heuristic algorithms indicates that the proposed GHMSA algorithm is competitive with, and in some cases superior to, the existing algorithms in terms of the quality, efficiency, convergence rate, and robustness of the final result.

1. Introduction

Optimization problems arise in different application fields, such as technical sciences, industrial engineering, economics, networks, and chemical engineering; see for example [1,2,3,4,5].
In general, the constrained optimization problem can be formulated as follows:
$$\min_{x \in \mathbb{R}^n} f(x), \quad \text{s.t.} \quad g_l(x) \le 0,\; l = 1, 2, \dots, q, \qquad h_d(x) = 0,\; d = 1, 2, \dots, m,\; m < n, \qquad a_i \le x_i \le b_i,\; i = 1, 2, \dots, n, \tag{1}$$
where $a_i \in \mathbb{R} \cup \{-\infty\}$ and $b_i \in \mathbb{R} \cup \{\infty\}$.
The functions $f(x), g_l(x), h_d(x): \mathbb{R}^n \to \mathbb{R}$ are real-valued functions, n denotes the number of variables in x, q is the number of inequality constraints, m is the number of equality constraints, a is a lower bound on x, and b is an upper bound on x. The objective function f, the inequality constraints $g_l$, $l = 1, 2, \dots, q$, and the equality constraints $h_d$, $d = 1, 2, \dots, m$, are assumed to be continuously differentiable nonlinear functions.
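For concreteness, a small illustrative instance of Problem (1) can be encoded in Python as in the sketch below (the problem data here are invented for illustration only and are not one of the benchmark problems used later; n = 2, q = 1 and m = 1, so m < n holds).

import numpy as np

# Illustrative instance of Problem (1): a smooth objective, one inequality constraint,
# one equality constraint, and simple bounds, as assumed in the formulation above.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2      # objective f(x)

def g(x):
    return np.array([x[0] + x[1] - 3.0])              # inequality: g_1(x) <= 0

def h(x):
    return np.array([x[0] - 2.0 * x[1] + 1.0])        # equality: h_1(x) = 0

a = np.array([0.0, 0.0])                              # lower bounds a_i
b = np.array([5.0, 5.0])                              # upper bounds b_i

x = np.array([1.0, 1.0])
print(f(x), g(x), h(x))                               # evaluate the problem at a trial point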
Recently, there has been great development of optimization algorithms that are proposed to find global solutions to optimization problems. See for example [2,6,7,8].
The global optimization methods are used to prevent convergence to local optima and increase the probability of finding the global optimum [9].
The numerical global optimization algorithms can be classified into two classes: deterministic and stochastic methods. In stochastic methods, the minimization process depends partly on probability. In deterministic methods, in contrast, no probabilistic information is used [9].
Thus, finding the global minimum of an unconstrained problem with deterministic methods requires an exhaustive search over the feasible region of the function f together with additional assumptions on f. In contrast, for stochastic methods one can prove asymptotic convergence in probability, i.e., these methods are asymptotically successful with probability 1; see for example [10,11,12]. In general, the computational results of the stochastic methods are better than those of the deterministic methods [13].
For these reasons, a meta-heuristic strategy (a stochastic method) is used to guide the search process [13]. A meta-heuristic is a technique designed to solve a problem more quickly when classic methods are too slow, or to find an approximate solution when classic methods fail to find any exact or near-exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed [14,15,16].
The simulated-annealing algorithm (SA) is one of the most successful meta-heuristic strategies. In fact, the numerical results display that the simulated annealing technique is very efficient and effective for finding the global minimizer. See, for example, [2,5,17,18,19].
On the other hand, the gradient method is the most inexpensive method for finding a local minimizer of a continuously differentiable function. It has been proved that the gradient algorithm converges locally to a local minimizer [20]. Therefore, if a line-search (L) is added to the gradient method (G) as a globalization strategy, the resulting algorithm is globally convergent to a local minimizer (GL) [9,21,22].
Hence, when the simulated-annealing algorithm (SA) as a global optimization algorithm is combined with the line-search gradient method (GL) as a globally convergent method, the result is the hybrid gradient simulated annealing algorithm (GLMSA) [23]. The idea behind this hybridization is to gain the benefits and advantages of both the GL algorithm and the MSA algorithm.
As a matter of fact, the numerical results demonstrated that the GLMSA algorithm is a very efficient, effective and strong competitor for finding the global minimizer. For example, Table 4 of [23] shows that the GL algorithm is able to reach the optimum point of all test problems whose objective functions have only one minimum point (no local minima except the global one, i.e., a convex function), and it gets stuck at a local minimum for test problems whose objective functions have several local minima (with one global minimum, i.e., a non-convex function). Table 6 of [23] demonstrates that the MSA (modified simulated annealing) algorithm finds the global minimum of all test problems from any starting point of the feasible search space S. However, the GLMSA hybrid gradient simulated annealing algorithm is faster than MSA; moreover, GLMSA is efficient and effective compared to other meta-heuristic algorithms.
All the above have motivated and encouraged us to generalize the GLMSA algorithm to solve Problem (1).
The literature review analysis shows that the handling constraint which is based on a penalty function is considered the most popular implemented mechanism; this is due to its simplicity and ease of implementation [24,25,26,27]. A penalty technique transforms Problem (1) into an unconstrained problem by adding the penalty term of each constraint violation to the objective function value. The remainder of this paper is organized as follows. The next section provides a brief description of the GLMSA algorithm. Constraint handling, the penalty function method, proposed penalty method and interior-point algorithm are presented in Section 3. A guided hybrid simulated annealing algorithm to solve constrained problems is presented in Section 4. Numerical results are given in Section 5. Section 6 contains some concluding remarks.
Note: Section Abbreviations provides a list of the abbreviations and symbols which are used in this paper.

2. Summarized Description of GLMSA Algorithm

The GLMSA algorithm has been designed for solving unconstrained optimization problems; in this paper the GLMSA algorithm is generalized to solve Problem (1). The GLMSA algorithm contains two approaches to find a new step at each iteration, the first one is the gradient method. In this approach, a candidate point is generated and it might be accepted or rejected. If the objective function f is decreased at this point, then it will be accepted, otherwise, the second approach will be used to generate another point.

2.1. The First Approach (Gradient Method)

The gradient method solves an unconstrained optimization problem iteratively: at each iteration, a step in the direction of the negative gradient is computed and added to the current point as follows. Given an initial guess $x_0 \in \mathbb{R}^n$, the gradient method generates a sequence of iterates $\{x_k\}$, $k \ge 0$, for the objective function of the unconstrained optimization problem such that:
$$x_{k+1} = x_k + d_k, \tag{2}$$
where d k is the first step, and it is defined by:
$$d_k = -|\alpha_k|\, g(x_k), \tag{3}$$
where $g(x_k)$ is the gradient vector of the function f at the point $x_k$ and $\alpha_k$ is a step length along the negative gradient direction $(-g(x_k))$. The step length $\alpha_k$ along $-g(x_k)$ is defined by:
$$\alpha_k = \frac{f(x_k)}{\| g(x_k) \|_2^2}. \tag{4}$$
The G gradient algorithm is listed in Algorithm 1 of [23]. The step length λ k that is computed by the backtracking line-search approach is very important for global convergence of the gradient method. The following section presents a brief description of the backtracking line-search approach for globalizing the gradient method.

Globalizing the First Approach (Gradient Method)

To make the gradient method capable of finding a local minimizer $x^*$ of the objective function of the unconstrained optimization problem from any starting point $x_0$, the G algorithm (gradient algorithm) is combined with the L algorithm (line-search algorithm) in order to obtain a globally convergent algorithm GL. This algorithm is listed in Algorithm 1 below and it contains the first approach (gradient algorithm G) and the backtracking line-search algorithm L.
Algorithm 1 Line-Search Gradient Algorithm “GL”
Input:  f: R^n → R, f ∈ C^1, γ ∈ (0, 1), k = 0, a starting point x_k ∈ R^n and ε > 0.
Output:  x* = x_ac, the local minimizer of f, and f(x*), the value of f at x*.
 1: Set x_ac = x_0.             ▹ x_ac is the accepted solution.
 2: Compute f_ac = f(x_ac), g_ac = g(x_ac) and d_k.
 3: while ‖g_ac‖_2 > ε do      ▹ g_ac is the value of the gradient vector at the accepted point x_ac.
 4:    Set k = k + 1.
 5:     x_k = x_ac + d_k      ▹ x_ac is the accepted point from the previous iteration.
 6:    Compute f_k = f(x_k)
 7:    Set λ = 1.
 8:    while f_k > f_ac + γ λ g_ac^T d_k do
 9:       Set λ = λ/2
10:      x_k = x_ac − λ g_ac         ▹ in this paper the value of γ is 10^{−4}.
11:     Compute f_k = f(x_k)
12:    end while
13:    Set x_ac ← x_k and f_ac ← f(x_k).
14:    Compute g_ac = g(x_ac) and d_k.
15: end while
16: return x_ac, the local minimizer, and its function value f_ac
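A compact Python sketch of the GL iteration of Algorithm 1 is given below (a minimal re-implementation for illustration only; the gradient g(x) is supplied by the caller, and the example objective and its gradient are invented for the usage lines).

import numpy as np

def gl_minimize(f, grad, x0, gamma=1e-4, eps=1e-6, max_iter=1000):
    # Line-search gradient method: a sketch of Algorithm 1.
    x_ac = np.asarray(x0, dtype=float)
    f_ac, g_ac = f(x_ac), grad(x_ac)
    for _ in range(max_iter):
        if np.linalg.norm(g_ac) <= eps:
            break
        alpha = f_ac / np.dot(g_ac, g_ac)          # step length, Equation (4)
        d = -abs(alpha) * g_ac                     # first step, Equation (3)
        x_k = x_ac + d                             # trial point, Equation (2)
        f_k = f(x_k)
        lam = 1.0
        # Backtracking (Armijo) line search: halve lambda until sufficient decrease holds.
        while f_k > f_ac + gamma * lam * np.dot(g_ac, d):
            lam *= 0.5
            x_k = x_ac - lam * g_ac
            f_k = f(x_k)
        x_ac, f_ac, g_ac = x_k, f_k, grad(x_k)
    return x_ac, f_ac

# Usage on a convex quadratic, for which GL converges to the unique (global) minimizer.
f = lambda x: float((x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2)
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
print(gl_minimize(f, grad, [3.0, 3.0]))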
For more details about the gradient method and the backtracking line-search approach see [23]. The second approach of the GLMSA algorithm is presented in the following subsection.

2.2. The Second Approach (Simulated Annealing SA)

It must be noted that the modified simulated annealing algorithm in [23] contains three alternatives to generate a new point, but in this paper only the first alternative is used to generate a new point. This choice is very important for reducing the number of function evaluations from three per iteration to one per iteration, because more inner iterations are needed when solving constrained optimization problems. This procedure also guarantees that the parameters of the penalty function increase sufficiently, which is a necessary condition for non-stationary penalty functions [28], i.e., when $k \to \infty$, the parameters must also go to infinity.
The second point is generated by
$$x_{k+1} = x_{ac} + \psi_k, \tag{5}$$
where x a c is the best point which is accepted so far and ψ k is the step of the second approach and computed by Algorithm 2 below.
Algorithm 2 The second approach to generate the step ψ_k.
Step 1: Set k = 0.
Step 2: Compute ω_k = 10^{0.1 k}.
Step 3: Generate a random vector X_k ∈ [−1, 1]^n.
Step 4: Compute D_k^i = (−1 + (1 + ω_k)^{|X_k^i|}) / ω_k, i = 1, 2, …, n.    ▹ n is the number of variables.
Step 5: Set DX_k^i = sign(X_k^i).
Step 6: Compute DE_k^i = D_k^i · DX_k^i.
Step 7: Compute ψ_k^i = b_i · DE_k^i.    ▹ b_i is the upper bound of the feasible search space.
Step 8: k ← k + 1.
Step 9: Repeat Steps 2–8 until k = N.   ▹ N is the number of iterations and it is given in advance.
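A small Python sketch of this step generator is shown below (this follows our reading of Steps 2–7 as reconstructed above, so the exact form of ω_k and D_k^i should be checked against [23]; the bounds in the usage line are invented).

import numpy as np

def msa_step(k, b):
    # Trial step psi_k of the second approach (a sketch of Algorithm 2).
    # k: iteration counter (the step shrinks as k grows); b: array of upper bounds b_i.
    omega = 10.0 ** (0.1 * k)                              # Step 2 (assumed exponent)
    X = np.random.uniform(-1.0, 1.0, size=len(b))          # Step 3: random vector in [-1, 1]^n
    D = (-1.0 + (1.0 + omega) ** np.abs(X)) / omega        # Step 4
    return b * D * np.sign(X)                              # Steps 5-7: restore sign, scale by b_i

b = np.array([10.0, 10.0, 10.0])
print(msa_step(0, b), msa_step(50, b))                     # the step concentrates near 0 as k grows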
The gradient line-search algorithm (GL) has been listed in Algorithm 1 and a modified simulated annealing algorithm (MSA) is illustrated by Algorithm 3.
Algorithm 3 Modified Simulated-Annealing “MSA”.
Input:  x_ac, f_ac, N and T.       ▹ T is the control parameter (temperature)
Output: x_best, the best point among the N points, and its value f_best
 1: for  k = 0 → N  do
 2:     x_k = x_ac + ψ_k, using Equation (5).
 3:    Compute Δf = f(x_k) − f_ac.
 4:    if  Δf < 0  then
 5:       Set x_ac^k ← x_k, f_ac^k ← f(x_k).
 6:    else
 7:       Generate a random number β ∈ (0, 1)
 8:       if  β < e^{−Δf/T}  then
 9:         Set x_ac^k ← x_k, f_ac^k ← f(x_k).
10:       end if
11:    end if
12: end for
13: return x_ac and its function value f_ac.         ▹ f_ac = f(x_ac).
where N is the maximum number of possible trials (the length of the Markov chains of MSA) and T is the control parameter (temperature). For more details about the MSA algorithm, please see [23].
For a detailed description of the simulated annealing algorithm SA see for example [18,29,30,31].
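The acceptance rule of Algorithm 3 can be sketched in Python as follows (the step generator from the previous sketch is repeated so the block runs standalone; the toy objective is invented for illustration).

import numpy as np

def msa_step(k, b):
    # Trial step of Algorithm 2 (repeated here so this sketch is self-contained).
    omega = 10.0 ** (0.1 * k)
    X = np.random.uniform(-1.0, 1.0, size=len(b))
    return b * np.sign(X) * (-1.0 + (1.0 + omega) ** np.abs(X)) / omega

def msa(f, x_ac, b, N, T):
    # Modified simulated annealing inner loop: a sketch of Algorithm 3.
    f_ac = f(x_ac)
    for k in range(N):
        x_k = x_ac + msa_step(k, b)                        # Equation (5): trial point
        delta = f(x_k) - f_ac
        # Accept improvements always; accept uphill moves with probability exp(-delta/T).
        if delta < 0 or np.random.rand() < np.exp(-delta / T):
            x_ac, f_ac = x_k, f(x_k)
    return x_ac, f_ac

f = lambda x: float(np.sum(x ** 2))                        # toy objective for illustration
print(msa(f, np.array([5.0, -3.0]), b=np.array([10.0, 10.0]), N=100, T=10.0))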
As we have mentioned above, Algorithm 1 (the gradient line-search algorithm GL) is hybridized with Algorithm 3 (the modified simulated annealing algorithm MSA) to get the GLMSA algorithm that solves the unconstrained optimization problem.
In the next section, the GLMSA algorithm is guided to solve Problem (1) by using the penalty function method. There are many methods for handling the existence of the constraints in the constrained problem.

3. Constraints Handling

The algorithms which have been proposed to solve unconstrained optimization problems are unable to deal directly with constrained optimization problems. There are several approaches proposed to handle the existence of the constraints, see for example [27,32,33]. The most popular of them is the penalty function method.
The penalty function method is a successful technique for handling constraints [27,34,35].

3.1. Penalty Function Methods

The penalty methods have been the most widely studied and used due to their simplicity of implementation. The main distinction among penalty function methods is the degree to which each constraint is penalized [28]. There are several types of penalty methods that are used to penalize the constraints in constrained optimization problems.
Three groups of penalty function methods are most popular; the first one is a group of methods of static penalties. In these methods, the penalty parameter does not depend on the current iteration, i.e., parameters remain constant through the evolutionary process [24,36].
The second one is a set of methods of dynamic penalties. In these methods the penalty parameters are usually dependent on the current iteration, in other words, the penalty parameters are functions in the iteration k, i.e., they are non-stationary. See [24,37,38].
The third is a set of methods of adaptive penalties; in this group penalty parameters are updated for every iteration [24].
The next section presents a suggested penalty function method with dynamic and adaptive parameters.

Proposed Penalty Function Method

This section shows how Problem (1) is transformed into an unconstrained optimization problem subject only to simple bounds, as follows:
$$\min_{x \in \mathbb{R}^n} \theta(x, r) = f(x) + r\, p(x), \quad \text{s.t.} \quad a_i \le x_i \le b_i,\; i = 1, 2, \dots, n, \tag{6}$$
where f ( x ) is the original objective function in Problem (1), r is a penalty parameter. The penalty term p ( x ) is defined by:
$$p(x) = \sum_{l=1}^{q} \max\{0, g_l(x)\}^2 + \sum_{j=1}^{m} |h_j(x)|^2. \tag{7}$$
The difference between the penalty function methods is in the way of defining the penalty term and its parameter r [24].
The penalty function methods force infeasible points toward the feasible region by stepwise increasing the penalty parameter r that multiplies the penalizing function p(x).
Therefore, the solution $x^*$ that minimizes the objective function of Problem (6) also minimizes the objective function of Problem (1), i.e., as $k \to \infty$ and $r_k \to \infty$, $x^*$ approaches the feasible region and $r_k\, p(x) \to 0$ [28].
In this paper, the penalty function method has two parameters—the first one is r which penalizes the inequality constraint that is violated, i.e., when g l ( x ) > 0 . The second parameter is t which penalizes the equality constraint h j ( x ) whose value is not equal to zero.
Accordingly, the θ ( x , r ) function is defined by:
$$\theta(x, r) = f(x) + \frac{r}{2} p_1(x) + \frac{t}{2} p_2(x), \tag{8}$$
where $p_1(x) = \sum_{l=1}^{q} \max\{0, g_l(x)\}^2$, $p_2(x) = \sum_{j=1}^{m} |h_j(x)|^2$, and r and t are the parameters for the inequality and equality constraints, respectively.
The parameters r and t are updated at each iteration k as follows.
$$r_{k+1} = r_k + \varphi_k \Phi_k, \qquad t_{k+1} = t_k + 1, \tag{9}$$
where the parameter φ k is updated by:   
$$\varphi_k = \begin{cases} 0 & \text{if } g_l(x) \le 0, \\ 2 & \text{otherwise}. \end{cases} \tag{10}$$
The parameter $\varphi_k$ is an adaptive parameter, i.e., when the candidate solutions are out of the feasible region, $\varphi_k$ penalizes a violated constraint by multiplying the term $\Phi_k$ by 2, where $r_0 = 1$ is the initial value of r. The parameter $\Phi$ is updated as follows: $\Phi_{k+1} = \Phi_k + 1$, with $t_0 = 1$ as the initial value of t.
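The following Python sketch evaluates θ(x, r) of Equation (8) and applies the update rules (9) and (10); the constraint functions in the usage lines are placeholders taken from the first example of the next subsection, not one of the benchmark problems.

import numpy as np

def theta(x, f, g, h, r, t):
    # Penalized objective of Equation (8): f + (r/2) p1 + (t/2) p2.
    p1 = np.sum(np.maximum(0.0, g(x)) ** 2)        # inequality violations
    p2 = np.sum(np.abs(h(x)) ** 2)                 # equality violations
    return f(x) + 0.5 * r * p1 + 0.5 * t * p2

def update_parameters(x, g, r, t, Phi):
    # Updates (9) and (10): r grows only while some inequality constraint is violated.
    phi = 2.0 if np.any(g(x) > 0.0) else 0.0       # Equation (10)
    return r + phi * Phi, t + 1.0, Phi + 1.0       # r_{k+1}, t_{k+1}, Phi_{k+1}

# Usage with r0 = t0 = Phi0 = 1, as in the text.
f = lambda x: x[0] ** 2 - 3.0
g = lambda x: np.array([0.5 - 0.5 * x[0]])         # g(x) <= 0
h = lambda x: np.array([])                         # no equality constraints here
r, t, Phi = 1.0, 1.0, 1.0
x = np.array([0.0])                                # infeasible point: g(x) = 0.5 > 0
print(theta(x, f, g, h, r, t))
print(update_parameters(x, g, r, t, Phi))          # r is increased because g is violated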
Note: The equality constraint is more difficult than the inequality constraint because the size of the feasible region of the equality constraint is smaller than the size of the feasible region of the inequality constraint. For example, consider $f(x, y) = xy$ s.t. $h(x, y) = x^2 + y^2 - 1 = 0$ and $f(x, y) = xy$ s.t. $g(x, y) = x^2 + y^2 - 1 \le 0$. The first problem is much harder than the second because in the first problem the feasible region is the circumference of the circle, while in the second problem the feasible region is the whole disk. So, the parameter $t_k$ must be chosen carefully.

3.2. Mechanism of Working of the Penalty Function Method

The penalty method solves the general Problem (1) through a succession of unconstrained optimization problems.
Let us discuss two examples in order to illustrate how the parameters of the penalty function are run.
The first example is very easy (one dimension): minimize $f(x) = x^2 - 3$ subject to $g(x) = 0.5 - 0.5x \le 0$, where $S = [-6, 6]$ is the search domain.
If we want to find the optimal solution of the objective function $f(x) = x^2 - 3$ as an unconstrained problem, it is clear that the global solution is the point $x^* = 0$, with $f(x^*) = -3$, for $x \in \mathbb{R}$. But when we want to find the optimal solution of $f(x) = x^2 - 3$ subject to $g(x) \le 0$, the problem becomes much harder, because we have to find the point $x^*$ that minimizes $f(x)$ and at the same time satisfies the constraint $g(x) \le 0$; this is why we need to apply the penalty function.
Hence, the problem $f(x) = x^2 - 3$ subject to $g(x) = 0.5 - 0.5x \le 0$ is transformed into $\theta(x, r) = x^2 - 3 + \frac{r}{2}\left(\max\{0, (\tfrac{1}{2} - 0.5x)\}\right)^2$. If $g(x) > 0$ ($g(x)$ is violated), the first derivative of $\theta(x, r)$ is $\frac{d\theta(x, r)}{dx} = 1 - \frac{r}{2}(\tfrac{1}{2} - 0.5x)$; then $1 - \frac{r}{2}(\tfrac{1}{2} - 0.5x) = 0$ gives $x^* = 1 - \frac{4}{r}$. When $r = \{1, 2, 3, \dots, \infty\}$, then $x^* = \{-3, -1, -\tfrac{1}{3}, \dots, 1\}$, $f(x^*) = \{6, -2, -\tfrac{26}{9}, \dots, -2\}$ and $g(x^*) = \{2, 1, \tfrac{2}{3}, \dots, 0\}$, i.e., when $r \to \infty$, $x^* \to 1$, $g(x^*) \to 0$, $r\, p(x^*) \to 0$, $f(x^*) \to -2$, and $\theta(x^*, r) \to -2$.
Hence, the optimal point is $x^* = 1$, such that $f(x^*) = -2$ and the constraint $g(x^*) = 0$ is satisfied.
Figure 1 illustrates the behavior of the penalty functions $r p_1(x)$, $r p_2(x)$ and $r p_3(x)$, the objective function $f(x)$ of the original problem (constrained problem) and the objective function $\theta(x, r)$ of the transformed problem (unconstrained problem) for all $x \in S = [-6, 6]$.
Example 2: minimize $f(x, y) = -xy$ s.t. $g(x, y) = x + 2y - 4 \le 0$; the transformed function is $\theta(x, y, r) = -xy + \frac{r}{2}\left(\max\{0, (x + 2y - 4)\}\right)^2$. If $g(x, y) > 0$ ($g(x, y)$ is violated), the gradient vector of $\theta$ is $\nabla\theta(x, y) = [\,-y + r(x + 2y - 4),\; -x + 2r(x + 2y - 4)\,]$; hence $(x^*, y^*) = \left(2 - \frac{2}{1 - 4r},\, 1 - \frac{1}{1 - 4r}\right)$, and $(x^*, y^*) \to (2, 1)$ as $r \to \infty$. This is why the parameters $r_k$ and $t_k$ must be allowed to increase as long as there exists a violated constraint, i.e., as long as the search process is in an infeasible region.
To ensure that the process of searching for the optimal solution remains within the search domain, the interior-point algorithm is used. Therefore, the next section presents a brief description of this technique.

3.3. Interior-Point Method

The interior-point method is used in this paper when simple bounds exist in the test problem. The interior-point technique ensures that the candidate solution lies inside the feasible region. It is used as follows: at each iteration k, a damping parameter $\tau_k$ is applied to ensure that $x_{k+1}$ is feasible with respect to the limits $a_i \le x_i \le b_i$, $i = 1, 2, \dots, n$ and $k = 1, 2, \dots, M$ of the inner loop of Algorithm 4, ref. [39].
Algorithm 4 Guided Hybrid Modified Simulated-Annealing Algorithm (GHMSA).
Input: f(x), g_l(x) and h_d(x): R^n → R, x_0 ∈ R^n, M, T, T_f, T_out, ε, r_0, Φ_0 and t_0.
 1: set x_ac = x_0   ▹ at the beginning we accept the initial point x_0 as an optimal solution.
 2: compute θ(x_ac) = f(x_ac) + (r_k/2) p_1(x_ac) + (t_k/2) p_2(x_ac)           ▹ Using Formula (8).
 3: set θ_b = θ(x_ac) and θ_δ = 1. ▹ The values of θ_b and θ_δ are updated after M iterations.
 4: while ( T > T_f and θ_δ > ε ) or ( T > T_out ) do         ▹ T_out < T_f are used as stopping criteria.
 5:     for  k = 0 to M do
 6:       compute θ(x_ac) = f(x_ac) + (r_k/2) p_1(x_ac) + (t_k/2) p_2(x_ac).
 7:       set θ_ac = θ(x_ac).
 8:       compute x_1 = x_ac + d_k.            ▹ d_k is computed by (16).
 9:       ensure that the point x_1 lies inside [a, b]^n by Formula (14), then evaluate θ(x_1) by Formula (8).
10:       compute Δθ = θ(x_1) − θ_ac
11:       if  Δθ < 0  then
12:         go to Algorithm 1.
13:       else
14:         go to Formula (5) to generate another point.
15:       end if
16:     end for
17:     compute Φ_{k+1} = Φ_k + 1        ▹ here update the penalty parameters.
18:      T = r_T T         ▹ decrease the temperature, where r_T = 0.8.
19:     compute θ_δ = |θ_b − θ_ac| and set θ_b ← θ_ac. ▹ θ_δ is a stopping criterion when the solutions converge to an accumulation point over the iterations.
20: end while
21: Set x_g ← x_ac, θ_g ← θ_ac
22: return x_g, the global minimizer, and the value of the objective function θ(x_g) at x_g.
The damping parameter τ k is defined to be:
$$\tau_k = \min\left\{1, \min_i \{u_k^i, v_k^i\}\right\}, \tag{11}$$
where
$$u_k^i = \begin{cases} \dfrac{a_i - x_k^i}{\Delta x_k^i} & \text{if } a_i > -\infty \text{ and } \Delta x_k^i < 0, \\ 1 & \text{otherwise}, \end{cases} \tag{12}$$
$$v_k^i = \begin{cases} \dfrac{b_i - x_k^i}{\Delta x_k^i} & \text{if } b_i < \infty \text{ and } \Delta x_k^i > 0, \\ 1 & \text{otherwise}, \end{cases} \tag{13}$$
where $a_i$ and $b_i$ are the lower and upper bounds of the domain of the problem, respectively, $i = 1, 2, \dots, n$, n is the number of variables of the function in the problem, $x_k^i$ is the i-th component of the variable x at iteration k, and $\Delta x_k$ denotes the step obtained by either Formula (2) or Formula (5).
Since the sequence $\{x_k\}$ is required to satisfy $a < x_k < b$ for all k, the point $x_{k+1}$ is computed by:
$$x_{k+1} = x_k + 0.99\, \tau_k\, \Delta x_k, \tag{14}$$
where the constant 0.99 is a damping factor that keeps $x_{k+1}$ strictly feasible with respect to the domain of the function in the problem.
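A direct Python transcription of the damping rule (11)–(14) might look as follows (a sketch only; infinite bounds are represented by ±np.inf, and the sample point and step are invented).

import numpy as np

def damp_step(x, dx, a, b):
    # Damping parameter tau_k of Equations (11)-(13) and the damped update (14).
    u = np.ones_like(x)
    v = np.ones_like(x)
    lower = (a > -np.inf) & (dx < 0)               # components moving toward a finite lower bound
    upper = (b < np.inf) & (dx > 0)                # components moving toward a finite upper bound
    u[lower] = (a[lower] - x[lower]) / dx[lower]
    v[upper] = (b[upper] - x[upper]) / dx[upper]
    tau = min(1.0, np.min(np.minimum(u, v)))       # Equation (11)
    return x + 0.99 * tau * dx                     # Equation (14): stay strictly inside [a, b]

x  = np.array([1.0, 4.5])
dx = np.array([-3.0, 2.0])                         # a full step that would leave the box
a  = np.array([0.0, 0.0])
b  = np.array([5.0, 5.0])
print(damp_step(x, dx, a, b))                      # the damped point remains inside the bounds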

4. The Proposed Algorithm for Solving Constrained Optimization Problems (GHMSA)

According to the above procedures, the GLMSA Algorithm becomes capable of solving Problem (1) as a constrained optimization problem by solving Problem (6) as an unconstrained optimization problem. Hence, some changes are made to the objective function θ(x, r) in Problem (6) to fit the first step of the GLMSA Algorithm, as follows.
  • the function f ( x ) is replaced by the function θ ( x , r ) defined in Equation (8), and then calculate
    $$\alpha_k = \frac{\theta(x_{ac})}{\| g(x_{ac}, r_k) \|_2^2}, \tag{15}$$
    where x a c is the accepted solution at iteration k,
    $$d_k = -|\alpha_k|\, g(x_{ac}, r_k), \tag{16}$$
    where the parameter $r_k$ may denote r only, t only, or both together, according to the type of constrained optimization problem; for example, if the problem contains mixed inequality and equality constraints, then $r_k = (r_k, t_k)$.
  • if the constrained problem contains simple bounds, we use Formula (14) to keep the new point inside these bounds.
In light of the above procedures, we rename the GLMSA Algorithm the “Guided Hybrid Modified Simulated-Annealing Algorithm” with the abbreviation “GHMSA”.

Setting Parameters of GHMSA Algorithm

The choice of a cooling schedule has an important impact on the performance of the simulated-annealing algorithm. The cooling schedule includes two terms: the initial value of the temperature T and the cooling coefficient r T which is used to reduce T. Many suggestions have been proposed in the literature for determining the initial value of the temperature T and the cooling coefficient r T , see for example [4,18,40,41,42].
In general, it is widely agreed that the initial temperature T must be sufficiently high (to ensure escape from local points) and that $r_T \in (0.1, 1)$ [7,43,44]. In this section, we suggest that the initial value of T be related to the number of variables and the value of $f(x)$ at the starting point $x_0$. The cooling coefficient is taken to be $r_T \in [0.8, 1)$ to decrease the temperature T slowly.
Therefore, the parameters used in Algorithm 4 are presented as follows. M is the inner loop maximum number of iterations, T is the control parameter (Temperature), T o u t is a final value of T, r T is the cooling coefficient and T f is a final value of T if it is sufficiently small.
The setting of the parameters is as follows: $T = 10^4$, $\varepsilon = 10^{-6}$, $T_f = 10^{-14}$, $T_{out} = 10^{-20}$, $r_T = 0.8$, and $M = 10n$.
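Putting the pieces together, the control flow of Algorithm 4 with the parameter values just listed can be sketched in Python as below (a simplified, self-contained illustration only, not the MATLAB implementation used in the experiments; the gradient of θ is supplied by the caller, and the usage problem is invented).

import numpy as np

def ghmsa(f, g, h, grad_theta, x0, a, b, M, T=1e4, Tf=1e-14, Tout=1e-20, rT=0.8, eps=1e-6):
    # Simplified sketch of Algorithm 4; r, t and Phi follow the updates (9)-(10).
    r, t, Phi = 1.0, 1.0, 1.0
    theta = lambda x: (f(x) + 0.5 * r * np.sum(np.maximum(0.0, g(x)) ** 2)
                            + 0.5 * t * np.sum(np.abs(h(x)) ** 2))
    clip = lambda x: np.clip(x, a, b)                       # stand-in for Formula (14)
    x_ac = np.asarray(x0, dtype=float)
    theta_b, theta_d = theta(x_ac), 1.0
    while (T > Tf and theta_d > eps) or (T > Tout):
        for k in range(M):
            th_ac = theta(x_ac)
            gr = grad_theta(x_ac, r, t)
            d = -abs(th_ac / max(np.dot(gr, gr), 1e-12)) * gr   # Formulas (15)-(16)
            x1 = clip(x_ac + d)
            if theta(x1) < th_ac:                           # first approach: gradient step accepted
                x_ac = x1
            else:                                           # second approach: MSA step, Equation (5)
                omega = 10.0 ** (0.1 * k)
                X = np.random.uniform(-1.0, 1.0, size=len(b))
                psi = b * np.sign(X) * (-1.0 + (1.0 + omega) ** np.abs(X)) / omega
                x2 = clip(x_ac + psi)
                dlt = theta(x2) - th_ac
                if dlt < 0 or np.random.rand() < np.exp(-dlt / T):
                    x_ac = x2
        phi = 2.0 if np.any(g(x_ac) > 0.0) else 0.0         # update the penalty parameters
        r, t, Phi = r + phi * Phi, t + 1.0, Phi + 1.0
        T *= rT                                             # cool the temperature
        theta_d, theta_b = abs(theta_b - theta(x_ac)), theta(x_ac)
    return x_ac, f(x_ac)

# Usage on a tiny problem: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 <= 0, bounds [0, 5]^2.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: np.array([x[0] + x[1] - 2.0])
h = lambda x: np.array([])
def grad_theta(x, r, t):
    return (np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
            + r * max(0.0, x[0] + x[1] - 2.0) * np.array([1.0, 1.0]))
a, b = np.zeros(2), np.full(2, 5.0)
print(ghmsa(f, g, h, grad_theta, x0=[4.0, 4.0], a=a, b=b, M=20))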

5. Numerical Results

To test the effectiveness and efficiency of the proposed algorithm, the algorithm is run on some test problems. The test problems are divided into two sets. The first set of test problems are taken from [45]. They are 24 well-known constrained real-parameter optimization problems. The objective functions in these problems take different shapes and the number of variables is between 2 and 24. These test problems also contain four types of constraints as follows: (LI) denotes a linear inequality, (LE) is a linear equality, (NI) refers to a nonlinear inequality, and (NE) denotes a nonlinear equality. They are listed in Table 1, where f ( x * ) is the best known optimal function value and a denotes the active constraint number at the known optimal solution. “The information mentioned in Table 1 is taken from [46]”.
The GHMSA Algorithm solved 18 test problems out of the 24 because the other problems are either not continuous or not differentiable. The second set of test problems contains four known non-linear engineering design optimization problems. These test problems do not have known exact solutions.

5.1. Results of “GHMSA” Algorithm

The GHMSA algorithm is programmed in MATLAB version 8.5.0.197613 (R2015a) and run on a personal laptop; the machine epsilon is about $1 \times 10^{-16}$.
The results of our algorithm are compared against the results of the CB-ABC Algorithm in [47], the CCiALF Algorithm in [48], the NDE Algorithm in [49] and the CAMDE Algorithm in [50].
Liang et al. [45] suggested that the achieved function error values of the obtained optimal solution x after $5 \times 10^3$, $5 \times 10^4$ and $5 \times 10^5$ function evaluations (FES) be summarized in terms of {Best, Median, Worst, c, $\bar{v}$ ($\bar{v} = p(x)/(q+m)$, where p(x) is the penalty term in Equation (7)), Mean, s.d}.
The results are listed in Table 2, Table 3 and Table 4; where c is a concatenation of three numbers indicating the violated constraint number at the median solution by more than 1.0, between 0.01 and 1.0, and between 0.0001 and 0.1, respectively. v ¯ is the mean value of the violations of all constraints at the median solution. The numbers in the parenthesis after the error value of the Best, Median, Worst solution are the constraint numbers not satisfying the feasible condition of the Best, Median, and Worst solutions, respectively. Table 2, Table 3 and Table 4 denote that the GHMSA can determine feasible solutions at each run utilizing 5 × 10 3 FES for 12 test problems {G01, G03, G04, G06, G08, G09, G10, G12, G13, G16, G18, G24}. As for problems G11, G14 and G15, the GHMSA Algorithm finds feasible solutions by using 5 × 10 4 FES. For the other three test problems, {G05, G07, G19}, the GHMSA Algorithm is able to reach feasible solutions by using 5 × 10 5 FES.
Assume that if the result x is a feasible one satisfying $f(x) - f(x^*) \le 0.0001$, then x is in a neighborhood of (near-optimal to) the optimal point $x^* = x_g$. Table 2, Table 3 and Table 4 indicate that the GHMSA Algorithm can get near-optimal points for six problems, {G01, G04, G06, G08, G12, G24}, by using only $5 \times 10^3$ FES, for {G03, G11, G13, G14, G15, G16, G18} by using only $5 \times 10^4$ FES, and for {G07, G09, G13, G19} by using only $5 \times 10^5$ FES. However, the GHMSA Algorithm failed to satisfy $f(x) - f(x^*) \le 0.0001$ for two problems, {G05, G10}. As suggested by [45], Table 5 presents the Best, Median, Worst, Mean, and s.d values of successful runs, the feasible rate, the success rate, and the success performance over 40 runs. Let us define the following:
Feasible run: A run through which at least one feasible solution is found in Max FES.
Successful run: A run during which the algorithm finds a feasible solution x satisfying $f(x) - f(x^*) \le 0.0001$.
Feasible rate (f.r) = (# of feasible runs)/(# of total runs).
Success rate (s.r) = (# of successful runs)/(# of total runs).
Success performance (s.p) = mean (FES for successful runs) × (# of total runs)/(# of successful runs).
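As a small illustration, the three quantities defined above can be computed as follows (the run records here are invented placeholders, not results of the paper).

# Feasible rate, success rate and success performance for a batch of runs.
# Each run is recorded as (found_feasible, succeeded, fes_used); the numbers are placeholders.
runs = [(True, True, 4800), (True, False, 50000), (True, True, 5200), (False, False, 50000)]

total = len(runs)
feasible_rate = sum(r[0] for r in runs) / total
success_rate = sum(r[1] for r in runs) / total
successful_fes = [r[2] for r in runs if r[1]]
success_performance = (sum(successful_fes) / len(successful_fes)) * total / len(successful_fes)

print(feasible_rate, success_rate, success_performance)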
Table 5 shows that the GHMSA Algorithm obtains a 100% feasible rate and success rate for all 18 problems with the exception of problems G05 and G10.
For achieving the success condition during the view of success performance in Table 5, the GHMSA Algorithm needs:
(1)
117 ≤ FES ≤ $4.924 \times 10^3$ for 5 problems, i.e., {G01, G04, G08, G12, G24}.
(2)
3628 ≤ FES ≤ $1.4 \times 10^4$ for 4 problems, i.e., {G03, G06, G11, G16}.
(3)
9258 ≤ FES ≤ 90,720 for 4 problems, i.e., {G13, G14, G15, G18}.
(4)
34,773 ≤ FES ≤ 500,559 for 3 problems i.e., {G07, G09, G19}.
The GHMSA Algorithm failed to achieve the success condition for two problems, i.e., {G05, G10}. More information about the performance of the GHMSA Algorithm on these problems is given in Figure 2, Figure 3 and Figure 4. We have plotted the relationship between $\log_{10}(f(x) - f(x^*))$ and FES to show the convergence of the GHMSA at the median run over 40 independent runs. The convergence graphs in Figure 2, Figure 3 and Figure 4 show that the error values decrease dramatically with increasing FES for all test problems.

5.2. Performance of GHMSA Algorithm Using Statistical Hypothesis Testing

In this section, we use statistical hypothesis testing to evaluate the efficiency of the GHMSA Algorithm versus the efficiency of the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms.
A statistical hypothesis is a conjecture about a population parameter. This conjecture may or may not be true. The null hypothesis, denoted by $H_0$, is a statistical hypothesis stating that there is no difference between a parameter and a specific value, or that there is no difference between two parameters. The alternative hypothesis, denoted by $H_a$, is a statistical hypothesis stating that there is a specific difference between a parameter and a specific value, or that there is a difference between two parameters. Hypothesis testing is a form of inferential statistics which allows us to draw conclusions about a whole population based on a representative sample [51]. Parametric tests can produce reliable results even if the continuous data are skewed and non-normally distributed, provided the sample size is greater than 30. The one-sample t-test is a parametric test used to compare the mean (average) of a sample with the mean of a population. The important conditions for using the one-sample t-test are independence and normality (or sample size > 30). In our study the sample size is 50, i.e., the algorithm is run 50 times from random starting points; this criterion is suggested by [45]. The confidence level in this study is 95%, i.e., $\alpha = 0.05$. Our hypotheses are formulated as follows:
H 0 : the mean (average) of the results of the GHMSA Algorithm and the mean (average) of the results of other algorithms are equal.
H a : the mean (average) of the results of the GHMSA Algorithm and the mean of the results of other algorithms are different.
The above hypotheses can be formulated in Equation (17).
$$H_0: Me_{GHMSA} = Me_{Algorithm_l}, \qquad H_a: Me_{GHMSA} \ne Me_{Algorithm_l}, \tag{17}$$
where l denotes one of the algorithms, CB-ABC, CCiALF, NDE and CAMDE, and M e denotes the average results of the algorithms.
In order to compare the performance of the GHMSA Algorithm with the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms, the t-test with a significance level of α = 0.05 is performed. To perform the t-test, the hypotheses in Equation (17) are considered.
Statistical processing is performed using the SPSS Program. Rejecting or accepting $H_0$ is based on the p-value (Sig. (2-tailed)) according to Column 1 of Table 6, while the performance of the algorithm, based on the value of the t-statistic, is given in Column 3 of Table 6. Hence, Column 4 of Table 6 takes three values according to the probabilities in (18).
$$\text{Decision} = \begin{cases} 1 & \text{if } Me_{GHMSA} < Me_{Algorithm_l}, \\ -1 & \text{if } Me_{GHMSA} > Me_{Algorithm_l}, \\ 0 & \text{if } Me_{GHMSA} = Me_{Algorithm_l}. \end{cases} \tag{18}$$
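For reference, the pairwise comparison described by (17) and (18) can be sketched with SciPy's one-sample t-test as follows (the sample of GHMSA runs and the reference mean are invented placeholders; scipy.stats.ttest_1samp tests a sample mean against a fixed reference mean, which stands in here for the published mean of the compared algorithm).

import numpy as np
from scipy import stats

def compare(ghmsa_runs, other_mean, alpha=0.05):
    # Decision rule of (17)-(18): one-sample t-test of the GHMSA runs against a reference mean.
    res = stats.ttest_1samp(ghmsa_runs, popmean=other_mean)
    if res.pvalue >= alpha:            # fail to reject H0: the two means are considered equal
        return 0
    return 1 if np.mean(ghmsa_runs) < other_mean else -1   # the smaller mean wins (minimization)

ghmsa_runs = np.random.normal(loc=-30.002, scale=1e-4, size=50)   # placeholder sample of 50 runs
print(compare(ghmsa_runs, other_mean=-30.001))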
The results of the GHMSA are compared to the results of the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms. The statistical hypotheses in Equation (17) are tested by using the t-test. Table 7, Table 8, Table 9 and Table 10 present these results.
The results of the GHMSA are compared against the four meta-heuristic algorithms from the literature. The results of the statistical tests are presented in Table 7, Table 9 and Table 10. In Table 7, Column 1 gives the abbreviation of the test problems, denoted by pr. Column 2 presents the statistical results, which include {b.s, mean, s.d, Decision}, where Decision denotes the wins, losses and draws of the GHMSA compared with the other algorithms. Columns 3–7 give the results of the five algorithms. Table 9 and Table 10 are organized similarly to Table 7.
After executing the pairwise t-test for all algorithms, if the GHMSA Algorithm is superior, inferior or equal to the compared algorithm, denoted by $algorithm_l$, then the decision is set to 1, –1 and 0, respectively, as shown in Table 6. The left of Figure 5 summarizes the results presented in Table 7, Table 8, Table 9 and Table 10 regarding the decision. The left of Figure 5 shows that the GHMSA Algorithm was superior on {7, 6, 9, 5} problems, equal on {6, 6, 3, 5} problems and inferior on {4, 5, 5, 7} problems compared to the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms, respectively. Although the GHMSA is inferior on seven problems compared to the CAMDE, the GHMSA needs 1,590,905 total FES versus the 4,320,000 needed by the CAMDE, as shown in Figure 6. To achieve the success condition from the point of view of successful execution, the GHMSA needs less than $5 \times 10^3$ FES for five problems, i.e., {G01, G04, G08, G12, G24}, whereas the CAMDE needs at least $5 \times 10^3$ FES for two problems, i.e., {G08, G12}. We can say that the percentages of superior, equal and inferior results of the GHMSA are 40%, 30% and 30%, respectively.
For the four engineering problems, we give a brief description. The pressure vessel problem is a practical problem that is often used as a benchmark problem for testing optimization algorithms [52]. The left of Figure 7 shows the structure of this issue, where a cylindrical pressure vessel is capped at both ends by hemispherical heads. The aim of the problem is to find the minimum total cost of fabrication, including costs from a combination of welding, material and forming. The thickness of the cylindrical skin, x 1 ( T s ) , thickness of the spherical head, x 2 ( T h ) , the inner radius, x 3 ( R ) , and the length of the cylindrical segment of the vessel, x 4 ( L ) , were included as the optimization design variables of the problem. The GHMSA Algorithm obtains these results: x G H M S A = {0.778168641375105, 0.384649162627902, 40.3196187240987, 200}, i.e., f ( x G H M S A ) = 5885.332774, c = {0, – 3.8858 × 10 16 , 1.1642 × 10 97 , −40}, i.e., v ¯ = 0; the left of Figure 8 shows a convergence graph of the GHMSA to the best solution for this problem.
Another well-known engineering optimization task is the design of a tension (compression spring) for a minimum weight. This problem has been studied by several authors. For example, [52]. The right of Figure 7 shows a tension (compression spring) with three design variables. It needs to minimize the weight of a tension (compression string) subject to constraints on minimum deflection, shear stress, surge frequency, limits on outside diameter and on design variables. The design variables are the wire diameter, d ( x 1 ) , the mean coil diameter, D ( x 2 ) , and the number of active coils, P ( x 3 ) . The GHMSA obtains these results: x G H M S A = {0.0516890825110813, 0.356718255308635, 11.2889355307237}, i.e., f ( x G H M S A ) = 0.01266523279, c = { 1.55 × 10 10 , 4.44 × 10 16 , –4.05379, –0.72773}, i.e., v ¯ = 1.11 × 10 16 . The convergence graph for Engp2 is presented on the right of Figure 8. The welded beam design optimization problem has been solved by many researchers [52]. The left of Figure 9 shows the welded beam structure which consists of a beam A and the weld required to hold it to member B. The goal of this problem is to minimize the overall cost of fabrication, subject to some constraints. This problem has four design variables— x 1 , x 2 , x 3 and x 4 —with constraints of shear stress τ , bending stress in the beam σ , buckling load on the bar P c , and end deflection on the beam δ . The GHMSA obtains these results: x G H M S A = { 0.205729642092758 , 3.4704886133955 , 9.03662391715327 , 0.205729639752274 } , i.e., f ( x G H M S A ) = 1.7248523060 , c = {– 9.03 × 10 08 , – 4.02 × 10 05 , 2.34 × 10 09 , –3.43298, –0.08073, –0.23554, – 8.73 × 10 09 }, i.e., v ¯ = 3.3429 × 10 10 . The convergence graph for Engp3 is presented by the left of Figure 10.
The speed reducer design problem is one of the benchmark structural engineering problems [52]. It has seven design variables as described in the right of Figure 9, with the face width x 1 , module of teeth x 2 , number of teeth on pinion x 3 , length of the first shaft between bearings x 4 , length of the second shaft between bearings x 5 , diameter of the first shaft x 6 , and diameter of the second shaft x 7 . The aim of this problem is to minimize the total weight of the decelerator. The GHMSA obtains these results: x G H M S A = { 3.499999999 , 0.7 , 17 , 7.3 , 7.715319913 , 3.350214666 , 5.286654465 } , i.e., f ( x G H M S A ) = 2994.471066, c = { 0.073915 , 0.198 , 0.49917 , 0.90464 , 8.6365 × 10 11 , 1.0931 × 10 11 , 0.7025 , 2.86 × 10 10 , 0.58333 , 0.051326 , 1 . 944210 10 } , i.e., v ¯ = 2.6 × 10 11 . The convergence graph for Engp4 is presented by the right of Figure 10. The four engineering problems are used to compare the performance of the GHMSA against the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms. Statistical hypotheses in Equation (17) are used to compare the mean of the GHMSA with the means of the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms. Rows 22–41 of Table 10 present the statistical comparisons of the GHMSA versus the four Algorithms for engineering problems Enp1 to Enp4. The right of Figure 5 gives the number of “wins-draws-losses” of the GHMSA compared with the CB-ABC, the CCiALF, the NDE and the CAMDE for Enp1 to Enp4. Figure 11 shows the convergence graph of the standard deviation for problems Enp1, Enp2, Enp3 and Enp4 for the five algorithms. The relation between the four engineering problems {Enp1, Enp2, Enp3 and Enp4} and their values of $\log_{10}(s.d)$ is plotted. From the right of Figure 5 and Figure 11, it can be said that the performance of the GHMSA algorithm is better than the other algorithms for problems Enp1 to Enp4, for the following reasons:
(1) The GHMSA obtains a minimum objective function value of 5885.332774 for engineering problem Enp1 (pressure vessel), and the minimum point $x^*$ is feasible; many of the other algorithms obtained an objective function value equal to or greater than 6059.71; see for example [48,49,50,52,53,54,55,56,57,58].
In addition to that, if $10 \le x_4 (L) < \infty$, then $f(x^*) = 5804.37621675626$; otherwise, if $10 \le x_4 (L) < 208$, then $f(x^*) = 5866.99226593889$, where L is shown in the left of Figure 8.
(2) The right of Figure 5 shows that the GHMSA Algorithm does not lose on any problem versus the other algorithms.
(3) The GHMSA is superior at {2, 1, 4, 2} problems versus the CB-ABC Algorithm, the CCiALF Algorithm, the NDE and the CAMDE Algorithm, respectively.
(4) The GHMSA is equal at {2, 3, 0, 2} problems versus the CB-ABC Algorithm, the CCiALF Algorithm, the NDE and the CAMDE Algorithm, respectively.
(5) Figure 11 shows that the GHMSA Algorithm converges to zero for the standard deviation (s.d). See the green color.

6. Conclusions and Future Work

The unconstrained nonlinear optimization algorithms have been guided to find the global minimizer of the constrained optimization problem. As a result, the "GHMSA" algorithm has been proposed for finding the global minimizer of the non-linear constrained optimization problem. The "GHMSA" algorithm contains a new technique that is applied to convert the constrained optimization problem into an unconstrained optimization problem. The results of the algorithm demonstrate that the proposed penalty function is a good technique for making the unconstrained algorithm able to deal with the constrained optimization problem. The interior-point algorithm keeps the candidate solutions inside the search domain. The results on a set of nonlinear constrained optimization problems and four non-linear engineering optimization problems show that the GHMSA algorithm has superiority over the other four algorithms on some test problems. For future work, the proposed algorithm can be enhanced and modified to solve multi-objective problems, and the convergence analysis of the modified simulated annealing algorithm will be performed.
Moreover, future work will consider proposing a new derivative-free approximation of the gradient vector that will be combined (hybridized) with a new simulated annealing algorithm to solve unconstrained, constrained, or multi-objective optimization problems. Convergence analysis of the GLMSA and GHMSA algorithms will also be considered in future work.

Author Contributions

M.E.-A.; Formal analysis, M.A.; Funding acquisition, K.A.A.; Investigation, M.A.; Methodology, A.W.M.; Project administration, K.A.A.; Resources, K.A.A.; Supervision, A.W.M.; Validation, M.E.-A.; Writing—original draft, S.M.; Writing—review & editing, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

The Research is funded by Researchers Supporting Program at King Saud University, (Project# RSP-2021/305).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors present their appreciation to King Saud University for funding the publication of this research through Researchers Supporting Program (RSP-2021/305), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CB-ABC    Crossover-Based Artificial Bee Colony Algorithm
CCiALF    Cooperative Coevolutionary Differential Evolution Algorithm
NDE    A Novel Differential Evolution Algorithm
CAMDE    Adaptive Differential Evolution with Multi-Population-Based Mutation Operators for Constrained Optimization
GLMSA    Gradient Line-Search Modified Simulated-Annealing Algorithm
GHMSA    Guided Hybrid Gradient Modified Simulated-Annealing Algorithm
Symbols
T    Control Parameter (Temperature)
k    Iteration Number
n    Number of Variables
V ∈ [−1, 1]^n    A Random Vector of Dimension n in the Interval [−1, 1]
x_0    Starting Point
x_1    A Point Computed by the GHMSA Algorithm
x_2    A Point Computed by the GHMSA Algorithm
x_ac    The Best Point Accepted by Our Algorithm at Iteration k
θ_ac    Function Value at Point x_ac
θ_1    Function Value at Point x_1
θ_2    Function Value at Point x_2
Δf    The Difference Between the Values f_ac and f_1
M    The Inner Loop Maximum Number of Iterations
ψ    The Step Size Generated by the Second Approach (MSA) in the GHMSA Algorithm
d    The Step Size Generated by the First Approach (Gradient) in the GHMSA Algorithm
β    A Random Number in (0, 1)
r_T    The Cooling Coefficient
T_f    A Final Value of T; It Is Sufficiently Small
T_out    A Final Value of T; T_out < T_f
ε    A Parameter with a Small Value Used as a Stopping Criterion
#pr    Number of Test Problems
x_g    Global Minimizer Found by the GHMSA Algorithm
θ(x_g)    Function Value at the Global Minimum
g(x)    The Gradient Vector
‖g(x_g)‖_2    Norm of the Gradient Vector of θ at x_g
p(x)    Penalty Term
g_i(x)    Inequality Constraint
h_j(x)    Equality Constraint
q    Number of Inequality Constraints
m    Number of Equality Constraints
r    Penalty Parameter for the Inequality Constraints
t    Penalty Parameter for the Equality Constraints
U    Upper Bound of the Feasible Region (Search Domain)
L    Lower Bound of the Feasible Region (Search Domain)
b.s    The Best Solution Found by the Algorithm
w.s    The Worst Solution Found by the Algorithm
s.d    The Standard Deviation
w.b    Absolute Difference Between the Worst and the Best Solution, |worst − best|
e.r    Absolute Difference Between the Best Solution and the Exact One, |best − exact|
e.c    Error Constraint, where e.c = max{0, g_i(x)} + max{0, h_j(x)}
e:v1    The Average of {s.d, w.b, e.r}
e:v2    The Average of {s.d, w.b, e.c}
FES    Function Evaluations
c    A Sequence of 3 Numbers Denoting the Violated Constraint Number at the Median Solution
v̄    The Mean Value of the Violations of All Constraints at the Median Solution
H_0    The Null Hypothesis
H_a    The Alternative Hypothesis
Me    The Average Results

References

  1. Abdel-Baset, M.; Hezam, I. A Hybrid Flower Pollination Algorithm for Engineering Optimization Problems. Int. J. Comput. Appl. 2016, 140, 12. [Google Scholar] [CrossRef]
  2. Ayumi, V.; Rere, L.; Fanany, M.I.; Arymurthy, A.M. Optimization of Convolutional Neural Network using Microcanonical Annealing Algorithm. arXiv 2016, arXiv:1610.02306. [Google Scholar]
  3. Rere, L.; Fanany, M.I.; Arymurthy, A.M. Metaheuristic Algorithms for Convolution Neural Network. Comput. Intell. Neurosci. 2016, 2016, 1537325. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Rere, L.R.; Fanany, M.I.; Murni, A. Application of metaheuristic algorithms for optimal smartphone-photo enhancement. In Proceedings of the 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE), Tokyo, Japan, 7–10 October 2014; pp. 542–546. [Google Scholar]
  5. Samora, I.; Franca, M.J.; Schleiss, A.J.; Ramos, H.M. Simulated annealing in optimization of energy production in a water supply network. Water Resour. Manag. 2016, 30, 1533–1547. [Google Scholar] [CrossRef]
  6. Agrawal, P.; Ganesh, T.; Mohamed, A.W. A novel binary gaining–sharing knowledge-based optimization algorithm for feature selection. Neural Comput. Appl. 2021, 33, 5989–6008. [Google Scholar] [CrossRef]
  7. Certa, A.; Lupo, T.; Passannanti, G. A New Innovative Cooling Law for Simulated Annealing Algorithms. Am. J. Appl. Sci. 2015, 12, 370. [Google Scholar] [CrossRef] [Green Version]
  8. Mohamed, A.A.; Kamel, S.; Hassan, M.H.; Mosaad, M.I.; Aljohani, M. Optimal Power Flow Analysis Based on Hybrid Gradient-Based Optimizer with Moth–Flame Optimization Algorithm Considering Optimal Placement and Sizing of FACTS/Wind Power. Mathematics 2022, 10, 361. [Google Scholar] [CrossRef]
  9. Nocedal, J.; Wright, S. Numerical Optimization; Springer Science & Business Media: Cham, Switzerland, 2006. [Google Scholar]
  10. Aarts, E.; Korst, J. Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing; John Wiley & Sons, Inc.: New York, NY, USA, 1989. [Google Scholar]
  11. Hillier, F.S.; Price, C.C. International Series in Operations Research & Management Science; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  12. Laarhoven, P.J.V.; Aarts, E.H. Simulated Annealing: Theory and Applications; Springer-Science + Business Media, B. V.: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
  13. Kan, A.R.; Timmer, G. Stochastic methods for global optimization. Am. J. Math. Manag. Sci. 1984, 4, 7–40. [Google Scholar] [CrossRef]
  14. Ali, M. Some Modified Stochastic Global Optimization Algorithms with Applications. Ph.D. Thesis, Loughborough University, Loughborough, UK, 1994. [Google Scholar]
  15. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  16. Desale, S.; Rasool, A.; Andhale, S.; Rane, P. Heuristic and meta-heuristic algorithms and their relevance to the real world: A survey. Int. J. Comput. Eng. Res. Trends 2015, 2, 296–304. [Google Scholar]
  17. Chakraborti, S.; Sanyal, S. An Elitist Simulated Annealing Algorithm for Solving Multi Objective Optimization Problems in Internet of Things Design. Int. J. Adv. Netw. Appl. 2015, 7, 2784. [Google Scholar]
  18. Gonzales, G.V.; dos Santos, E.D.; Emmendorfer, L.R.; Isoldi, L.A.; Rocha, L.A.O.; Estrada, E.d.S.D. A Comparative Study of Simulated Annealing with different Cooling Schedules for Geometric Optimization of a Heat Transfer Problem According to Constructal Design. Sci. Plena 2015, 11. [Google Scholar] [CrossRef] [Green Version]
  19. Poorjafari, V.; Yue, W.L.; Holyoak, N. A Comparison between Genetic Algorithms and Simulated Annealing for Minimizing Transfer Waiting Time in Transit Systems. Int. J. Eng. Technol. 2016, 8, 216. [Google Scholar] [CrossRef] [Green Version]
  20. Armijo, L. Minimization of functions having lipschitz continuous first-partial derivatives. Pac. J. Math. 1966, 16, 187–192. [Google Scholar] [CrossRef] [Green Version]
  21. Bertsekas, D.P. Nonlinear Programming; Athena Scientific Belmont: Belmont, MA, USA, 1999. [Google Scholar]
  22. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1996; Volume 16. [Google Scholar]
  23. EL-Alem, M.; Aboutahoun, A.; Mahdi, S. Hybrid gradient simulated annealing algorithm for finding the global optimal of a nonlinear unconstrained optimization problem. Soft Comput. 2020, 25, 2325–2350. [Google Scholar] [CrossRef]
  24. Ali, M.; Golalikhani, M.; Zhuang, J. A computational study on different penalty approaches for solving constrained global optimization problems with the electromagnetism-like method. Optimization 2014, 63, 403–419. [Google Scholar] [CrossRef]
  25. Datta, R.; Deb, K. An adaptive normalization based constrained handling methodology with hybrid bi-objective and penalty function approach. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  26. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  27. Jordehi, A.R. A review on constraint handling strategies in particle swarm optimization. Neural Comput. Appl. 2015, 26, 1265–1275. [Google Scholar] [CrossRef]
  28. Joines, J.A.; Houck, C.R. On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GA’s. In Proceedings of the International Conference on Evolutionary Computation, Orlando, FL, USA, 27–29 June 1994; pp. 579–584. [Google Scholar]
  29. Dekkers, A.; Aarts, E. Global Optimization and simulated-annealing algorithm. Math. Program. 1991, 50, 367–393. [Google Scholar] [CrossRef] [Green Version]
  30. Ingber, L. Simulated Annealing: Practice versus Theory. Mathl. Comput. Model. 1993, 18, 29–57. [Google Scholar] [CrossRef] [Green Version]
  31. Vidal, R. Applied Simulated Annealing (Lecture Notes in Economics and Mathematical Systems). In Applied Simulated Annealing: Lecture Notes in Economics and Mathematical Systems; Springer: Cham, Switzerland, 1993. [Google Scholar]
  32. Kazarlis, S.; Petridis, V. Varying fitness functions in genetic algorithms: Studying the rate of increase of the dynamic penalty terms. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Leiden, The Netherlands, 5–9 September 2020; Springer: Berlin/Heidelberg, Germany, 1998; pp. 211–220. [Google Scholar]
  33. Michalewicz, Z.; Janikow, C.Z. Handling Constraints in Genetic Algorithms. In Proceedings of the 4th International Conference on Genetic Algorithms, San Diego, CA, USA, 13–16 July 1991; pp. 151–157. [Google Scholar]
  34. Michalewicz, Z. A Survey of Constraint Handling Techniques in Evolutionary Computation Methods. Evol. Program. 1995, 4, 135–155. [Google Scholar]
  35. Michalewicz, Z.; Schoenauer, M. Evolutionary algorithms for constrained parameter optimization problems. Evol. Comput. 1996, 4, 1–32. [Google Scholar] [CrossRef]
  36. Homaifar, A.; Qi, C.X.; Lai, S.H. Constrained optimization via genetic algorithms. Simulation 1994, 62, 242–253. [Google Scholar] [CrossRef]
  37. Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization method for constrained optimization problems. Intell.-Technol.-Theory Appl. New Trends Intell. Technol. 2002, 76, 214–220. [Google Scholar]
  38. Petalas, Y.G.; Parsopoulos, K.E.; Vrahatis, M.N. Memetic particle swarm optimization. Ann. Oper. Res. 2007, 156, 99–127. [Google Scholar] [CrossRef]
  39. El-Alem, M.; El-Sayed, S.; El-Sobky, B. Local convergence of the interior-point Newton method for general nonlinear programming. J. Optim. Theory Appl. 2004, 120, 487–502. [Google Scholar] [CrossRef]
  40. Ali, M.M.; Gabere, M. A simulated annealing driven multi-start algorithm for bound constrained global optimization. J. Comput. Appl. Math. 2010, 233, 2661–2674. [Google Scholar] [CrossRef] [Green Version]
  41. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  42. Yarmohamadi, H.; Mirhosseini, S.H. A New Dynamic Simulated Annealing Algorithm for Global Optimization. J. Math. Comput. Sci. 2015, 14, 16–23. [Google Scholar] [CrossRef]
  43. Corona, A.; Marchesi, M.; Martini, C.; Ridella, S. Minimizing multimodal functions of continuous variables with the simulated annealing algorithm. ACM Trans. Math. Softw. 1987, 13, 262–280. [Google Scholar] [CrossRef]
44. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  45. Liang, J.; Runarsson, T.P.; Mezura-Montes, E.; Clerc, M.; Suganthan, P.N.; Coello, C.C.; Deb, K. Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. J. Appl. Mech. 2006, 41, 8–31. [Google Scholar]
46. Ma, H.; Simon, D. Blended biogeography-based optimization for constrained optimization. Eng. Appl. Artif. Intell. 2011, 24, 517–525. [Google Scholar] [CrossRef]
  47. Brajevic, I. Crossover-based artificial bee colony algorithm for constrained optimization problems. Neural Comput. Appl. 2015, 26, 1587–1601. [Google Scholar] [CrossRef]
  48. Ghasemishabankareh, B.; Li, X.; Ozlen, M. Cooperative coevolutionary differential evolution with improved augmented Lagrangian to solve constrained optimisation problems. Inf. Sci. 2016, 369, 441–456. [Google Scholar] [CrossRef]
  49. Mohamed, A.W. A novel differential evolution algorithm for solving constrained engineering optimization problems. J. Intell. Manuf. 2018, 29, 659–692. [Google Scholar] [CrossRef]
  50. Xu, B.; Tao, L.; Chen, X.; Cheng, W. Adaptive differential evolution with multi-population-based mutation operators for constrained optimization. Soft Comput. 2019, 23, 3423–3447. [Google Scholar] [CrossRef]
  51. Sheskin, D.J. Handbook of Parametric and Nonparametric Statistical Procedures; CRC Press: Boca Raton, FL, USA, 2003. [Google Scholar]
  52. Long, W.; Liang, X.; Cai, S.; Jiao, J.; Zhang, W. A modified augmented Lagrangian with improved grey wolf optimization to constrained optimization problems. Neural Comput. Appl. 2016, 28, 421–438. [Google Scholar] [CrossRef]
53. Lobato, F.S.; Steffen, V., Jr. Fish swarm optimization algorithm applied to engineering system design. Lat. Am. J. Solids Struct. 2014, 11, 143–156. [Google Scholar] [CrossRef]
  54. Mazhoud, I.; Hadj-Hamou, K.; Bigeon, J.; Joyeux, P. Particle swarm optimization for solving engineering problems: A new constraint-handling mechanism. Eng. Appl. Artif. Intell. 2013, 26, 1263–1273. [Google Scholar] [CrossRef]
  55. Mohamed, A.W.; Sabry, H.Z. Constrained optimization based on modified differential evolution algorithm. Inf. Sci. 2012, 194, 171–208. [Google Scholar] [CrossRef]
  56. Rocha, A.M.A.; Fernandes, E.M.d.G. Self-Adaptive Penalties in the Electromagnetism-like Algorithm for Constrained Global Optimization Problems. In Proceedings of the 8th World Congress on Structural and Multidisciplinary Optimization, Lisbon, Portugal, 1–5 June 2009. [Google Scholar]
57. Yang, X.S.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  58. Zhang, C.; Li, X.; Gao, L.; Wu, Q. An improved electromagnetism-like mechanism algorithm for constrained optimization. Expert Syst. Appl. 2013, 40, 5621–5634. [Google Scholar] [CrossRef]
Figure 1. Penalty function r p(x) converges to zero vs. ‖f(x)‖₂ and ‖θ(r, x)‖₂ at the optimal solution of the constrained problem.
Figure 2. Convergence graph for G01 to G07.
Figure 3. Convergence graph for G08 to G13.
Figure 4. Convergence graph for G14 to G24.
Figure 5. The number of “wins-draws-losses” of GHMSA compared with other algorithms for G01 to G24 and Enp1 to Enp4.
Figure 6. Comparison between GHMSA and CAMDE regarding FES.
Figure 7. Design engineering problems (Engp1 and Engp2).
Figure 8. Convergence graph for engineering problems (Engp1 and Engp2).
Figure 9. Design engineering problems (Engp3 and Engp4).
Figure 10. Convergence graph for engineering problems (Engp3 and Engp4).
Figure 11. Convergence graph of standard deviation for Enp1 to Enp4.
Table 1. List of first and second types of test problems and their exact solutions.
| pr | n | f(x*) | Kind of Function | LI | NI | LE | NE | a |
|----|---|-------|------------------|----|----|----|----|---|
| G1 | 13 | −15 | quadratic | 9 | 0 | 0 | 0 | 6 |
| G3 | 10 | −1.0005001000 | polynomial | 0 | 0 | 0 | 1 | 1 |
| G4 | 5 | −30,665.5386717834 | quadratic | 0 | 6 | 0 | 0 | 2 |
| G5 | 4 | 5126.4967140071 | cubic | 2 | 0 | 0 | 3 | 3 |
| G6 | 2 | −6961.8138755802 | cubic | 0 | 2 | 0 | 0 | 2 |
| G7 | 10 | 24.3062090681 | quadratic | 3 | 5 | 0 | 0 | 6 |
| G8 | 2 | −0.0958250415 | nonlinear | 0 | 2 | 0 | 0 | 0 |
| G9 | 7 | 680.6300573745 | polynomial | 0 | 4 | 0 | 0 | 2 |
| G10 | 8 | 7049.2480205286 | linear | 3 | 3 | 0 | 0 | 6 |
| G11 | 2 | 0.7499000000 | quadratic | 0 | 0 | 0 | 1 | 1 |
| G12 | 3 | −1.0000000000 | quadratic | 0 | 1 | 0 | 0 | 0 |
| G13 | 5 | 0.0539415140 | nonlinear | 0 | 0 | 0 | 3 | 3 |
| G14 | 10 | −47.7648884595 | nonlinear | 0 | 0 | 3 | 0 | 3 |
| G15 | 3 | 961.7150222899 | quadratic | 0 | 0 | 1 | 1 | 2 |
| G16 | 5 | −1.9051552586 | nonlinear | 4 | 34 | 0 | 0 | 4 |
| G18 | 9 | −0.8660254038 | quadratic | 0 | 13 | 0 | 0 | 6 |
| G19 | 15 | 32.6555929502 | nonlinear | 0 | 5 | 0 | 0 | 0 |
| G24 | 10 | −5.5080132716 | polynomial | 0 | 0 | 0 | 1 | 1 |
Table 2. Error values achieved when FES = 5 × 10³, FES = 5 × 10⁴ and FES = 5 × 10⁵ for G1, G3, G4, G5, G6 and G7.
FES G1G3G4G5G6G7
Best 1.93 × 10 05 (0) 2.70 × 10 04 (0) 2.68 × 10 09 (0)0.02 (3) 8.00 × 10 08 (0)−1.26 (8)
Median 8.44 × 10 05 (0) 9.34 × 10 04 (0) 1.5 × 10 05 (0)0.41 (3) 3.7 × 10 06 (0)0.206 (8)
Worst 9.99 × 10 05 (0)1 (0) 4.25 × 10 05 (0)13.18 (3) 2.89 × 10 04 (0)28.16 (8)
5 × 10 3 c0, 0, 000, 0, 00, 3, 30, 0, 00, 8, 8
v ¯ 0 4.22 × 10 04 00.0200.016
Mean 8.15 × 10 05 1.77 × 10 01 1.9 × 10 05 1.97 1.7 × 10 05 6.644
s.d 1.70 × 10 05 0.380881 1.48 × 10 05 3.44 5.5 × 10 05 9.222
Best0 (0) 3.62 × 10 06 (0) 1.09 × 10 11 (0)0.0016 (3) 8 × 10 08 (0)−0.07 (4)
Median0 (0) 3.64 × 10 06 (0) 8.00 × 10 11 (0)0.02 (3) 1 × 10 06 (0) 4.55 × 10 03 (4)
Worst0 (0) 3.99 × 10 06 (0) 9.82 × 10 11 (0)1.32 (3) 4 × 10 05 (0)0.819 (4)
5 × 10 4 c0, 0, 00, 0, 00, 0, 00, 0, 30, 0, 00, 0, 0, 4
v ¯ 0 1.51 × 10 06 0 3.45 × 10 04 0 2 × 10 04
Mean0 3.71 × 10 06 6.90 × 10 11 0.166 5 × 10 06 −0.04
s.d0 1.24 × 10 07 2.86 × 10 11 0.3237 8 × 10 06 0.004
Best0 (0) 9.99 × 10 07 (0) 1.09 × 10 11 (0)0.0016 (0) 8 × 10 08 (0) 1 × 10 04 (0)
Median0 (0) 2.58 × 10 06 (0) 8.00 × 10 11 (0)0.0233 (0) 1 × 10 06 (0) 3 × 10 05 (0)
Worst0 (0) 8.50 × 10 06 (0) 9.82 × 10 11 (0)1.3182 (0) 4 × 10 05 (0) 4 × 10 05 (0)
5 × 10 5 c0, 0, 00, 0, 00, 0, 00, 0, 00, 0, 00, 0, 0
v ¯ 0 4.05 × 10 07 0 3.45 × 10 05 0 2.3 × 10 05
Mean0 2.37 × 10 06 6.90 × 10 11 0.166 5 × 10 06 4 × 10 05
s.d0 1.92 × 10 06 2.86 × 10 11 0.03237 8 × 10 06 4 × 10 05
Table 3. Error values achieved when FES = 5 × 10³, FES = 5 × 10⁴ and FES = 5 × 10⁵ for Problems G8, G9, G10, G11, G12 and G13.
FES G8G9G10G11G12G13
Best 1.05 × 10 10 (0)1.0467 (0)4.77 (0) 1.9 × 10 04 (1)0 (0) 1.25 × 10 04 (0)
Median 6.52 × 10 09 (0)1.494 (0)17.67 (0) 1.04 × 10 03 (1)0 (0) 3.83 × 10 03 (0)
Worst 4.13 × 10 08 (0)3.42 (0)300.41 (0) 5.66 × 10 03 (1)0 (0) 9.83 × 10 02 (0)
5 × 10 3 c0, 0, 00, 0, 00, 0, 00, 0, 10, 0, 00, 0, 0
v ¯ 0000.001730 7.27 × 10 05
Mean 9.24 × 10 09 1.86349.860.0016100.01171
Std 1.20 × 10 08 0.963175.170.0015100.02013
Best 1.05 × 10 10 (0)0.3489 (0)0.10 (0) 6.67 × 10 05 (0)0 (0) 8.96 × 10 06 (0)
Median 6.52 × 10 09 (0) 4.98 × 10 01 (0)0.35 (0) 9.60 × 10 05 (0)0 (0) 6.93 × 10 05 (0)
Worst 4.13 × 10 08 (0)1.14 (0)6.01 (0) 9.96 × 10 05 (0)0 (0) 2.94 × 10 04 (0)
5 × 10 4 c0, 0, 00, 0, 00, 0, 00, 0, 00, 0, 00, 0, 0
v ¯ 000 4.72 × 10 06 0 2.02 × 10 06
Mean 9.24 × 10 09 6.21 × 10 01 1.00 9.60 × 10 05 0 4.39 × 10 05
Std 1.20 × 10 08 0.3210341.50 1.70 × 10 06 0 6.53 × 10 05
Best 1.05 × 10 10 (0) 6.58 × 10 05 (0)0.02 (0) 6.67 × 10 05 (0)0 (0) 8.50 × 10 06 (0)
Median 6.52 × 10 09 (0) 8.53 × 10 05 (0)0.09 (0) 9.60 × 10 05 (0)0 (0) 5.70 × 10 05 (0)
Worst 4.13 × 10 08 (0) 9.88 × 10 05 (0)1.50 (0) 9.96 × 10 05 (0)0 (0) 9.90 × 10 05 (0)
5 × 10 5 c0, 0, 00, 0, 00, 0, 00, 0, 00, 0, 00, 0, 0
v ¯ 000 4.72 × 10 06 0 4.90 × 10 07
Mean 9.24 × 10 09 8.38 × 10 05 0.25 9.60 × 10 05 0 5.50 × 10 05
Std 1.20 × 10 08 1.52 × 10 05 0.38 1.70 × 10 06 0 2.70 × 10 05
Table 4. Error values achieved when FES = 5 × 10³, FES = 5 × 10⁴ and FES = 5 × 10⁵ for Problems G14, G15, G16, G18, G19 and G24.
FES G14G15G16G18G19G24
Best 4.25 × 10 01 (3) 5.83 × 10 02 (2) 4.31 × 10 04 (0)0.01 (0) 5.58 × 10 02 (3) 5.10 × 10 12 (0)
Median1.4 (3)0.18 (2)0.0064 (0)0.21 (0) 4.28 × 10 01 (3) 9.05 × 10 12 (0)
Worst1.53 (3)55.40 (2)0.0181 (0)0.79 (0)27.6 (3) 9.99 × 10 12 (0)
5 × 10 3 c0, 3, 30, 1, 20, 0, 00, 0, 00, 0, 30, 0, 0
v ¯ 2.62 × 10 02 1.16 × 10 03 00 9.80 × 10 03 0
Mean1.232.600.00760.274.07 8.57 × 10 12
s.d0.4010.790.00460.217.69 1.32 × 10 12
Best 2.85 × 10 07 (0) 1.12 × 10 07 (0) 7.70 × 10 11 (0) 4.51 × 10 06 (0) 5.58 × 10 03 (3) 5.10 × 10 12 (0)
Median 4.68 × 10 05 (0) 6.96 × 10 06 (0) 8.40 × 10 11 (0) 7.97 × 10 05 (0) 4.28 × 10 02 (3) 9.05 × 10 12 (0)
Worst 9.11 × 10 05 (0) 4.75 × 10 04 (0) 8.90 × 10 11 (0) 9.88 × 10 05 (0)2.76 (3) 9.99 × 10 12 (0)
5 × 10 4 c0, 0, 00, 0, 00, 0, 00, 0, 00, 0, 30, 0, 0
v ¯ 3.62 × 10 05 8.10 × 10 06 00 9.80 × 10 04 0
Mean 5.17 × 10 05 5.08 × 10 05 8.40 × 10 11 6.83 × 10 05 4.07 × 10 01 8.57 × 10 12
s.d 2.42 × 10 05 1.20 × 10 04 3.40 × 10 12 2.65 × 10 05 0.76855 1.32 × 10 12
Best 2.85 × 10 07 (0) 1.12 × 10 07 (0) 7.70 × 10 11 (0) 4.51 × 10 06 (0) 9.94 × 10 05 (0) 5.10 × 10 12 (0)
Median 4.68 × 10 05 (0) 6.96 × 10 06 (0) 8.40 × 10 11 (0) 7.97 × 10 05 (0) 3.21 × 10 05 (0) 9.05 × 10 12 (0)
Worst 9.11 × 10 05 (0) 4.75 × 10 04 (0) 8.90 × 10 11 (0) 9.88 × 10 05 (0) 8.70 × 10 05 (0) 9.99 × 10 12 (0)
5 × 10 5 c0, 0, 00, 0, 00, 0, 00, 0, 00, 0, 00, 0, 0
v ¯ 3.62 × 10 05 8.10 × 10 06 00 8.09 × 10 05 0
Mean 5.17 × 10 05 5.08 × 10 05 8.40 × 10 11 6.83 × 10 05 1.86 × 10 05 8.57 × 10 12
s.d 2.42 × 10 05 1.20 × 10 04 3.40 × 10 12 2.65 × 10 05 6.76 × 10 05 1.32 × 10 12
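Tables 2–4 follow the reporting conventions of the CEC 2006 special session [45]: each entry is the error f(x) − f(x*) of the best, median and worst run at the given FES checkpoint, the number in parentheses is the count of constraints still violated at that point, c lists how many constraints are violated by more than 1, 0.01 and 0.0001, and v̄ is the mean constraint violation. The sketch below illustrates this bookkeeping; the function name and the eps tolerance for equality constraints are illustrative assumptions, not code from the paper.

```python
import numpy as np

def error_and_violations(f_x, f_star, g_vals, h_vals, eps=1e-4):
    """g_vals: values of the inequality constraints g_i(x) <= 0;
    h_vals: values of the equality constraints h_j(x) = 0."""
    g_viol = np.maximum(0.0, np.asarray(g_vals, dtype=float))
    h_viol = np.maximum(0.0, np.abs(np.asarray(h_vals, dtype=float)) - eps)
    violations = np.concatenate([g_viol, h_viol])
    error = f_x - f_star                          # Best/Median/Worst entries
    n_violated = int(np.sum(violations > 0))      # number shown in parentheses
    c = tuple(int(np.sum(violations > t)) for t in (1.0, 0.01, 0.0001))
    v_bar = float(violations.mean()) if violations.size else 0.0
    return error, n_violated, c, v_bar
```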
Table 5. Number of FES to achieve the fixed accuracy level ((f(x) − f(x*)) ≤ 0.0001), success rate, feasible rate and success performance.
| pr | Best | Median | Worst | Mean | s.d | f.r (%) | s.r (%) | s.p |
|----|------|--------|-------|------|-----|---------|---------|-----|
| G1 | 1964 | 2360 | 2748 | 2386.68 | 172.3774278 | 100 | 100 | 2386.68 |
| G3 | 7681 | 11,558 | 13,545 | 11,566.82353 | 1167.105632 | 100 | 100 | 11,566.82353 |
| G4 | 1906 | 4417 | 4924 | 4295.6 | 563.9451037 | 100 | 100 | 4295.6 |
| G5 | - | - | - | - | - | 0 | 0 | - |
| G6 | 3628 | 4455 | 5409 | 4388.851852 | 390.8762903 | 100 | 100 | 4388.851852 |
| G7 | 34,773 | 210,502 | 500,559 | 259,738.33 | 224,164.15 | 100 | 100 | 259,738.33 |
| G8 | 794 | 1108 | 1350 | 1109.9615 | 133.37526 | 100 | 100 | 1109.9615 |
| G9 | 417,565 | 4.33 × 10⁵ | 495,232 | 444,552.9 | 31,764.59 | 100 | 100 | 444,552.9 |
| G10 | - | - | - | - | - | 0 | 0 | - |
| G11 | 6248 | 8146 | 9877 | 8233.92 | 917.8058 | 100 | 100 | 8233.92 |
| G12 | 117 | 230 | 339 | 226.6 | 57.190209 | 100 | 100 | 226.6 |
| G13 | 12,800 | 37,261 | 80,814 | 42,242.04 | 17,190.94051 | 100 | 100 | 42,242.04 |
| G14 | 28,366 | 54,687 | 71,293 | 52,486.30769 | 16,047.27821 | 100 | 100 | 52,486.30769 |
| G15 | 9258 | 25,435 | 90,720 | 30,647.44 | 19,355.55703 | 100 | 100 | 30,647.44 |
| G16 | 5758 | 9199 | 11,398 | 8970.76 | 1060.014463 | 100 | 100 | 8970.76 |
| G18 | 10,300 | 36,198 | 85,882 | 42,434.56 | 20,906.48809 | 100 | 100 | 42,434.56 |
| G19 | 73,800 | 193,000 | 499,000 | 247,000 | 187,852.1 | 100 | 100 | 247,000 |
| G24 | 537 | 755.5 | 999 | 744.846154 | 103.587456 | 100 | 100 | 744.846154 |
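Table 5's feasible rate (f.r), success rate (s.r) and success performance (s.p) are the run-level aggregates defined in [45]: a run is feasible if it reaches the feasible region within the FES budget, successful if it additionally reaches the fixed accuracy level, and s.p scales the mean FES of successful runs by the fraction of runs that succeeded. A minimal sketch of these aggregates follows; the dictionary keys and the function name are illustrative assumptions rather than the paper's code.

```python
import statistics

def table5_row(runs):
    """runs: one dict per run with keys 'feasible' (bool), 'success' (bool,
    f(x) - f(x*) <= 1e-4 at a feasible point) and 'fes' (evaluations used
    when the accuracy level was reached)."""
    total = len(runs)
    f_rate = 100.0 * sum(r["feasible"] for r in runs) / total
    successful = [r["fes"] for r in runs if r["success"]]
    s_rate = 100.0 * len(successful) / total
    if not successful:
        return f_rate, s_rate, None, None        # the '-' rows for G5 and G10
    mean_fes = statistics.mean(successful)
    s_perf = mean_fes * total / len(successful)  # equals mean_fes when s.r = 100%
    spread = (min(successful), statistics.median(successful),
              max(successful), mean_fes, statistics.pstdev(successful))
    return f_rate, s_rate, s_perf, spread
```

With a 100% success rate, as in most rows of Table 5, s.p reduces to the mean FES, which is why the Mean and s.p columns coincide there.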
Table 6. How the null hypothesis is rejected (or accepted) and the decision is made.
| p-Value | H₀ | t | Decision |
|---------|----|---|----------|
| < α | reject | <0 | 1 |
| < α | reject | >0 | −1 |
| > α | accept | - | 0 |
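Table 6 encodes each pairwise comparison as a three-valued decision: reject H₀ when the p-value falls below the significance level α and let the sign of the test statistic t pick the winner (for minimisation, t < 0 means the GHMSA sample is smaller, i.e., a win), otherwise declare a draw. The sketch below is a hedged illustration assuming a two-sample t-test over the recorded objective values; the specific test, the value of α, and the function name are assumptions here, with the underlying statistical procedures discussed in [51].

```python
from collections import Counter
from scipy import stats

def decide(ghmsa_vals, other_vals, alpha=0.05):
    """Return 1 (win), -1 (loss) or 0 (draw) following Table 6."""
    t, p = stats.ttest_ind(ghmsa_vals, other_vals)
    if p < alpha:
        return 1 if t < 0 else -1   # minimisation: smaller objective wins
    return 0                        # H0 accepted: no significant difference

# Tallying these decisions over G01-G24 and Enp1-Enp4 gives the
# wins-draws-losses counts plotted in Figure 5, e.g.:
# tally = Counter(decide(g, o) for g, o in problem_pairs)  # hypothetical pairs
# wins, draws, losses = tally[1], tally[0], tally[-1]
```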
Table 7. Comparison of results for test problems G01 to G08.
prs.tCB-ABCCCiALFNDECAMDEGHMSA
G1b.s−15−15−15−15−15
mean−15−15−15−15−15
s.d 5.03 × 10 15 2.39 × 10 08 000
decision00000
FES135,18030,819240,000240,0005773.84
G3b.s−1.0005−1.000501−1.0005001−1.000500−1.000009
mean−1.0005−1.000501−1.0005001−1.000500−1.000002
s.d 3.64 × 10 07 1.69 × 10 08 0 6.80 × 10 16 1.92 × 10 06
decision
FES90,09087,860240,000240,00062,546.38462
G4b.s−30,665.54−30,665.539−30,665.539−30,665.53867−30,665.53867
mean−30,665.54−30,665.539−30,665.539−30,665.53867−30,665.53867
s.d 8.72 × 10 11 9.80 × 10 06 0 3.71 × 10 12 3.49 × 10 07
decision0000
FES45,04526,268240,000240,0009671.32
G5b.s5126.505126.49675126.496715126.4967105126.49833
mean5126.505126.4975126.496715126.4967105126.662712
s.d 1.07 × 10 10 9.17 × 10 08 0 2.78 × 10 12 0.03442
decision−1−1−1−1
FES135,180156,248240,000240,00033,917.7702
G6b.s−6961.81−6961.814−6961.813875−6961.81388−6961.813826
mean−6961.81−6961.814−6961.813875−6961.81388−6961.813811
s.d 1.82 × 10 12 5.19 × 10 11 00 9.20 × 10 06
decision1011
FES45,04517,573240,000240,0008921.518519
G7b.s24.306224.306224.30620924.3062124.30610911
mean24.306224.306224.30620924.3062124.30617
s.d 4.16 × 10 07 6.82 × 10 07 1.35 × 10 14 8.55 × 10 15 4.34 × 10 05
decision1111
FES135,1808745240,000240,000259,738.33
G8b.s−0.095825−0.095825−0.095825−0.09583−0.0958141
mean−0.095825−0.095825−0.095825−0.09583−0.0957819
s.d 2.87 × 10 17 1.07 × 10 15 0 1.42 × 10 17 2.58 × 10 05
decision111−1
FES80004812240,000240,0002394.577
The mark ‡ means that G03 is not used to compare the results of the GHMSA with those of the four algorithms because the equality-constraint tolerance in [45] is ε = 0.0001 (i.e., |h(x*)| ≤ 0.0001), whereas for the GHMSA the violation is 4.05 × 10⁻⁷; see Table 1, Table 2 and Table 8.
Table 8. Statistical results of “GHMSA” Algorithm for first set of test problems and four mechanical engineering problems.
| pr | Best | Median | Worst | Mean | s.d | FES |
|----|------|--------|-------|------|-----|-----|
| G1 | −15 | −15 | −15 | −15 | 0 | 5773.84 |
| G3 | −1.000009 | −1.000003 | −1.000001 | −1.000002 | 1.91679 × 10⁻⁶ | 62,546.38462 |
| G4 | −30,665.538672 | −30,665.538672 | −30,665.53867 | −30,665.538672 | 3.49 × 10⁻⁷ | 9671.32 |
| G5 | 5126.49833 | 5126.520053 | 5127.81491 | 5126.662712 | 0.03442 | 33,917.7702 |
| G6 | −6961.813826 | −6961.813811 | −6961.81377 | −6961.813811 | 9.19642 × 10⁻⁶ | 8921.518519 |
| G7 | 24.30610911 | 24.30618042 | 24.30625377 | 24.30617 | 4.34 × 10⁻⁵ | 259,738.33 |
| G8 | −0.0958141 | −0.095824999 | −0.09582499 | −0.0957819 | 2.58 × 10⁻⁵ | 2394.577 |
| G9 | 680.6301232 | 680.6301426 | 680.6301562 | 680.6301412 | 1.52 × 10⁻⁵ | 444,552.9 |
| G10 | 7049.271862 | 7049.689888 | 7049.460323 | 7049.336552 | 2.69 × 10⁻² | 290,146 |
| G11 | 0.74999176 | 0.749996 | 0.75 | 0.7499961 | 0.0000001 | 8233.92 |
| G12 | −1 | −1 | −1 | −1 | 0 | 1515 |
| G13 | 0.053950002 | 0.053998358 | 0.054040318 | 0.053996327 | 2.73 × 10⁻⁵ | 53,754 |
| G14 | −47.76497953 | −47.76493525 | −47.76488874 | −47.76494056 | 2.51 × 10⁻⁵ | 52,486.30769 |
| G15 | 961.71502 | 961.71502 | 961.715107 | 961.7149837 | 1.31 × 10⁻⁴ | 38,609.24 |
| G16 | −1.905155259 | −1.905155259 | −1.905155259 | −1.905155259 | 2.06 × 10⁻¹⁰ | 36,346.76 |
| G18 | −0.866025404 | −0.865945746 | −0.865926597 | −0.865958115 | 2.85 × 10⁻⁵ | 42,434.56 |
| G19 | 32.65549 | 32.65556 | 32.65568 | 32.6555744 | 6.76 × 10⁻⁵ | 247,295.25 |
| G24 | −5.508013272 | −5.508013272 | −5.508013272 | −5.508013272 | 1.09 × 10⁻¹⁰ | 2460.038 |
| Enp1 | 5885.332773 | 5885.332773 | 5885.332773 | 5885.332773 | 2.2 × 10⁻¹² | 32,129 |
| Enp2 | 0.012665233 | 0.012665268 | 0.012665243 | 0.012665334 | 1.54 × 10⁻⁹ | 9970 |
| Enp3 | 1.724852306 | 1.724852306 | 1.724852306 | 1.724852306 | 1.33 × 10⁻¹⁶ | 24,270 |
| Enp4 | 2994.471066 | 2994.471066 | 2994.471066 | 2994.471066 | 4.27 × 10⁻¹⁵ | 16,764 |
Table 9. Comparison of results for test problems G09 to G15.
prs.tCB-ABCCCiALFNDECAMDEGHMSA
G9b.s680.63680.63680.630057680.63006680.6301232
mean680.63680.63680.630057680.63006680.6301412
s.d 2.77 × 10 09 5.43 × 10 08 0 2.32 × 10 13 1.52 × 10 05
decision00−1−1
FES45,04512,801240,000240,000444,552.9
G10b.s7049.257049.2487049.248027049.248027049.271862
mean7049.257049.2487049.248027049.248027049.336552
s.d 3.98 × 10 05 6.04 × 10 07 3.41 × 10 09 4.39 × 10 12 2.69 × 10 02
decision−1−1−1−1
FES135,1802858240,000240,000240,146
G11b.s0.74990.7498960.7499990.7499000.74999176
mean0.74990.7498980.7499990.7499000.7499961
s.d 1.29 × 10 10 2.05 × 10 16 0 1.13 × 10 16 0.0000001
decision−1−11−1
FES90,090168,448240,000240,0008233.92
G12b.s−1−1−1−1−1
mean−1−1−1−1−1
s.d0 7.76 × 10 11 000
decision0000
FES13,50017,892240,000240,0001515
G13b.s0.0539420.0539420.05394150.053940.053950002
mean0.066770.0539430.05394150.053940.053996327
s.d 6.91 × 10 02 4.03 × 10 06 0 2.32 × 10 17 2.73 × 10 05
decision1−1−1−1
FES198,27019,883240,000240,00053,754
G14b.s−47.7649−47.764900−47.7648885−47.764890−47.76497953
mean−47.7649−47.764900−47.7648885−47.764890−47.76494056
s.d 1.02 × 10 05 4.04 × 10 08 5.14 × 10 15 2.21 × 10 14 2.51 × 10 05
decision1111
FES239,715152,697240,000240,00052,486.30769
G15b.s961.715961.715961.7150223961.715020961.71502
mean961.715961.715961.7150223961.715020961.7149837
s.d 2.81 × 10 11 1.86 × 10 08 0 5.80 × 10 13 1.31 × 10 04
decision0011
FES135,18077,910240,000240,00038,609.24
Table 10. Comparison of results for test problems G16, G18, G19, G24 and Enp1–Enp4.
prs.tCB-ABCCCiALFNDECAMDEGHMSA
G16b.s−1.905−1.905155−1.90515525−1.905160−1.905155259
mean−1.905−1.905155−1.90515525−1.905160−1.905155259
s.d 7.90 × 10 11 9.77 × 10 09 0 4.53 × 10 16 2.06 × 10 10
decision1110
FES45,045196,196240,000240,00036,346.76
G18b.s−0.866025−0.866026−0.8660254−0.86603−0.866025404
mean−0.866025−0.866026−0.8660254−0.86603−0.865958115
s.d 1.72 × 10 08 3.58 × 10 07 0 4.53 × 10 17 2.85 × 10 05
decision−1−1−1−1
FES135,1808742240,000240,00042,434.56
G19b.s32.655632.65561032.6555937732.65559032.65549
mean32.655632.66077032.6556260332.65559032.6555744
s.d 1.88 × 10 05 2.35 × 10 04 3.73 × 10 05 7.11 × 10 15 6.76 × 10 05
decision0110
FES198,270240,000240,000240,000247,295.25
G24b.s−5.508−5.508013−5.50801327−5.508010−5.508013272
mean−5.508−5.508013−5.50801327−5.508010−5.508013272
s.d 7.15 × 10 15 1.30 × 10 08 0 9.06 × 10 16 1.09 × 10 10
decision1111
FES27,0006450240,000240,0002460.038
Enp1b.s6059.716059.7143356059.7143356059.7143355885.332773
mean6126.626059.7143356059.7143356059.7143355885.332773
s.d 1.14 × 10 02 1.01 × 10 11 4.56 × 10 07 1.22 × 10 06 2.2 × 10 12
Decision1111
FES15,00012,00020,00010,00032,129
Enp2b.s0.0126650.0126652330.0126652320.0126652330.01266523
mean0.0126710.0126652510.0126688990.0126669810.01266533
s.d 1.42 × 10 05 9.87 × 10 08 5.38 × 10 06 3.65 × 10 06 1.54 × 10 09
Decision1011
FES15,000500024,00010,0009970
Enp3b.s1.7248521.7248521.7248523091.7248521.724852
mean1.7248521.7248521.7248523091.7248521.724852
s.d0 5.11 × 10 07 3.73 × 10 12 2.32 × 10 13 1.33 × 10 16
Decision0010
FES15,00010,000800010,00024,270
Enp4b.s2994.4710662994.4710662994.4710662994.4710662994.471065
mean2994.4710662994.47106602994.471066102994.4710662994.471065
s.d 2.48 × 10 07 2.31 × 10 12 4.17 × 10 12 2.20 × 10 12 4.27 × 10 15
Decision0010
FES15,00010,00018,00010,00016,764