1. Introduction
Global optimization is a rich and widely used subject in mathematics. With the development of science and information technology, it has been widely applied to economic models, finance, image processing, machine design and so on. Therefore, the theories and methods of global optimization need to be studied deeply. Through the efforts of scholars in this field, various methods have been developed for global optimization. However, finding the global optimal solution is usually not easy due to two properties of global optimization problems: (1) there usually exist many local optimal solutions, and (2) optimization algorithms are easily trapped in local optimal solutions and unable to escape. Therefore, one key problem is how to help the optimization method escape from local optimal solutions. The filled function method is designed specifically to solve this problem. Below, we introduce some basic information about the filled function method.
The filled function method was first proposed by Ge [
1], who constructed an auxiliary function, called a filled function, to help the optimization algorithm escape from local optimal solutions. In the following, we introduce the original definition of the filled function proposed by Ge [
1] and its related concepts. In this paper, we consider the following optimization problem:
where
n is the dimension of the objective function
$F\left(x\right)$, which is continuous and differentiable.
$F\left(x\right)$ has a finite number of local optimal solutions
${x}_{1}^{*},{x}_{2}^{*},\dots ,{x}_{m}^{*}$. Suppose
${x}_{k}^{*}$ is the local optimal solution found by the optimization algorithm in the
$k$th iteration; the definition of the basin
${B}_{k}^{*}$ is as follows.
Definition 1. The basin ${B}_{k}^{*}$ of the objective function $F\left(x\right)$ at an isolated minimum (local optimal solution) ${x}_{k}^{*}$ refers to the connected domain that contains ${x}_{k}^{*}$, and in this domain, the steepest descent trajectory of $F\left(x\right)$ will converge to ${x}_{k}^{*}$ starting from any initial point, but outside the basin, the steepest descent trajectory of $F\left(x\right)$ does not converge to ${x}_{k}^{*}$.
A basin
${B}_{1}^{*}$ at
${x}_{1}^{*}$ is lower (or higher) than basin
${B}_{2}^{*}$ at
${x}_{2}^{*}$ iff
A basin is actually an area that contains one local optimal solution. Within this area, the gradient descent optimization algorithm will converge to the corresponding local optimal solution no matter what the initial point is. One basin is lower than the other basin if its corresponding local optimal solution is smaller (better for minimization problems).
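To make the basin notion concrete, the short sketch below (with an illustrative objective function of our own choosing, not one from the paper) follows the steepest-descent trajectory from two starting points that lie in different basins; each run converges to the local minimum of its own basin.

```python
import numpy as np

def F(x):
    # Illustrative multimodal objective with local minima near x = +/-1
    return x ** 2 + 5.0 * np.cos(3.0 * x)

def dF(x):
    return 2.0 * x - 15.0 * np.sin(3.0 * x)

def steepest_descent(x0, lr=1e-3, iters=20000):
    """Follow the steepest-descent trajectory of F from x0."""
    x = x0
    for _ in range(iters):
        x -= lr * dF(x)
    return x

# Starting points on opposite sides of the local maximum at x = 0 lie in
# different basins, so they converge to different local minima.
m_left = steepest_descent(-2.0)
m_right = steepest_descent(2.0)
```

Within each basin the result is independent of the particular starting point, which is exactly the property Definition 1 captures.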
Definition 2. A function $P(x,{x}_{1}^{*})$ is called a filled function of $F\left(x\right)$ at a local minimum ${x}_{1}^{*}$ if it satisfies the following properties:
 1.
${x}_{1}^{*}$ is a strictly local maximum of $P(x,{x}_{1}^{*})$, and the whole basin ${B}_{1}^{*}$ of $F\left(x\right)$ becomes a part of a hill of $P(x,{x}_{1}^{*})$.
 2.
$P(x,{x}_{1}^{*})$ has no minima or stable points in any basin of $F\left(x\right)$ higher than ${B}_{1}^{*}$.
 3.
If $F\left(x\right)$ has a lower basin than ${B}_{1}^{*}$, then there is a point ${x}^{\prime}$ in such a basin that minimizes $P(x,{x}_{1}^{*})$ on the line through x and ${x}_{1}^{*}$.
From the definition of the filled function, we can see that the three properties together ensure that the optimization algorithm escapes from one local optimal solution to a better one. For example, suppose the optimization algorithm is trapped in a local optimal solution ${x}_{1}^{*}$ and cannot escape. We then construct a filled function to help it escape from ${x}_{1}^{*}$. The first property of the filled function makes the local optimal solution ${x}_{1}^{*}$ a local worst solution of the filled function. In this case, when the filled function $P(x,{x}_{1}^{*})$ is optimized, the search will surely leave this local worst solution; that is, the algorithm escapes from the local optimal solution. The second property of the filled function ensures that, when optimizing $P(x,{x}_{1}^{*})$, the search will not end up at a solution worse than ${x}_{1}^{*}$, because there are no minima or stable points in any basin higher than ${B}_{1}^{*}$. Instead, the optimization of $P(x,{x}_{1}^{*})$ will enter a basin better than ${B}_{1}^{*}$, if such a basin exists. The overall optimization procedure is as follows. First, the algorithm starts from an initial point to optimize the objective function $F\left(x\right)$ and finds a local optimal solution (e.g., ${x}_{1}^{*}$). Secondly, a filled function at this point is constructed (e.g., $P(x,{x}_{1}^{*})$) and optimized starting from ${x}_{1}^{*}$. After the optimization of $P(x,{x}_{1}^{*})$, the algorithm enters a better region (basin), as ensured by the properties of the filled function. Thirdly, starting from the new, better basin, the algorithm continues to optimize $F\left(x\right)$ to find a local optimal solution better than ${x}_{1}^{*}$. By repeating the above steps, the algorithm continuously moves from one local optimal solution to better ones until the global optimal solution is found.
2. Related Work
With optimization algorithms extensively used in various fields, more and more effort is devoted to optimization theory. As a deterministic algorithm for global optimization, the filled function method has drawn a lot of attention. The main idea of the filled function method is to locate a current local optimal solution by any local search algorithm and then construct an auxiliary function, called the filled function, at that local optimal solution. The filled function should have three good properties in order to help the search escape from the current local optimal solution and enter regions that contain better solutions.
The first definition of a filled function was proposed by Ge in ref. [
1], in which he constructed a filled function with two parameters:
Experiments show this filled function is effective. However, it has two disadvantages. First, it has two adjusting parameters, r and $\rho $, whose values need to be tuned to ensure that the global optimal solution is not missed during the optimization procedure. Secondly, because of the exponential term, the function value changes very little once $x$ is far from the current minimizer, and thus the filled function may exhibit fake stationary points.
In order to improve the efficiency of the filled function method, much effort has been made and new contributions achieved. In ref. [
2], a filled function with only one parameter and no exponential term was proposed:
However, this filled function
$H\left(x\right)$ is undefined where
$f\left(x\right)-f\left({x}_{1}\right)\le -1$. To overcome the disadvantages of discontinuity and nondifferentiability, Liu proposed a class of continuously differentiable filled functions [
3]:
where
u and
w are two real functions that are twice continuously differentiable in their domains, satisfying the following conditions:
However, this class of filled functions is not easy to construct and still contains two parameters to adjust. Afterward, new continuously differentiable filled functions were proposed [
4,
5,
6], yet these filled functions all have one or two parameters. In order to alleviate the parameter-adjusting problem of the filled function, the authors of [
7] proposed a filled function with two parameters and gave a reasonable and effective way to choose the parameters. In ref. [
8], the authors proposed a filled function without any parameter. This filled function contains no exponential term and is simple in form; however, it is not continuously differentiable, which may produce extra local optimal solutions. To overcome this problem, the authors of [
9] proposed a continuously differentiable filled function without any adjusting parameter:
Afterward, researchers have proposed more continuously differentiable filled functions without parameters, such as in refs. [
10,
11,
12,
13,
14]. The authors of [
12] proposed a new continuously differentiable filled function without any parameter or exponential term:
These parameterless and continuously differentiable filled functions have several advantages. First, more efficient local search algorithms can be applied. Secondly, they are less likely to produce extra fake local optimal solutions. Thirdly, no parameter adjustment is needed. Thus, this kind of filled function can improve the performance of filled function methods.
To better enhance the efficiency of filled function methods, a two-stage method with a stretch function was proposed [
15]. After a current local minimum is located in the stage of optimizing the objective function, a stretch function is used to raise this local minimum. Then, a filled function is constructed and optimized in the second stage. However, this filled function is not continuous, which means classical efficient local search methods cannot be applied to it.
The authors of [
16] proposed a new algorithm based on the filled function. First, a multidimensional objective function is transformed into one-dimensional functions; then, for each direction, a filled function is constructed to optimize the one-dimensional function. To overcome the potential failure in which only a local minimum is found, the authors of [
17] proposed a new filled function method. By combining an adaptive strategy for determining the initial points with a narrow-valley widening strategy, the ability to escape local minima and locate the global minimum is further enhanced. In ref. [
18], the authors proposed a new filled function using a smoothing method to eliminate local optimal solutions. Further, an adaptive method is used to determine the step length and shallow valleys.
The filled function method is now not only used for unconstrained optimization problems but has also been extended to constrained optimization problems with inequalities, bilevel programming, nonlinear integer programming and nonsmooth constrained problems. The authors of [
19] proposed a continuous differentiable filled function with one parameter to solve constrained optimization problems. The authors of [
20] proposed a single-parameter filled function and applied it to a supply chain problem, which is a nonlinear programming problem with equality and inequality constraints. For bilevel programming with inequality and equality constraints, the authors of [
21] first transformed the bilevel programming problem into a single-layer constrained optimization problem and then constructed the filled function by combining it with penalty functions.
The authors of [
22] first transformed the original problem into an equivalent constrained optimization problem and then constructed a filled function to solve it.
For the following type of constrained global optimization
P:
where
${Z}^{n}$ is the set of integer points in
${R}^{n}$ and
$S=\{x\in {Z}^{n}\mid {g}_{i}\left(x\right)\le 0,i=1,2,\dots ,m\}$ is bounded. The authors of [
23] proposed a method to transform this constrained problem into a box-constrained integer programming problem and then constructed a filled function to solve it. In ref. [
24], the authors proposed a parameterless filled function to solve nonlinear equations with box constraints.
The filled function method has also been extended to nonsmooth optimization problems. The authors of [
25,
26] proposed a one-parameter filled function based on a new definition of the filled function for a nonsmooth constrained programming problem.
Based on the idea of the filled function, in this paper, a new parameterless filled function is proposed that is continuous and differentiable. The properties of the new filled function are proven in
Section 3. Based on it, a filled function algorithm is also proposed to handle unconstrained optimization problems. Numerical experiments are carried out, and comparisons are made in
Section 4.
3. A New Parameterless Filled Function and a Filled Function Algorithm
In this section, a new filled function is proposed with the advantages of being parameterless, continuous and differentiable. The three properties of the proposed filled function are described and proven. Based on it, a new filled function method is designed to solve unconstrained optimization problems.
3.1. A New Parameterless Filled Function and Its Properties
The first definition of the filled function was given in [
1]. However, the third property of that definition is not so clear; e.g., it is not clear where the point
${x}^{\prime}$ is and where the line is through
x and
${x}_{1}^{*}$. To make the definition clearer and stricter, some scholars gave several revised definitions of the filled function [
27,
28]. In this paper, we use the revised definition from ref. [
9], since it is clearer and stricter through its use of the gradient. The revised definition is as follows.
Definition 3. A function $P(x,{x}_{k}^{*})$ is called a filled function of $F\left(x\right)$ at a local minimum ${x}_{k}^{*}$ if it satisfies the following properties:
 1.
${x}_{k}^{*}$ is a strictly local maximum of $P(x,{x}_{k}^{*})$, and the whole basin ${B}_{k}^{*}$ of $F\left(x\right)$ becomes a part of a hill of $P(x,{x}_{k}^{*})$.
 2.
For any $x\in {\Omega}_{1}$, we have $\nabla P(x,{x}_{k}^{*})\ne 0$, where ${\Omega}_{1}=\{x\in \Omega \mid F\left(x\right)\ge F\left({x}_{k}^{*}\right),x\ne {x}_{k}^{*}\}$.
 3.
If ${\Omega}_{2}=\{x\in \Omega \mid F\left(x\right)<F\left({x}_{k}^{*}\right)\}$ is not empty, then there exists ${x}_{k}^{{}^{\prime}}\in {\Omega}_{2}$, such that ${x}_{k}^{{}^{\prime}}$ is a local minimum of $P(x,{x}_{k}^{*})$.
Now we give a brief explanation of the revised definition of the filled function. Property 1 is the same as the original definition, which turns the local minimum ${x}_{k}^{*}$ of the objective function into a local maximum of the filled function. In this case, when optimizing the filled function, it is easy to escape ${x}_{k}^{*}$ since it is a local maximum (worst solution for the minimization problem). Property 2 makes sure the optimization procedure will not end up with solutions worse than the current local minimum ${x}_{k}^{*}$ because there are no stationary points there. Property 3 means that it is easy for the optimization procedure to end in a region that contains a better solution than the current local minimum because in that region, there exists a local minimum. Therefore, the three properties together will drive the optimization procedure to escape from the current local minimum and enter a better region that contains a better solution.
Based on Definition 3, we design a new parameterless filled function that is also continuous and differentiable:
The new filled function mainly has two advantages. First, it has no parameter to adjust, which makes it easier to apply to different optimization problems. Secondly, the new filled function is continuous and differentiable. Note that continuity and differentiability are two excellent properties for a filled function. Compared to filled functions that are not continuous or differentiable, such a function is easier to optimize, since more algorithms, especially the efficient ones designed for continuously differentiable functions, can be used, and extra local optimal solutions are less likely to be generated during the optimization. Now, we first prove that the new filled function is continuously differentiable and then prove that it fulfills the three properties of the definition of the filled function.
Since the only point that may cause the filled function $P(x,{x}_{k}^{*})$ to not be continuously differentiable is $t=0$ in $g\left(t\right)$, it suffices to show that $g\left(t\right)$ is continuously differentiable at $t=0$.
Since $\underset{t\to {0}^{+}}{lim}g\left(t\right)=\underset{t\to {0}^{-}}{lim}g\left(t\right)=1$, the new filled function $P(x,{x}_{k}^{*})$ is continuous.
Thus, ${g}_{+}^{\prime}\left(0\right)={g}_{-}^{\prime}\left(0\right)=0$, so the new filled function $P(x,{x}_{k}^{*})$ is differentiable. Now we prove that $P(x,{x}_{k}^{*})$ satisfies the three properties of the filled function.
Theorem 1. Suppose ${x}_{k}^{*}$ is a local minimum of the objective function $F\left(x\right)$ and $P(x,{x}_{k}^{*})$ is the filled function constructed at ${x}_{k}^{*}$, then ${x}_{k}^{*}$ is a strictly local maximum of $P(x,{x}_{k}^{*})$.
Proof. Suppose
${B}_{k}^{*}$ is the basin containing
${x}_{k}^{*}$ (please refer to Definition 1 about basin), since
${x}_{k}^{*}$ is a local minimum of
$F\left(x\right)$, so
$\forall x\in {B}_{k}^{*},x\ne {x}_{k}^{*}$, we have
$F\left(x\right)>F\left({x}_{k}^{*}\right)$. Thus,
$F\left(x\right)-F\left({x}_{k}^{*}\right)>0$; in this case,
$g(F\left(x\right)-F\left({x}_{k}^{*}\right))=1$. According to the construction of the filled function
$P(x,{x}_{k}^{*})$, we get
Thus, $P(x,{x}_{k}^{*})<P({x}_{k}^{*},{x}_{k}^{*})$, which means that ${x}_{k}^{*}$ is a strict local maximum of $P(x,{x}_{k}^{*})$. □
Theorem 2. For any $x\in {\Omega}_{1}$, we have $\nabla P(x,{x}_{k}^{*})\ne 0$, where ${\Omega}_{1}=\{x\in \Omega \mid F\left(x\right)\ge F\left({x}_{k}^{*}\right)$, $x\ne {x}_{k}^{*}\}$.
Proof. Since
${\Omega}_{1}=\{x\in \Omega \mid F\left(x\right)\ge F\left({x}_{k}^{*}\right),x\ne {x}_{k}^{*}\}$, for any
$x\in {\Omega}_{1}$, we have
$F\left(x\right)\ge F\left({x}_{k}^{*}\right)$, thus
and
This proves Theorem 2. □
Theorem 3. If ${\Omega}_{2}=\{x\in \Omega \mid F\left(x\right)<F\left({x}_{k}^{*}\right)\}$ is not empty, then there exists ${x}_{k}^{{}^{\prime}}\in {\Omega}_{2}$, such that ${x}_{k}^{{}^{\prime}}$ is a local minimum of $P(x,{x}_{k}^{*})$.
Proof. Since ${\Omega}_{2}$ is not empty, $F\left(x\right)$ must have a minimum in ${\Omega}_{2}$.
Since $P(x,{x}_{k}^{*})$ is continuous and differentiable on ${R}^{n}$, it must attain a minimum, say ${x}_{k}^{{}^{\prime}}$, in ${\Omega}_{2}$. Because $P(x,{x}_{k}^{*})$ is differentiable at ${x}_{k}^{{}^{\prime}}$, this minimum ${x}_{k}^{{}^{\prime}}$ must be a stationary point; that is, $\nabla P({x}_{k}^{{}^{\prime}},{x}_{k}^{*})=0$.
Since ${\Omega}_{2}$ is not empty, there exists a point $z\in {\Omega}_{2}$ such that $P(z,{x}_{k}^{*})<0$. Thus, $P({x}_{k}^{{}^{\prime}},{x}_{k}^{*})\le P(z,{x}_{k}^{*})<0$ and ${x}_{k}^{{}^{\prime}}\ne {x}_{k}^{*}$. Therefore, ${x}_{k}^{{}^{\prime}}\notin {\Omega}_{1}$ (according to the definition of ${\Omega}_{1}$ in Theorem 2), and hence ${x}_{k}^{{}^{\prime}}\in {\Omega}_{2}$. □
3.2. A Filled Function Algorithm to Solve Unconstrained Optimization Problems
Based on the proposed filled function, we design a filled function algorithm to solve unconstrained optimization problems. The steps of the algorithm are as follows.
Initialization. Randomly generate 10 points in the feasible region and choose the point with the best function value as the initial point ${x}_{0}$. Then, set $bestX={x}_{0}$ and $bestVal=F\left({x}_{0}\right)$ to record the best solution and its corresponding function value, set $\epsilon ={10}^{-10}$ as the stopping criterion, and set $k=1$ as the iteration counter.
Optimize the objective function $F\left(x\right)$. Starting from the initial point ${x}_{0}$, we use the BFGS quasi-Newton method as the local search method to optimize the objective function and locate a local optimal point ${x}_{k}^{*}$. The main steps of the BFGS method are shown in Algorithm 1.
Construct the filled function at
${x}_{k}^{*}$:
Optimize the filled function $P(x,{x}_{k}^{*})$. Set ${x}_{k}^{*}+0.1$ as the initial point, and use the BFGS quasi-Newton method as the local search method to optimize the filled function $P(x,{x}_{k}^{*})$ and obtain a local minimum point ${x}_{k}^{{}^{\prime}}$ of $P(x,{x}_{k}^{*})$. It is known from property 3 of the filled function that the point ${x}_{k}^{{}^{\prime}}$ must lie in a basin lower than that of ${x}_{k}^{*}$.
Set the point ${x}_{k}^{{}^{\prime}}+0.1$ as the initial point, and continue to optimize the objective function $F\left(x\right)$ to obtain a new local minimum point ${x}_{k+1}^{*}$ of $F\left(x\right)$.
Determine whether $F\left({x}_{k+1}^{*}\right)-bestVal$ is less than $-\epsilon $. If so, update $bestX$ by ${x}_{k+1}^{*}$ and $bestVal$ by $F\left({x}_{k+1}^{*}\right)$, let $k=k+1$, and go to step 2; otherwise, $bestX$ is the global optimum and the algorithm terminates.
Algorithm 1 Main steps of the BFGS quasi-Newton method
 1: Given an initial value ${x}_{0}$ and an accuracy threshold $\epsilon $, set ${D}_{0}=I,k:=0$.
 2: Determine the search direction: ${d}_{k}=-{D}_{k}\cdot {g}_{k}$.
 3: Perform a line search ${\lambda}_{k}=argminf({X}_{k}+\lambda {d}_{k}),\lambda \in R$; set ${S}_{k}={\lambda}_{k}{d}_{k}$ and ${X}_{k+1}:={X}_{k}+{S}_{k}$.
 4: If $\parallel {g}_{k+1}\parallel <\epsilon $, then the algorithm ends.
 5: Calculate ${y}_{k}={g}_{k+1}-{g}_{k}$.
 6: Calculate ${D}_{k+1}=(I-\frac{{S}_{k}{y}_{k}^{T}}{{y}_{k}^{T}{S}_{k}}){D}_{k}(I-\frac{{y}_{k}{S}_{k}^{T}}{{y}_{k}^{T}{S}_{k}})+\frac{{S}_{k}{S}_{k}^{T}}{{y}_{k}^{T}{S}_{k}}$.
 7: Let $k:=k+1$ and go to Step 2.
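For concreteness, Algorithm 1 can be written directly in NumPy. The sketch below applies it to a small convex quadratic (an arbitrary illustrative choice), for which the exact line-search step ${\lambda}_{k}$ of line 3 has a closed form; on a quadratic, BFGS with exact line search reaches the minimizer in at most $n$ iterations.

```python
import numpy as np

# Minimize f(X) = 0.5 X^T A X - b^T X with the update of Algorithm 1.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def grad(X):
    return A @ X - b

X = np.zeros(2)
D = np.eye(2)                       # D_0 = I (inverse-Hessian estimate)
g_k = grad(X)
for _ in range(50):
    if np.linalg.norm(g_k) < 1e-12:
        break                       # line 4: stopping test
    d = -D @ g_k                    # line 2: search direction
    lam = -(g_k @ d) / (d @ A @ d)  # line 3: exact line search (quadratic)
    S = lam * d
    X = X + S
    g_next = grad(X)
    y = g_next - g_k                # line 5
    rho = 1.0 / (y @ S)
    I2 = np.eye(2)
    # line 6: D <- (I - S y^T/(y^T S)) D (I - y S^T/(y^T S)) + S S^T/(y^T S)
    D = (I2 - rho * np.outer(S, y)) @ D @ (I2 - rho * np.outer(y, S)) \
        + rho * np.outer(S, S)
    g_k = g_next

X_star = np.linalg.solve(A, b)      # exact minimizer, for comparison
```

For a general objective, the exact line search would be replaced by an inexact (e.g., Wolfe-condition) search, as standard BFGS implementations do.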
In the following, we use an example to demonstrate the optimization procedure of the filled function algorithm.
Figure 1 shows the objective function
$f\left(x\right)=x+10sin\left(5x\right)+7cos\left(4x\right)$ with the search region [−2, 2]. From
Figure 1, we can see that
$f\left(x\right)$ has three basins
${B}_{1}^{*}$,
${B}_{2}^{*}$ and
${B}_{3}^{*}$ in the search region, where
${B}_{3}^{*}$ is the lowest basin that contains the global optimal solution. Suppose the optimization procedure starts from
${x}_{0}$; using the BFGS local search method we can obtain a local minimal solution
${x}_{1}^{*}$ of the objective function
$f\left(x\right)$.
To escape from this local minimum
${x}_{1}^{*}$, we construct the filled function
$P(x,{x}_{1}^{*})$ at
${x}_{1}^{*}$, as shown in
Figure 2.
From
Figure 2, we can see that
${x}_{1}^{*}$ is a strictly local maximum (maximal point) of
$P(x,{x}_{1}^{*})$, which is guaranteed by the definition of the filled function. Therefore, a local search of
$P(x,{x}_{1}^{*})$, starting from point
${x}_{1}^{*}+0.1$, can easily escape from this point and yield a local minimum
${x}_{1}^{\prime}$ of
$P(x,{x}_{1}^{*})$. Next, using
${x}_{1}$ (where ${x}_{1}={x}_{1}^{\prime}+0.1$) as the initial point to optimize the objective function
$f\left(x\right)$, we can obtain another local minimal solution
${x}_{2}^{*}$ that is better than
${x}_{1}^{*}$. At this time, the first iteration is completed.
To escape from the local minimum
${x}_{2}^{*}$, we repeat the above steps to construct the filled function
$P(x,{x}_{2}^{*})$ at
${x}_{2}^{*}$, as shown in
Figure 3.
Similarly, $P(x,{x}_{2}^{*})$ peaks at the point ${x}_{2}^{*}$, which makes it easy to escape from this point. We continue to optimize $P(x,{x}_{2}^{*})$ to obtain a local minimum point ${x}_{2}^{\prime}$. Then, using ${x}_{2}={x}_{2}^{\prime}+0.1$ as the initial point to optimize the objective function $f\left(x\right)$, a new, better local optimal solution ${x}_{3}^{*}$ is obtained. Now, the second iteration is completed. We continue the above procedure, optimizing the objective function and the filled function alternately to escape from the current local optimal solution to a better one, until the global optimal solution is located.
From the above optimization procedure, we can clearly see that the proposed method can easily and continuously escape from a current local optimal solution to obtain a better one until the global optimal solution is located. This is a good way to overcome the disadvantage of premature convergence of optimization algorithms. Moreover, the proposed method also has three other advantages. First, since the proposed filled function is parameterless, the algorithm has no adjustable parameters to tune for different problems. Secondly, since the new filled function is continuous and differentiable, the proposed algorithm is less apt to produce extra local minima, while more choices of local search methods, especially the efficient gradient-based ones, can be applied to make the optimization more efficient and effective. Thirdly, once the filled function is designed and constructed, it is easy to implement and apply to different optimization problems.
There are mainly two disadvantages of the filled function method. First, it is not easy to design a good filled function, and each time a local optimal solution is found, the filled function has to be constructed anew. Secondly, the filled function method becomes less effective when the dimensionality of the problem is large. More research is needed to extend the scope of the filled function method.
4. Numerical Experiments
The proposed filled function algorithm is implemented in Matlab 2021 and tested on widely used test problems. Comparisons are made with a state-of-the-art filled function algorithm [
18], another continuously differentiable filled function algorithm [
5] and Ge’s filled function algorithm [
1]. The test problems used in this paper are listed as follows.
Test case 1. (The Rastrigin function)
The global minimum solution is
${x}^{*}={(0,0)}^{T}$, and the corresponding function value is
$F\left({x}^{*}\right)=-2$.
Test case 2. (Two-dimensional function)
where
$c=0.05,0.2,0.5$. The global minimum solution is
${x}^{*}={(1,0)}^{T}$, and the corresponding function value is
$F\left({x}^{*}\right)=0$ for all values of
c.
Test case 3. (Three-hump back camel function)
The global minimum solution is
${x}^{*}={(0,0)}^{T}$, and the corresponding function value is
$F\left({x}^{*}\right)=0$.
Test case 4. (Six-hump back camel function)
The global minimum solution is ${x}^{*}={(\pm 0.0898,\mp 0.7127)}^{T}$, and the corresponding function value is $F\left({x}^{*}\right)=-1.0316$.
Test case 5. (Treccani function)
The global minimum solution is
${x}^{*}={(-2,0)}^{T}$ and
${x}^{*}={(0,0)}^{T}$, and the corresponding function value is
$F\left({x}^{*}\right)=0$.
Test case 6. (Two-dimensional Shubert function)
There are multiple local minimum solutions in the feasible region, and the global minimum function value is
$F\left({x}^{*}\right)=-186.7309$.
Test case 7. (
$n$-dimensional function)
where
The global minimum solution is
${x}^{*}={(1,1,\dots ,1)}^{T}$, and the corresponding function value is
$F\left({x}^{*}\right)=0$ for all values of
n.
First, all results obtained by the new filled function algorithm are listed in
Table 1,
Table 2,
Table 3,
Table 4,
Table 5,
Table 6,
Table 7,
Table 8,
Table 9,
Table 10,
Table 11,
Table 12,
Table 13 and
Table 14. Further, comparisons are made with another continuously differentiable filled function algorithm, CDFA, from [
5]. In these tables, we use the following notations:
${x}_{k}^{*}:$ the local minimum of the objective function in the $k$th iteration.
${f}_{k}^{*}:$ the function value of the objective function at ${x}_{k}^{*}$.
$k:$ the iteration counter.
${F}_{f}:$ the total function evaluations of the objective function and the filled function.
CDFA: the filled function algorithm proposed in [
5].
FFFA: the filled function algorithm proposed in [
18].
Table 1,
Table 2,
Table 3,
Table 4,
Table 5,
Table 6,
Table 7,
Table 8,
Table 9,
Table 10,
Table 11,
Table 12,
Table 13 and
Table 14 show the numerical results of the test problems in different test criteria (different parameters and different dimensions) obtained by the proposed filled function algorithm. In these tables,
k means the iterations for the filled function algorithm to locate the global minimum solution
${x}^{*}$,
$F\left({x}^{*}\right)$ is the corresponding function value and
n is the dimension of the test problem.
From the numerical results, we can see that the proposed filled function algorithm can locate all the global minimum solutions successfully (some with a precision error of less than ${10}^{-10}$) and within a small number of iterations.
For these test problems, we also list the results of another continuously differentiable filled function algorithm, CDFA [
5]. Since CDFA reported results for only part of the test problems, we use slashes ( / ) to indicate the missing values. From the comparison, we can see that for problem 2 (
c = 0.2), we use one less iteration to locate a minimum solution with six orders of magnitude higher accuracy than CDFA. For problem 2 (
c = 0.5), although we use one more iteration than CDFA, we successfully located the global minimum 0. For problem 2 (
c = 0.05), we use one less iteration and obtain a better result than CDFA. For problem 3 we locate a better result (five orders of magnitude higher accuracy) than CDFA with the same iterations. For problems 5 and 7 (
n = 7), our algorithm uses one more iteration than CDFA. For problem 6, we use one less iteration than CDFA to locate the global minimum. For problem 7 (
n = 10), although we use one less iteration, CDFA located the global minimum 0 while we obtain
$1.51\times {10}^{-13}$. From the above analysis, the comparison shows that our algorithm has four wins (problem 2 with
c = 0.2,
c = 0.05, and problems 3 and 6), two losses (problems 5 and 7 with
n = 7), and three ties (problem 2 with
c = 0.5, and problems 4 and 7 with
n = 10) out of the overall ten test problems. Therefore, we come to the conclusion that the proposed filled function algorithm is more effective than CDFA.
Comparisons are also made with a state-of-the-art filled function algorithm, FFFA [
18] and Ge’s filled function [
1]. The comparison results are listed in
Table 15, where
$No.$ is the test problem number and
n is the dimension,
${F}_{f}$ refers to the total number of function evaluations consumed to obtain the optimal solutions (minimum solutions for minimization problems).
Since all three filled function algorithms can find the global minimum solutions, we compare their efficiency by the total number of iterations and function evaluations consumed by each algorithm. From
Table 15, we can see that for all test problems, our algorithm is much better than Ge’s algorithm. As for the comparison with FFFA, we can see that for test problems 1, 3 and 4, although our algorithm takes one more iteration to get the optimal solution, we use much fewer function evaluations. For problems 2, 5 and 6, our algorithm uses fewer iterations and fewer function evaluations than that of FFFA. For problem 7, we can see that for dimension
$n=2$, our algorithm uses fewer iterations but more function evaluations than FFFA; for
$n=3$ and
$n=7$, our algorithm uses more function evaluations than that of FFFA, but for
$n=5$ and
$n=10$, our algorithm performs much better than that of FFFA. We can see that for
$n=5$ our algorithm uses three fewer iterations and only uses 2287 function evaluations, while FFFA uses 12,681 function evaluations; for
$n=10$, our algorithm only uses 12,795 function evaluations (nearly half of FFFA’s 20,044) to obtain the global optimal solution. Overall, we conclude that the filled function algorithm proposed in this paper is more efficient than FFFA. From the numerical results and the comparisons with the other three filled function algorithms, we conclude that the new filled function algorithm is effective and efficient for solving unconstrained global optimization problems.
5. An Application of the Filled Function Algorithm
In this section, the proposed filled function algorithm is applied to the supply chain problem. Supply chain problems can be divided into three types, namely manufacturer’s core supply chain, supplier core supply chain and seller core supply chain. In this paper, we mainly consider the manufacturer’s core supply chain. For the manufacturer’s core supply chain, there are multiple suppliers, multiple shippers, multiple generalized transportation methods, multiple sellers and one manufacturer. In this supply chain, the manufacturer uses different raw materials to produce various products that are sold by multiple sellers. The optimization objective of the supply chain is to minimize the total transportation cost.
We suppose there is a supply chain with a manufacturer as the core, one supplier and one kind of raw material required for production. The unit raw material cost of this kind of raw material supplied by the supplier is 2000 USD/t, the maximum supply is 5000 t, and all shippers can deliver it. The manufacturer produces only one product and requires 1.2 t of raw material per ton of product. There are two shippers, both of which can provide services of two generalized modes of transport. There are three sellers, and the order quantity of each seller must be strictly satisfied. The manufacturer initially has no inventory products, the production cost per unit product is 1000 USD, and the maximum production capacity is 4500 t. The relevant unit costs are shown in
Table 16,
Table 17 and
Table 18.
The optimization model used here is:
The constraints are:
where:
where
${x}_{ijnk}$ are nonnegative integers,
$k=1,2,3;j=1,2;n=1,2;{\beta}_{1jnl}$ are nonnegative numbers. The symbols used in the model are explained as follows:
$ZC$: Total supply chain cost;
${x}_{ijnk}$: The quantity of the $i$th product delivered to the $k$th seller by the $j$th transporter using the $n$th generalized transportation method;
${\beta}_{ijnk}$: The ratio of the amount of the $r$th raw material shipped from the $l$th supplier by the $j$th transporter using the $n$th generalized transportation method to the manufacturer’s total demand for that raw material.
It can be seen from the model that the objective function is nonlinear, and the constraints of suppliers and the transportation of raw materials are also nonlinear, so the model is a nonlinear mixed integer programming model.
We applied the proposed filled function algorithm to this supply chain model to optimize the total transportation cost. The algorithm was implemented in MATLAB 2021b and run on a 64-bit Windows 10 personal computer with an Intel(R) Core(TM) i5-9400F CPU @ 2.90 GHz; we executed 20 independent runs, and the results are listed in
Table 19.
Table 19 shows the optimization results of nonzero variables (the values of other variables are all 0). We can see that this supply chain has multiple optimal solutions. By careful observation, we can see that when
${x}_{1121}$,
${x}_{1122}$,
${x}_{1222}$,
${\beta}_{1111}$ and
${\beta}_{1211}$ are fixed to the values shown in
Table 19, and
${x}_{1113}$ and
${x}_{1213}$ satisfy the following conditions:
Therefore, the minimum transportation cost of this example is USD 1171.8 million. In this case, ${x}_{1122}=200$ means that the manufacturer in this supply chain should arrange shipper 1 to deliver 200 t of the product to the second seller using the second generalized mode of transportation. ${\beta}_{1211}=0.444$ means that the manufacturer should let shipper 2 complete 44.4 percent of the transportation task using generalized transportation method 2.
We compare our results with those from ref. [
20]; the proposed filled function algorithm in this paper finds multiple optimal solutions and takes less computational time. From
Table 19, we can see that our algorithm successfully finds five optimal solutions while the algorithm in [
20] only finds a single optimal solution
${x}^{*}=$ (0, 0, 800, 0, 1000, 200, 0, 0, 0, 0, 1000, 0),
${\beta}^{*}=(0.556,0,0.444,0)$. Moreover, the average running time of our algorithm is 1106 s, while the running time of [
20] is 5128 s. Therefore, we can conclude that the filled function algorithm in this paper is more effective and efficient.