Article

A Joint Optimization Algorithm Based on the Optimal Shape Parameter–Gaussian Radial Basis Function Surrogate Model and Its Application

College of Science, North China University of Science and Technology, Tangshan 063210, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3169; https://doi.org/10.3390/math11143169
Submission received: 12 May 2023 / Revised: 2 July 2023 / Accepted: 17 July 2023 / Published: 19 July 2023
(This article belongs to the Special Issue Numerical Analysis and Optimization: Methods and Applications)

Abstract

We propose a joint optimization algorithm that combines the optimal shape parameter–Gaussian radial basis function (G-RBF) surrogate model with global and local optimization techniques to improve accuracy and reduce costs. We analyze factors that affect the accuracy of the G-RBF surrogate model and use the particle swarm optimization (PSO) algorithm to determine the optimal shape parameter and control the number and spacing of the sampling points for a high-precision surrogate model. Global optimization refines the surrogate model, serving as the initial value for local optimization to further refine the problem. Our experiments show that this method significantly reduces computation costs. We optimize the section size of cantilever beams for different materials, obtaining the optimal section size and mass for each. We find that hard aluminum alloy is the optimal choice, meeting yield strength and deflection requirements through finite element analysis verification. Our work highlights the effectiveness of the joint optimization algorithm based on the surrogate model, providing valuable tools and insights into optimizing various structures.

1. Introduction

In practical systems, physical models may be insufficient to describe system behavior due to external factors and error terms. Response surface methodology (RSM) [1] is a modeling approach that explores the nonlinear relationships between input and output variables in order to model, predict, and optimize the objective function. However, the objective function is often complex, nonlinear, and difficult to solve using analytical methods. Therefore, constructing a response surface model that approximates the objective function using experimental data can help engineers and scientists better understand practical systems, predict unknown behavior, and make scientific decisions. Different methods can be used to construct response surface models, including polynomial regression, moving least squares, Kriging, and radial basis function (RBF) methods. Polynomial regression [2] is simple and easy to use and can handle most nonlinear problems but faces difficulties in high-dimensional settings. Moving least squares (MLS) [3] is simple in form and computationally efficient but has poor approximation ability for strongly nonlinear models. Kriging [4] is a widely used approximation method for handling linear and nonlinear problems with high prediction accuracy, but it has high computational costs and is unsuitable for discrete output variables. RBF [5] has several advantages, including a simple form, isotropy, dimension independence, and suitability for high-dimensional problems. This paper adopts the G-RBF as the approximation function to construct a high-precision surrogate model. The selection of the shape parameter directly affects the interpolation quality when using the G-RBF for interpolation [6,7,8]. Nevertheless, shape parameters are mostly selected using trial-and-error or empirical methods, resulting in low efficiency or accuracy. Hence, this paper employs the particle swarm algorithm to select the shape parameters for test functions of different dimensions. Comparative experiments verify that the selected shape parameters achieve high interpolation accuracy.
In the field of multidisciplinary design optimization, direct optimization utilizing high-precision models becomes infeasible when the complexity of the models expands. Frequently, a response surface model is created as a low-precision model to iteratively optimize and decrease the frequency at which the high-precision model is executed, ultimately decreasing expenses. Although some approaches can enhance the precision of the approximate model, they may not be as effective in reducing the high-precision model’s frequency of execution. In contrast, techniques that incorporate optimized solutions into the approximate model could cause the approximate model to converge locally and lose global optimality [9,10]. Our proposed method aims to regulate the number of times the high-precision model is run during the model approximation process. In particular, our global optimization method optimizes the approximate model’s solution and utilizes it as the starting point for the local optimization of the high-precision model. This guarantees that the solution will be valid within the feasible domain. The global optimization method’s ability to search on a global scale enables it to discover superior initial conditions, whereas the quick convergence rate of the local optimization method can minimize the high-precision model’s frequency of execution and guarantee solution reliability. Experimental testing provides verification of the efficacy of the joint optimization algorithm.
Cantilever beams are extensively used in the automatic feeding devices of single-machine multi-station stamping systems and their structural parameters require optimized designs to ensure higher stability and economic benefits under specific working conditions [11]. However, the choice of optimization methods significantly impacts the final optimization results. Although numerical optimization methods such as the gradient method, simplex method, and direct search method are highly efficient, they often yield locally optimal solutions, making it challenging to achieve a globally optimal solution [12,13,14]. On the other hand, optimization methods like ant colony optimization and the genetic algorithm possess good global optimization capabilities but come with high computational costs [15,16]. This study proposes a joint optimization algorithm based on the G-RBF surrogate model with optimal shape parameters to achieve the lightest weight of the cantilever beam structure of automatic feeding devices. Our algorithm achieves a globally optimal solution while significantly reducing the frequency of high-precision model calls, thus saving computational costs. Our work contributes to material design by providing a practical and efficient optimization approach for cantilever beam structures in automatic feeding devices, which can enhance their stability and economic benefits under specific working conditions.

2. G-RBF Surrogate Model

2.1. G-RBF Interpolation Principle

An RBF interpolant consists of a weighted sum of radial basis functions centered at different points in the input space. In our study, we used the Gaussian radial basis function, with the centers chosen to be the training samples. The weights of the radial basis functions were determined by solving a linear system of equations that enforces interpolation of the training data. The expression is as follows [7]:
$y = \sum_{k=1}^{n} \omega_k \, \phi_k\!\left(\lVert x - x_k \rVert\right) = W^{\mathrm{T}} \Phi(x)$ (1)
The weight coefficient vector, represented by $W = [\omega_1, \omega_2, \ldots, \omega_n]^{\mathrm{T}}$, can be calculated using the following formula:
$W = A^{-1} y$ (2)
Matrix A can be computed using the following formula:
$A = \begin{bmatrix} \varphi(\lVert x_1 - x_1 \rVert) & \cdots & \varphi(\lVert x_1 - x_n \rVert) \\ \vdots & \ddots & \vdots \\ \varphi(\lVert x_n - x_1 \rVert) & \cdots & \varphi(\lVert x_n - x_n \rVert) \end{bmatrix}$ (3)
where $\varphi(\lVert x - x_1 \rVert)$ is a radial basis function. A common choice for $\varphi$ is the Gaussian radial basis function (G-RBF), which can be expressed as follows:
$\varphi(r) = e^{-c^2 r^2}$ (4)
In Equation (4), c is the shape parameter of the Gaussian radial basis function.
In two-dimensional space, the G-RBF can be expressed as [17]:
$\mathrm{Gauss} = \exp\!\left( -\dfrac{(x-\mu_x)^2}{2\delta_x^2} - \dfrac{(y-\mu_y)^2}{2\delta_y^2} \right)$ (5)
In three-dimensional space, the Gaussian radial basis function can be expressed as [17]:
$\mathrm{Gauss} = \exp\!\left( -\dfrac{(x-\mu_x)^2}{2\delta_x^2} - \dfrac{(y-\mu_y)^2}{2\delta_y^2} - \dfrac{(z-\mu_z)^2}{2\delta_z^2} \right)$ (6)
In Equations (5) and (6), $(x, y, z)$ are the coordinates of the center of the grid; $(\mu_x, \mu_y, \mu_z)$ are the center coordinates of the radial basis function; and $\delta_x$, $\delta_y$, and $\delta_z$ are the distribution radii of the radial basis function. The radial basis function is used to approximate the underlying function of the dataset, which enables us to make predictions for new, unseen data points. By using the radial basis function, we are able to capture the nonlinear relationships between the input variables and the output variable, which cannot be achieved with linear models.
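To make the construction concrete, the following sketch builds the interpolation matrix of Equation (3) with the Gaussian kernel of Equation (4) and solves Equation (2) for the weights. It is an illustration only, written in Python with NumPy (the paper does not specify an implementation); the 1D sample data and the shape parameter value c = 1.0 are assumptions made for the example.

```python
import numpy as np

def grbf_interpolant(centers, y, c):
    """Fit Gaussian-RBF weights by solving A w = y (Equations (2)-(4))."""
    # Pairwise distances between centers: r_ij = ||x_i - x_j||
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.exp(-(c ** 2) * r ** 2)      # Gaussian kernel matrix, Equation (3)
    w = np.linalg.solve(A, y)           # weight vector, Equation (2)

    def predict(x):
        rx = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        return np.exp(-(c ** 2) * rx ** 2) @ w   # interpolant, Equation (1)
    return predict

# Illustrative 1D example (assumed data and assumed shape parameter)
xs = np.linspace(0.0, 7.0, 21).reshape(-1, 1)
ys = np.sin(xs[:, 0])
s = grbf_interpolant(xs, ys, c=1.0)
print(s(np.array([[3.5]])))  # interpolated value near sin(3.5)
```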

2.2. Factors Affecting the Effectiveness of the G-RBF Surrogate Model

There are two main factors that impact the G-RBF interpolation effect, which are as follows [7]:
  • Shape parameter:
The Gaussian radial basis function (G-RBF) interpolation method has been widely used in various fields due to its high accuracy and efficiency. However, the choice of the shape parameter c is critical to the success of G-RBF interpolation, as it directly affects the shape of the basis function. As shown in Figure 1, different values of c can result in significantly different shapes of the G-RBF. Furthermore, an optimal value of c, denoted as $c_{opt}$, exists for each unique interpolation problem, which can greatly improve the accuracy of the interpolation results.
To demonstrate the importance of selecting an appropriate value for c, we conducted an experiment on the interpolation of Equation (16) using the G-RBF method with different values of c. As shown in Figure 2, the interpolation error is highly dependent on the choice of c, with an optimal value $c_{opt}$ that minimizes the interpolation error. These findings are consistent with previous studies, highlighting the critical role played by the shape parameter c in the G-RBF interpolation method.
Therefore, selecting an appropriate shape parameter c is crucial for obtaining accurate and reliable G-RBF interpolation results. Future studies could explore robust methods for determining the optimal value of c in various interpolation problems and further investigate the performance of the G-RBF interpolation method in different applications.
  • Condition number:
Determining $A^{-1}$ in Equation (2) is crucial for obtaining the interpolation function. Although A is non-singular when the sampling points are distinct, its condition number can become exceedingly large. The condition number of A is defined as
$\mathrm{cond}(A) = \lVert A^{-1} \rVert \cdot \lVert A \rVert$ (7)
To calculate $\mathrm{cond}(A)$, one evaluates $\lambda_{\max}$ and $\lambda_{\min}$, the maximum and minimum eigenvalues of A. The formula is given by
$\mathrm{cond}(A) = \lambda_{\max} / \lambda_{\min}$ (8)
When the sampling points lie close together, A is often poorly conditioned. Figure 3 illustrates the relationship between the matrix condition number and the number of sampling points used for the G-RBF interpolation of Equation (16). It can be observed that as the number of sampling points increases and their spacing shrinks, A becomes increasingly ill-conditioned. The problem posed by a poorly conditioned matrix is that small imperfections in the solution of the linear system can significantly affect the precision of the approximation.
Generally, there are two approaches to handling a poorly conditioned matrix: increasing the spacing between the sampling points or solving the linear system more accurately. However, increasing the number of sampling points can exacerbate the equation-solving difficulty to the extent that an accurate solution becomes practically unattainable, greatly impairing model accuracy.
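This growth is easy to observe numerically. The short sketch below is an illustration with assumed values, not the experiment behind Figure 3: it evaluates the condition number of A on progressively denser 1D grids. For the symmetric positive-definite Gaussian kernel matrix, NumPy's 2-norm condition number coincides with the eigenvalue ratio of Equation (8).

```python
import numpy as np

c = 1.0  # assumed shape parameter, for illustration only
for n in (5, 10, 20, 40, 80):
    xs = np.linspace(0.0, 1.0, n)
    r = np.abs(xs[:, None] - xs[None, :])
    A = np.exp(-(c ** 2) * r ** 2)       # kernel matrix, Equation (3)
    # For symmetric positive-definite A, the 2-norm condition number
    # equals lambda_max / lambda_min (Equation (8)).
    print(n, np.linalg.cond(A))
```

As the grid is refined, the printed condition numbers grow rapidly, mirroring the ill-conditioning discussed above.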
With regard to the two main factors that affect the G-RBF interpolation effect mentioned above, we appropriately set them in Section 2.3.1 and Section 2.4, respectively. This results in a high-precision G-RBF surrogate model.

2.3. Optimal Shape Parameter Determination in G-RBF Interpolation

2.3.1. Optimal Shape Parameter Determination Based on Particle Swarm Optimization in G-RBF Interpolation

The selection of the shape parameters in G-RBF interpolation can be transformed into the following optimization problem:
$E_{\max}(c) = \max_{x \in [a, b]} \left| s(x, c) - f(x) \right|$ (9)
$\mathrm{Find}\ c_{opt}: \ \min_{c} E_{\max}(c)$ (10)
The quantities involved are the interpolant $s(x, c)$, the original function $f(x)$, the maximum error $E_{\max}(c)$, and the required optimal shape parameter $c_{opt}$ for the interpolation.
Particle swarm optimization (PSO) [18] is a population-based optimization algorithm inspired by the social behavior of birds. PSO maintains a population of particles, each representing a potential solution to the optimization problem. Each particle has a position and velocity in the search space, and the algorithm updates these values iteratively to search for the optimal solution.
The position of each particle at iteration t is represented by a vector $x_i(t)$, and its velocity by a vector $v_i(t)$. The best position that particle i has achieved so far is denoted by $p_i(t)$, and the best position achieved by any particle in the swarm is denoted by $g(t)$. The objective function to be optimized is denoted by $f(x)$.
At each iteration, the algorithm updates the position and velocity of each particle based on its current position, velocity, and the best positions achieved by both itself and the swarm as a whole. The position and velocity updates are given by:
$v_i(t+1) = w\, v_i(t) + c_1 r_1 \left( p_i(t) - x_i(t) \right) + c_2 r_2 \left( g(t) - x_i(t) \right)$ (11)
$x_i(t+1) = x_i(t) + v_i(t+1)$ (12)
where w is the inertia weight, which controls the particle's tendency to maintain its current velocity; $c_1$ and $c_2$ are the cognitive and social learning factors, respectively, which control the particle's tendency to follow its personal best position and the swarm's best position; and $r_1$ and $r_2$ are random numbers drawn from a uniform distribution.
The algorithm terminates when a stopping criterion, such as a maximum number of iterations or a desired level of convergence, is met. The best solution found by the algorithm is the particle with the best position achieved throughout the iterations.
In summary, PSO is a population-based optimization algorithm that updates the position and velocity of each particle based on its current position, velocity, and the best positions achieved by both itself and the swarm as a whole. By emulating the social behavior of birds, PSO is capable of efficiently searching for the optimal solution to complex optimization problems.
This paper employs PSO to optimize the shape parameter of G-RBF interpolation. This technique is particularly advantageous when dealing with complex systems that have a large amount of data to model using the G-RBF proxy, as it significantly reduces the time needed for parameter determination. G-RBF interpolation requires the determination of the optimal shape parameter $c_{opt}$ that minimizes the interpolation error. The process is as follows:
Step 1: Initialize the particle swarm
Assuming that the particle swarm consists of n particles, the position and velocity of each particle are denoted by $x_i$ and $v_i$, respectively. The initial positions and velocities are generated randomly and typically follow a uniform or Gaussian distribution.
Step 2: Determine the fitness value
The position of each particle represents a viable design solution. Based on the current position, the objective function value given by the maximum absolute error is used as the particle's fitness value $f_i$.
Step 3: Update particle velocity and position
The velocity ($v_i^t$) and position ($x_i^t$) of the particles are updated at the t-th iteration as follows:
$v_i^{t+1} = w\, v_i^t + c_1 r_1 \left( x_{pbest_i} - x_i^t \right) + c_2 r_2 \left( x_{gbest} - x_i^t \right)$ (13)
$x_i^{t+1} = x_i^t + v_i^{t+1}$ (14)
Here, the inertia weight (w) regulates the inertia of the particle's motion. The acceleration coefficients ($c_1$ and $c_2$) determine the impact of the particle's individual historical best position ($x_{pbest_i}$) and the global best position ($x_{gbest}$) on the particle's velocity. The random numbers ($r_1$ and $r_2$) introduce a stochastic element into the calculation.
Step 4: Update the optimal values
After each iteration, the historical best of every particle and the global best are updated. For the ith particle, if the current position yields a fitness value $f_i$ that is less than its historical optimal fitness value $f_{pbest_i}$, then $pbest_i = x_i$ and $f_{pbest_i} = f_i$ are updated. $gbest$ is updated with the position that has the lowest fitness value among all particles.
Step 5: Convergence judgment
It is determined whether the particle swarm has converged, meaning that the positions of the particles have barely changed or not changed at all. If the iteration stop condition is met, the globally optimal value and position are output and the algorithm ends. If not, the algorithm returns to Step 3 for another iteration.
To ensure the reproducibility of our results, we provide the specific parameters and settings used in the PSO algorithm, which was used to optimize the shape parameter and manage the sampling points. The swarm size was set to 50, the maximum number of iterations to 500, the inertia weight to 0.8, and the cognitive and social parameters to 1.5. The constriction factor was set to 0.729 to ensure the convergence of the algorithm.
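A minimal sketch of Steps 1–5 applied to the scalar shape parameter is given below, using the settings reported above (50 particles, 500 iterations, inertia weight 0.8, learning factors 1.5). The search bounds for c and the 1D test function are assumptions made for the example, and the inertia-weight update of Equations (13) and (14) is used without the constriction factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def e_max(c, xs, f, grid):
    """Maximum absolute interpolation error E_max(c), Equation (9)."""
    A = np.exp(-(c**2) * (xs[:, None] - xs[None, :])**2)
    w = np.linalg.solve(A, f(xs))
    s = np.exp(-(c**2) * (grid[:, None] - xs[None, :])**2) @ w
    return np.max(np.abs(s - f(grid)))

def pso_copt(xs, f, grid, bounds=(0.01, 5.0), n=50, iters=500,
             w=0.8, c1=1.5, c2=1.5):
    """PSO over the scalar shape parameter c (Steps 1-5 above)."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, n)                  # Step 1: initialize swarm
    vel = rng.uniform(-1.0, 1.0, n)
    fit = np.array([e_max(c, xs, f, grid) for c in pos])  # Step 2: fitness
    pbest, pfit = pos.copy(), fit.copy()
    gbest = pos[np.argmin(fit)]
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)  # Step 3
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([e_max(c, xs, f, grid) for c in pos])
        better = fit < pfit                        # Step 4: update bests
        pbest[better], pfit[better] = pos[better], fit[better]
        gbest = pbest[np.argmin(pfit)]
    return gbest

# Illustrative run on an assumed 1D test function
xs = np.linspace(0.0, 7.0, 21)
grid = np.linspace(0.0, 7.0, 401)
print(pso_copt(xs, np.sin, grid))
```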
In conclusion, the PSO algorithm outputs the optimal shape parameter $c_{opt}$ of the G-RBF based on the calculated results. Experimental research by numerous researchers [19,20,21,22,23,24,25,26] has confirmed that the method proposed in [27] demonstrates high accuracy in selecting the shape parameters of radial basis function interpolation, and it has therefore been widely utilized. For comparison, this study selected seven test functions from [27,28,29] (Table 1 and Table 2) and computed them under the conditions specified in Section 2.3.1, comparing the experimental results to those obtained using the method in [27] (Table 3).
Based on the experimental results in Table 3, the PSO algorithm was found to be effective in determining the optimal shape parameter of G-RBF interpolation. Specifically, our method significantly reduced the operational cost and improved the accuracy of the interpolation, particularly for $f_5$.

2.4. Control of Sampling Points

Section 2.2 examined how the sampling points affect the accuracy of the G-RBF surrogate model; we therefore control their selection, number, and spacing.
First of all, the selection of sampling points is a crucial step in the research methodology and must be carried out according to specific principles. It is imperative that the sampling points chosen for this study are representative of the entire space, as the accuracy of the approximate model heavily relies on this factor. Given the unique properties of the Latin hypercube method [30], it is the preferred approach for extracting information from the entire space and is therefore utilized in this study.
Moreover, the addition of sample points needs to be regulated: once the model attains a specified accuracy level, there is no need to continue adding sample points. The particular regulation method is as follows. An accuracy threshold $\theta$ is established for the surrogate model. If the root mean square error (RMSE) exceeds $\theta$, further sample points are added; if the RMSE is less than or equal to $\theta$, no more points are needed. In addition, the number of sampling times is restricted to a maximum value of N.
Finally, it is critical to regulate the spacing between sampling points. As described in Section 2.2, sampling points that are too close together may result in an ill-conditioned matrix A, hence the need to regulate the spacing of added points. Ensuring that points are not too close is also important because clustered points can fail to represent the system's global information, negatively affecting the global optimization results; moreover, a program may terminate prematurely when an added point lies too close to an existing sampling point, failing to meet the precision requirements. As a result, it is crucial to filter out points that are too close after adding them. The current sampling points are labeled $S_e = \{x_1, \ldots, x_j, \ldots, x_s\}$, and the set of points added afterwards is labeled $S_a = \{x_1, \ldots, x_i, \ldots, x_n\}$. Since the shape parameter c largely determines the matrix element values and thus plays a significant role in assessing the proximity of points, the criterion governing sampling point spacing, once $c_{opt}$ has been determined using the method of Section 2.3.1, is that $d_i = \lVert x_i - x_j \rVert > c_{opt}$ must hold for each $x_i \in S_a$ with respect to the existing points $x_j \in S_e$.
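The point-addition rule can be sketched as follows. SciPy's Latin hypercube sampler stands in for the design of [30], and the bounds, batch size, and $c_{opt}$ value are assumptions made for the example.

```python
import numpy as np
from scipy.stats import qmc

def add_points(existing, n_new, bounds, c_opt, seed=0):
    """Draw Latin hypercube candidates and keep only those farther than
    c_opt from every current sampling point (the Section 2.4 criterion)."""
    sampler = qmc.LatinHypercube(d=bounds.shape[0], seed=seed)
    cand = qmc.scale(sampler.random(n_new), bounds[:, 0], bounds[:, 1])
    dists = np.linalg.norm(cand[:, None, :] - existing[None, :, :], axis=-1)
    keep = dists.min(axis=1) > c_opt   # d_i = min_j ||x_i - x_j|| > c_opt
    return cand[keep]

# Illustrative usage with assumed values
bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
current = np.array([[0.0, 0.0], [1.0, 1.0]])
print(add_points(current, 10, bounds, c_opt=1.668))
```

Rejected candidates would simply be re-drawn in a subsequent batch until the accuracy threshold is met or the sampling budget N is exhausted.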

3. Joint Optimization Algorithm Based on G-RBF Surrogate Model

3.1. Main Idea of the Joint Optimization Algorithm

One of the key advantages of global optimization algorithms is their ability to escape the limitations of locally optimal solutions. Although they possess strong search capabilities, their convergence speed is relatively slow, necessitating multiple high-precision model calculations. Alternatively, local optimization methods demonstrate faster convergence rates but are prone to reaching locally optimal solutions, particularly when a poor initial value is selected. In response to these issues, the joint optimization method utilizes the global search capabilities of the former coupled with the faster convergence rate of the latter to ensure optimization convergence, minimize high-precision model calculations, and reduce calculation costs.
Initially, data are utilized to determine the optimal shape parameters using the method described in Section 2.3.1. Then, a surrogate model using G-RBF is constructed. Subsequently, the global optimization method is used to optimize the model, with the optimization cost being very small due to the simplicity of the G-RBF surrogate model. Lastly, the optimized variable value is taken as the initial value, and a local optimization algorithm is subsequently used to optimize the high-precision surrogate model to ensure the accuracy of the optimized solution.

3.2. Process of the Joint Optimization Algorithm

This section first specifies the control thresholds for the various parameters and the error evaluation method for the joint optimization process. The accuracy threshold for the surrogate model is $\theta$, the maximum number of sampling times is N, and the inter-sample distance is controlled by $c_{opt}$. Errors are evaluated using the root-mean-square error (RMSE), expressed as:
$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{N} \left( y_i - \tilde{y}_i \right)^2}{N}}$ (15)
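In code, Equation (15) is a one-line check; here y_true and y_pred stand for the high-precision model outputs and the surrogate predictions at the validation points.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error, Equation (15)
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
```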
The following section outlines the specific steps involved in the joint optimization algorithm based on the G-RBF surrogate model with an optimal shape parameter.
Step 1: The entire system is sampled using Latin hypercube, followed by the selection of the optimal shape parameters based on the sampling points using the method described in Section 2.3.1.
Step 2: The surrogate model undergoes precision verification; while $\mathrm{RMSE} > \theta$, further points are added. Following the description in Section 2.4, a validation procedure is applied to each addition: a newly added point $x_i$ must satisfy $d_i = \lVert x_i - x_j \rVert > c_{opt}$ relative to every preexisting point $x_j$. Each added point is therefore either valid or invalid, and an invalid point must be re-drawn, until $\mathrm{RMSE} \le \theta$ is achieved or the maximum number of sampling times N is reached.
Step 3: The G-RBF surrogate model obtained can be optimized using global optimization techniques to determine the optimal solution. The use of a sampling strategy that ensures that points are well spaced allows the surrogate model to represent global information. Therefore, the optimal solution obtained through global optimization techniques is likely to be very close to the optimal solution of the original problem.
Step 4: The global optimization solution serves as an initial value submitted to the local optimization method to further optimize the high-precision G-RBF surrogate model, resulting in the optimal solution of the joint optimization algorithm.
The joint optimization algorithm is presented in the form of a flowchart in Figure 4.
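The core of Steps 3 and 4 can be sketched in a few lines. SciPy's differential evolution stands in for a generic global optimizer and Nelder-Mead for the local refiner; the toy objective and its "surrogate" are assumptions made only to keep the example self-contained.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def joint_optimize(expensive_f, surrogate_f, bounds):
    """Steps 3-4 of Section 3.2, with SciPy stand-ins: differential
    evolution plays the global optimizer on the cheap surrogate, and
    Nelder-Mead refines the expensive model from that starting point."""
    g = differential_evolution(surrogate_f, bounds, seed=0)   # Step 3
    loc = minimize(expensive_f, g.x, method="Nelder-Mead")    # Step 4
    return loc.x, loc.fun

# Illustrative toy: the surrogate is a slightly perturbed copy of the model
expensive = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
surrogate = lambda x: expensive(x) + 0.01 * np.sin(5 * x[0])
print(joint_optimize(expensive, surrogate, [(-5, 5), (-5, 5)]))
```

The design choice is visible in the call counts: the global search runs entirely on the cheap surrogate, so only the short local refinement touches the expensive model.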

3.3. Testing the Joint Optimization Algorithm

In this section, we test the effectiveness of the G-RBF surrogate-based joint optimization algorithm on two two-dimensional test functions. In practical applications, it is important to avoid excessive calls to high-precision models. Therefore, when comparing the global optimization algorithm, local optimization algorithm, and joint optimization algorithm, we report the maximum absolute error (MaxError) in function values ($E_y$), the MaxError in variable values ($E_x$), and the number of G-RBF surrogate model calls (T).
  • Test case 1: Ackley Function
The Ackley Function [31] is a multimodal function with multiple local minima and a very complex landscape. It is defined as follows:
$f(x, y) = -20 \exp\!\left( -0.2 \sqrt{\tfrac{1}{2}\left( x^2 + y^2 \right)} \right) - \exp\!\left( \tfrac{1}{2}\left[ \cos(2\pi x) + \cos(2\pi y) \right] \right) + e + 20$ (16)
Here, x and y are the independent variables, which lie in the range $-5 \le x, y \le 5$.
The optimal solution and value refer to the minimum point and minimum value of the function, respectively. The optimal solution for the Ackley Function is $(0, 0)$, where the function attains its minimum value $f(0, 0) = 0$. Optimization algorithms can be employed to find this solution. This study employed a genetic algorithm as the global optimization algorithm and a gradient descent method as the local optimization algorithm.
A genetic algorithm (GA) is a population-based optimization algorithm that mimics the process of natural selection to search for the optimal solution. The algorithm maintains a population of candidate solutions, represented by a set of chromosomes, and iteratively applies genetic operators, such as selection, crossover, and mutation, to generate new offspring. The fitness of each individual is evaluated based on the objective function, and the population evolves toward better solutions over time. The GA terminates when a stopping criterion, such as a maximum number of generations, is met or when the optimal solution is found.
The gradient descent method is an iterative optimization algorithm that seeks to minimize a function by iteratively adjusting the parameters in the direction of the negative gradient of the function. At each iteration, the algorithm computes the gradient of the function with respect to the current parameter values and updates the parameters in the direction of the negative gradient. The step size of the update is controlled by a learning rate parameter, which determines the size of the step taken in each iteration. The algorithm terminates when a stopping criterion, such as a desired level of convergence or a maximum number of iterations, is met.
In this study, the GA was employed as the global optimization algorithm to search for the optimal set of design variables, whereas the gradient descent method was used as the local optimization algorithm to refine the solution obtained using the GA. Specifically, the GA was used to generate a set of candidate solutions, and the best solution was selected as the initial point for the gradient descent method. The gradient descent method was then employed to fine-tune the solution by iteratively adjusting the design variables in the direction of the negative gradient of the objective function. The process was repeated until convergence was achieved or a maximum number of iterations was reached.
In summary, the combination of the genetic algorithm and gradient descent method allowed for efficient global and local optimization, respectively, and enabled the search for the optimal solution in a large search space. A population size of 30 and 100 generations were used for the GA, and the initial value for the gradient descent method was set to [−5, 0], as in the sketch below.
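A hedged end-to-end sketch of this test case follows. Since the paper's exact GA implementation is not available, SciPy's differential evolution plays the global role with the stated population size of 30 and 100 generations, and a hand-rolled gradient descent with a finite-difference gradient plays the local role; the learning rate and iteration count of the local stage are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def ackley(v):
    """Ackley Function, Equation (16)."""
    x, y = v
    return (-20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
            - np.exp(0.5 * (np.cos(2*np.pi*x) + np.cos(2*np.pi*y)))
            + np.e + 20)

# Global stage: differential evolution as a stand-in for the paper's GA
res = differential_evolution(ackley, [(-5, 5)] * 2,
                             popsize=30, maxiter=100, seed=0)

# Local stage: plain gradient descent with a finite-difference gradient
# (step size 0.01 and 200 iterations are assumed settings)
def grad(f, v, h=1e-6):
    e = np.eye(len(v))
    return np.array([(f(v + h*e[i]) - f(v - h*e[i])) / (2*h)
                     for i in range(len(v))])

v = res.x
for _ in range(200):
    v = v - 0.01 * grad(ackley, v)
print(v, ackley(v))   # should land near the optimum (0, 0)
```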
The best shape parameter $c_{opt} = 1.668$ for the G-RBF interpolation of Equation (16) was determined using the method described in Section 2.3.1. The RMSE threshold for the surrogate model was set to $\theta = 1 \times 10^{-6}$; Figure 5 and Figure 6 show the RMSE of the G-RBF surrogate model and the distribution of the sampled points.
The distribution of the sampling points, including the added points, appeared to be fairly uniform across the global range. This suggests that the controlled refinement strategy in Section 2.4 was effective in ensuring that the surrogate model adequately captured the global characteristics. This ensures that our joint optimization algorithm can find the global optimal solution.
The convergence of the genetic algorithm optimization of the original objective function and our proposed joint optimization algorithm are compared in Figure 7 and Figure 8.
Table 4 compares the optimization error and the number of function evaluations for the objective function (16).
As the gradient descent method exhibited significantly larger errors than the other two algorithms, we do not discuss it in detail here. In contrast, both the genetic algorithm and the joint optimization method converged to the exact solution, as demonstrated in Figure 7 and Figure 8 and Table 4. However, their convergence behaviors differed: the joint optimization method converged swiftly to the vicinity of the exact solution owing to its good initial values, whereas the genetic algorithm required considerably more runs of the G-RBF surrogate model before converging. These results highlight the effectiveness of the joint optimization method in reducing computation costs and accelerating convergence while ensuring solution accuracy.
  • Test case 2: Rosenbrock’s Banana Function
Rosenbrock’s Banana Function [32] is a well-known constrained optimization problem that is frequently used to evaluate the performance of optimization algorithms. The problem can be formulated as minimizing:
$f(x, y) = (1 - x)^2 + 100 \left( y - x^2 \right)^2, \quad \text{subject to } x + y \ge 2$ (17)
where x and y are bounded by the range $[-2, 2]$. The optimal solution to this problem is achieved at $x = 1$ and $y = 1$, at which point the objective function attains its minimum value $f(x, y) = 0$. We employed the ant colony algorithm as our global optimization method and the simplex method as the local optimization method in our study.
The ant colony algorithm (ACA) is a metaheuristic optimization algorithm inspired by the behavior of ants in finding the shortest path between their colony and food sources. The algorithm maintains a set of artificial ants, each representing a potential solution to the optimization problem. The ants construct a pheromone trail based on the quality of the solutions they encounter, and the pheromone trail guides subsequent ants toward better solutions. The algorithm terminates when a stopping criterion, such as a maximum number of iterations or a desired level of convergence, is met. The best solution found by the algorithm is the one with the highest pheromone trail.
The simplex method used here is the Nelder–Mead direct search, an iterative, derivative-free algorithm that maintains a simplex of candidate points and improves it step by step. At each iteration, the worst vertex of the simplex is replaced by a better point found through reflection, expansion, contraction, or shrink operations about the remaining vertices. The algorithm terminates when a stopping criterion, such as a desired level of convergence or a maximum number of iterations, is met.
In this study, we employed the ACA as the global optimization algorithm and the simplex method as the local optimization method. Specifically, the ACA was used to explore the search space and identify promising regions for the optimization problem. The best solution found by the ACA was then used as the initial point for the simplex method, which iteratively improved the solution by adjusting the design variables within the constraints. The process was repeated until convergence was achieved or a maximum number of iterations was reached.
In summary, the combination of the ant colony algorithm and simplex method allowed for efficient global and local optimization, respectively, and enabled the algorithm to search for the optimal solution in a large search space subject to linear constraints.
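The sketch below mirrors this pipeline under stated substitutions: SciPy offers no ant colony routine, so differential evolution stands in for the global stage; the constraint x + y ≥ 2 (as reconstructed in Equation (17)) enters through a quadratic penalty with an assumed weight; and Nelder-Mead provides the simplex-based local refinement.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Rosenbrock's Banana Function, Equation (17)
rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2

# Constraint x + y >= 2 as a quadratic penalty for the global stage
# (the penalty weight 1e3 is an assumed setting)
pen = lambda v: rosen(v) + 1e3 * max(0.0, 2.0 - (v[0] + v[1]))**2

# Global stage: differential evolution stands in for the ant colony
# algorithm, which SciPy does not provide.
g = differential_evolution(pen, [(-2, 2), (-2, 2)], seed=0)

# Local stage: Nelder-Mead simplex refinement from the global solution
loc = minimize(pen, g.x, method="Nelder-Mead")
print(loc.x, rosen(loc.x))   # expected near (1, 1) with value near 0
```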
The optimal parameter is $c_{opt} = 1.001$, with an RMSE threshold of $\theta = 1 \times 10^{-6}$. Figure 9 and Figure 10 illustrate the convergence processes of the ant colony and joint optimization algorithms. Table 5 compares the results obtained using the three optimization algorithms.
The solution obtained using the simplex method was far from optimal, so we do not discuss it in detail here. Table 5 and Figure 9 and Figure 10 show that both the ant colony and joint optimization algorithms converged to the optimal solution. Notably, the joint optimization algorithm needed to run the G-RBF surrogate model only about one-fifth as many times as the ant colony algorithm. This further supports the effectiveness of the algorithm proposed in this study.

3.4. Feasibility of the Joint Optimization Algorithm

Our joint optimization algorithm employs three technologies: a high-precision G-RBF surrogate model with an optimal shape parameter, global optimization, and local optimization. To ensure uniformity in space, capture whole-space characteristics, and bring the solution of the global optimization algorithm closer to the actual solution, the controlled point-addition strategy of Section 2.4 is employed in the sampling distribution. The convergence of the method is guaranteed by the existing global optimization algorithms. Hence, the joint optimization algorithm is deemed feasible.
From the two classic test functions, it is evident that the joint optimization algorithm can efficiently attain global optimization goals. Our optimization technique significantly reduces the number of high-precision model calls, thereby enhancing the method’s effectiveness and reducing costs in real-world applications.

4. Application of the Joint Optimization Algorithm in Cantilever Beam Design

4.1. Optimal Design Model of Cantilever Beam

Below is a cantilever design model from [11]. The limited operational space among stations in single-machine, multiple-workstation stamping systems and the relatively large size of cantilever beams make it essential to select materials reasonably and design rational dimensions. In addition, the length of the cantilever beam is dictated by the requirements and constraints of practical situations. Thus, the U-shaped section dimensions of the cantilever beam, shown in Figure 11, can be optimized. The primary variables are the width B (m), height H (m), thickness d (m), and length L (m) of the cantilever beam. Additionally, the auxiliary variables h, $e_1$, $e_2$, and b are introduced for the convenience of subsequent computation. In this context, s denotes the section's center of gravity, and x denotes the x-axis through the center of gravity.
The objective function represents the desired objective to be achieved during the design process and is expressed as a functional relationship between the design variables. Considering the cross-sectional parameters of the cantilever beam, the design variables are determined as follows:
$X = [x_1, x_2, x_3] = [B, H, d]$ (18)
Based on the structural design requirements of the cantilever beam, the optimization objective is to minimize the mass of the U-shaped cantilever beam while satisfying the relevant requirements; the objective function is therefore Equation (19), where $f(\cdot)$ represents the mass of the cantilever beam in kg and $\rho$ denotes the material density in kg·m⁻³.
$\min f(x_1, x_2, x_3) = \rho \left[ x_1 x_2 - (x_2 - x_3)(x_1 - 2 x_3) \right] L$ (19)

4.2. Constraint Condition

1. The performance constraint function
Firstly, to ensure satisfactory performance, a constraint function is proposed. The maximum deflection $f_v$ of the cantilever beam is determined by the design requirements of the beam's gripping structure in this paper and is obtained from the flexure formula of the mechanics of materials as follows:
$f_v = \dfrac{P g L^3}{3 E I_x} \le [f]$ (20)
$P = 2 m_1 + m_2 + m_3, \quad I_x = \dfrac{B e_1^3 - b h^3 + 2 d e_2^3}{3}, \quad e_1 = \dfrac{2 d H^2 + b d^2}{2 (2 d H + b d)}, \quad e_2 = H - e_1$ (21)
In Equations (20) and (21), P is the equivalent load mass in kilograms; $b = B - 2d$ is the inner width of the U-shaped section; E is the elastic modulus of the material; $[f]$ denotes the maximum allowable displacement under actual working conditions, set at $5 \times 10^{-3}$ m; $I_x$ is the moment of inertia of the U-shaped section about the x-axis; h is the distance in meters from the section's center of gravity to the inner bottom surface ($h = e_1 - d$); $e_1$ and $e_2$ are the distances in meters from the center of gravity to the bottom and top edges, respectively; and g is the gravitational acceleration, set at 10 m/s². The cantilever beam is 0.5 m long, and the masses of the suction cup and its support ($m_1$), the sensor and its support ($m_2$), and the workpiece ($m_3$) are 0.03 kg, 0.05 kg, and 0.2 kg, respectively.
2. The boundary constraint function
The boundary constraint function restricts the dimensions of the cantilever beam’s cross-section, as shown in Equation (22), with units of meters, to prevent interference with the mold due to spatial limitations between the actual working conditions and workstations.
$3.5 \times 10^{-2} \le x_1 \le 5 \times 10^{-2}, \quad 6 \times 10^{-3} \le x_2 \le 1 \times 10^{-2}, \quad 1.5 \times 10^{-3} \le x_3 \le 4 \times 10^{-3}$ (22)
Based on real-world conditions and operational constraints, we analyzed three materials commonly used for cantilever beams, namely 45 steel, rolled bronze, and hard aluminum alloy. The material properties are presented in Table 6.

4.3. Solving the Cantilever Beam Design Mathematical Model Using the Joint Optimization Algorithm Based on the G-RBF Surrogate Model

Based on the above objective function, constraints, and relevant data, the mathematical model for optimizing the mass of the cantilever beam is derived and is shown in Equation (23).
$\begin{cases} \min f(x) = f(x_1, x_2, x_3) = \rho \left[ x_1 x_2 - (x_2 - x_3)(x_1 - 2 x_3) \right] L \\ g_1(x) = f_v = P g L^3 / (3 E I_x) - [f] \le 0 \\ g_2(x) = -x_1 + 3.5 \times 10^{-2} \le 0 \\ g_3(x) = x_1 - 5 \times 10^{-2} \le 0 \\ g_4(x) = -x_2 + 6 \times 10^{-3} \le 0 \\ g_5(x) = x_2 - 1 \times 10^{-2} \le 0 \\ g_6(x) = -x_3 + 1.5 \times 10^{-3} \le 0 \\ g_7(x) = x_3 - 4 \times 10^{-3} \le 0 \end{cases}$ (23)
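For reference, a direct encoding of Equation (23) for hard aluminum alloy is sketched below, using a generic constrained solver (SciPy's SLSQP) in place of the paper's GA-plus-direct-search pipeline. The geometric quantities follow Equation (21) as reconstructed here (in particular h = e1 − d), so the returned solution need not reproduce Table 7 exactly.

```python
import numpy as np
from scipy.optimize import minimize

# Hard aluminum alloy properties (Table 6) and constants from Section 4.2
rho, E = 2700.0, 7.0e10          # density (kg/m^3), elastic modulus (Pa)
L, g, f_allow = 0.5, 10.0, 5e-3  # length (m), gravity (m/s^2), allowable deflection (m)
P = 2*0.03 + 0.05 + 0.2          # load mass (kg) per Equation (21), as reconstructed

def mass(x):
    B, H, d = x
    return rho * (B*H - (H - d)*(B - 2*d)) * L        # Equation (19)

def deflection(x):
    B, H, d = x
    b = B - 2*d                                       # inner width
    e1 = (2*d*H**2 + b*d**2) / (2*(2*d*H + b*d))      # centroid offset, Eq. (21)
    e2, h = H - e1, e1 - d
    Ix = (B*e1**3 - b*h**3 + 2*d*e2**3) / 3.0         # moment of inertia, Eq. (21)
    return P*g*L**3 / (3*E*Ix)                        # Equation (20)

bounds = [(3.5e-2, 5e-2), (6e-3, 1e-2), (1.5e-3, 4e-3)]            # Equation (22)
cons = [{"type": "ineq", "fun": lambda x: f_allow - deflection(x)}]  # g1(x) <= 0
res = minimize(mass, x0=[0.04, 0.008, 0.003], bounds=bounds,
               constraints=cons, method="SLSQP")
print(res.x, res.fun)
```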
In order to fulfill the requirements of a lightweight design, three different materials (Table 6) were chosen for the objective optimization of the cross-sectional dimensions of a single cantilever beam using the joint optimization algorithm detailed in Section 3.2. The optimal shape parameter of the G-RBF surrogate model, selected using Equation (10) in Section 2.3.1, is $c_{opt} = 2.135$, with an accuracy threshold of $\theta = 1 \times 10^{-6}$ on the RMSE. The global optimization was performed using the genetic algorithm, and the local optimization using the direct search method. The genetic algorithm parameters were as follows: population size (NP) = 20, migration probability (F) = 0.4, crossover probability (CF) = 0.8, fitness function value deviation of $1 \times 10^{-6}$, and number of generations (N) = 100; the remaining parameters were kept at their default values. Table 7 presents a comparative analysis of the final optimization results obtained using the joint optimization algorithm for the three materials, namely 45 steel, rolled bronze, and hard aluminum alloy.
It is evident that, under the nonlinear constraints, the optimal solutions for the different materials vary. Under the bending deformation requirement, the heaviest cantilever beam was the rolled bronze one at 0.7605 kg, whereas the lightest was the hard aluminum alloy one at 0.1943 kg. The comparative results in Table 7 indicate that with hard aluminum alloy as the cantilever beam material, the automatic feeding device is at its lightest.
This study employed a genetic algorithm as the global optimization algorithm and a direct search method as the local optimization algorithm. The direct search method is a derivative-free optimization algorithm that seeks to minimize an objective function without using gradient information. The algorithm iteratively searches the vicinity of the current solution by evaluating the objective function at a set of candidate points and updates the current solution based on the best candidate point found. The algorithm terminates when a stopping criterion, such as a desired level of convergence or a maximum number of iterations, is met.
The results of the experiments indicated that both the genetic algorithm and the proposed joint optimization method achieved convergence to the optimal solution, whereas the direct search method fell short. The performance of the three optimization algorithms when running the G-RBF proxy model during various material optimization processes is presented in Table 8. The results demonstrate that the joint optimization algorithm exhibits heightened performance.
Based on the joint optimization algorithm process, the optimal solutions for 45 steel, rolled bronze, and hard aluminum alloy were obtained, as depicted in Figure 12, Figure 13 and Figure 14. It can be observed that given the global algorithm provided good initial values for the joint optimization method, all materials converged to the optimal solution at a lower computational cost.
According to Table 7, hard aluminum alloy yielded the lightest cantilever beam, with a mass of 0.1943 kg. To confirm that the hard aluminum alloy cantilever beam was suitable for its intended working conditions, the grasping structure of a single cantilever beam was analyzed using ANSYS 2021 R2 finite element analysis software. The cantilever beam was modeled using SOLID186 elements with a mesh size of 2 mm; SOLID186 is a 3D 20-node solid element with reduced integration and hourglass control that is commonly used for structural analysis in ANSYS. The material properties of the hard aluminum alloy, including Young's modulus, Poisson's ratio, and yield strength, were defined in the material properties section of the ANSYS model. Young's modulus and Poisson's ratio were obtained from the literature, and the yield strength was set to 195 MPa, the yield limit of hard aluminum alloy. The cantilever beam was fixed at one end and subjected to a concentrated load of 100 N at the free end.
The analysis was performed using ANSYS's static structural module, which uses the finite element method to solve the equations governing the structural behavior of the cantilever beam. The convergence criterion for the simulation was set to a maximum displacement change of 0.01 mm per iteration. The test results indicated that the maximum displacement of the cantilever beam was approximately 0.0011 m, far below the deflection requirement of 0.005 m. Additionally, the maximum stress experienced by the cantilever beam was 9.7 MPa, significantly less than the 195 MPa yield limit of hard aluminum alloy. The safety factor of the optimized cantilever beam exceeded 20, indicating a large margin of safety. Therefore, a lightweight cantilever beam made from hard aluminum alloy fulfills its design requirements for yield strength and deflection.

5. Potential Extension and Improvement of the Algorithm

In this section, we discuss potential extensions and improvements to our proposed joint optimization algorithm.
First, we suggest examining the algorithm’s applicability to more complex optimization problems, such as those involving multi-objective optimization or nonlinear constraints. For example, our algorithm could be applied to the optimization of complex structures or systems with multiple design objectives, such as weight reduction, cost minimization, and performance improvement, while satisfying multiple constraints. This will extend the applicability of the algorithm to a broader range of engineering design problems.
Furthermore, we propose exploring the integration of multiple surrogate models or optimization methods to further improve the algorithm’s performance and applicability to various engineering design problems. For instance, the combination of global and local surrogate models could be employed to achieve a better balance between accuracy and efficiency, while hybrid optimization algorithms could be used to leverage the strengths of different optimization techniques. These extensions will enhance the algorithm’s robustness and efficiency and enable it to tackle more challenging optimization problems.
We believe that these future research directions will help advance the field of surrogate-assisted optimization and further enhance the practical applicability of our proposed algorithm. By considering these extensions and improvements, the algorithm can be further developed to meet the ever-increasing demands of modern engineering design.

6. Conclusions

Our joint optimization algorithm combines global and local optimization techniques with the Gaussian radial basis function surrogate model’s approximation capabilities, achieving superior optimization results with reduced computation time. By optimizing the cross-sectional size parameters of a U-shaped cantilever beam for different materials in an automatic feeding device of a single-machine multi-station stamping system, we demonstrated the algorithm’s practical applicability, achieving a weight reduction while satisfying the yield strength and deflection constraints. The optimized cantilever beam weighed only 0.1943 kg, demonstrating the effectiveness of our joint optimization algorithm based on the surrogate model in real-world applications. Our study provides a useful reference for optimizing structural designs under complex operational conditions.

Author Contributions

Data curation, J.S.; formal analysis, J.S.; funding acquisition, D.G.; investigation, L.W.; methodology, J.S.; project administration, D.G.; resources, D.G. and L.W.; software, J.S.; supervision, D.G.; validation, L.W.; visualization, L.W.; writing—original draft, J.S. and L.W.; writing—review and editing, J.S. and D.G. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Jian Sun was supported by the National Natural Science Foundation of China (Grant No.11601151) and the Hebei Province Top-Notch Young Talents Support Program Project.

Data Availability Statement

The data are unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Panwar, V.; Sharma, D.K.; Kumar, K.P.; Jain, A.; Thakar, C. Experimental investigations and optimization of surface roughness in turning of EN 36 alloy steel using response surface methodology and genetic algorithm. Mater. Today Proc. 2021, 46, 6474–6481.
2. Qiu, S.; Dooley, L.M.; Xie, L. How servant leadership and self-efficacy interact to affect service quality in the hospitality industry: A polynomial regression with response surface analysis. Tour. Manag. 2020, 78, 104051.
3. Lu, C.; Fei, C.W.; Liu, H.T.; Li, H.; An, L.Q. Moving extremum surrogate modeling strategy for dynamic reliability estimation of turbine blisk with multi-physics fields. Aerosp. Sci. Technol. 2020, 106, 106112.
4. Li, T.; Yang, X. An efficient uniform design for Kriging-based response surface method and its application. Comput. Geotech. 2019, 109, 12–22.
5. Gan, R.; Li, B.; Tang, T.; Liu, S.; Chu, J.; Yang, G. Noise optimization of multi-stage orifice plates based on RBF neural network response surface and adaptive NSGA-II. Ann. Nucl. Energy 2022, 178, 109372.
6. Cavoretto, R.; De Rossi, A.; Mukhametzhanov, M.S.; Sergeyev, Y.D. On the search of the shape parameter in radial basis functions using univariate global optimization methods. J. Glob. Optim. 2021, 79, 305–327.
7. Gu, J.; Jung, J.H. Adaptive Gaussian radial basis function methods for initial value problems: Construction and comparison with adaptive multiquadric radial basis function methods. J. Comput. Appl. Math. 2021, 381, 113036.
8. Gao, K.; Mei, G.; Cuomo, S.; Piccialli, F.; Xu, N. ARBF: Adaptive radial basis function interpolation algorithm for irregularly scattered point sets. Soft Comput. 2020, 24, 17693–17704.
9. Bhosekar, A.; Ierapetritou, M. Advances in surrogate based modeling, feasibility analysis, and optimization: A review. Comput. Chem. Eng. 2018, 108, 250–267.
10. Heaton, M.J.; Datta, A.; Finley, A.; Furrer, R.; Guhaniyogi, R.; Gerber, F.; Gramacy, R.B.; Hammerling, D.; Katzfuss, M.; Lindgren, F.; et al. Methods for analyzing large spatial data: A review and comparison. arXiv 2017, arXiv:1710.05013.
11. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
12. Chen, Y.; Miao, D. Granular regression with a gradient descent method. Inf. Sci. 2020, 537, 246–260.
13. Xu, S.; Wang, Y.; Wang, Z. Parameter estimation of proton exchange membrane fuel cells using eagle strategy based on JAYA algorithm and Nelder-Mead simplex method. Energy 2019, 173, 457–467.
14. Audet, C.; Le Digabel, S.; Tribes, C. The mesh adaptive direct search algorithm for granular and discrete variables. SIAM J. Optim. 2019, 29, 1164–1189.
15. Kavitha, R.; Jothi, D.K.; Saravanan, K.; Swain, M.P.; Gonzáles, J.L.A.; Bhardwaj, R.J.; Adomako, E. Ant colony optimization-enabled CNN deep learning technique for accurate detection of cervical cancer. BioMed Res. Int. 2023, 2023, 1742891.
16. Wang, Z.; Sobey, A. A comparative review between genetic algorithm use in composite optimisation and the state-of-the-art in evolutionary computation. Compos. Struct. 2020, 233, 111739.
17. Karimi, N.; Kazem, S.; Ahmadian, D.; Adibi, H.; Ballestra, L. On a generalized Gaussian radial basis function: Analysis and applications. Eng. Anal. Bound. Elem. 2020, 112, 46–57.
18. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061.
19. Trahan, C.; Wyatt, R. Radial basis function interpolation in the quantum trajectory method: Optimization of the multi-quadric shape parameter. J. Comput. Phys. 2003, 185, 27–49.
20. Sarra, S.; Sturgill, D. A random variable shape parameter strategy for radial basis function approximation methods. Eng. Anal. Bound. Elem. 2009, 33, 1239–1245.
21. Mongillo, M. Choosing basis functions and shape parameters for radial basis function methods. SIAM Undergrad. Res. Online 2011, 4, 2–6.
22. Xiang, S.; Wang, K.; Ai, Y.; Sha, Y.; Shi, H. Trigonometric variable shape parameter and exponent strategy for generalized multiquadric radial basis function approximation. Appl. Math. Model. 2012, 36, 1931–1938.
23. Farzaneh, A.; Mohsen, E. Optimal variable shape parameters using genetic algorithm for radial basis function approximation. Ain Shams Eng. J. 2015, 6, 639–647.
24. Amirfakhrian, M.; Arghand, M.; Kansa, E. A new approximate method for an inverse time-dependent heat source problem using fundamental solutions and RBFs. Eng. Anal. Bound. Elem. 2016, 64, 278–289.
25. Shabnam, S.; Majid, A.; Tofigh, A. An algorithm for choosing a good shape parameter for radial basis functions method with a case study in image processing. Results Appl. Math. 2022, 16, 100337.
26. Sun, J.; Wang, L.; Gong, D. Model for choosing the shape parameter in the multiquadratic radial basis function interpolation of an arbitrary sine wave and its application. Mathematics 2023, 11, 1856.
27. Rippa, S. An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Adv. Comput. Math. 1999, 11, 193–210.
28. Mullur, A.A.; Messac, A. Extended radial basis functions: More flexible and effective metamodeling. AIAA J. 2005, 43, 1306–1315.
29. Krishnamurthy, T. Comparison of response surface construction methods for derivative estimation using moving least squares, kriging and radial basis functions. In Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 18–21 April 2005; p. 1821.
30. Owen, A.B. Latin supercube sampling for very high-dimensional simulations. ACM Trans. Model. Comput. Simul. 1998, 8, 71–102.
31. Cai, W.; Yang, L.; Yu, Y. Solution of Ackley function based on particle swarm optimization algorithm. In Proceedings of the 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), Dalian, China, 25–27 August 2020; pp. 563–566.
32. Ma, J.; Li, H. Research on Rosenbrock function optimization problem based on improved differential evolution algorithm. J. Comput. Commun. 2019, 7, 107–120.
Figure 1. G-RBF with different shape parameters.
Figure 2. c and RMSE.
Figure 3. Sampling point and condition number.
Figure 4. Algorithm flow.
Figure 5. RMSE of G-RBF surrogate model.
Figure 6. Distribution of sampling points.
Figure 7. Genetic algorithm.
Figure 8. Joint optimization algorithm.
Figure 9. Ant colony algorithm.
Figure 10. Joint optimization algorithm.
Figure 11. U-shaped section of a cantilever beam.
Figure 12. Optimization results of 45 steel.
Figure 13. Optimization results of rolled bronze.
Figure 14. Optimization results of hard alloy.
Table 1. Test functions.

$f_1 = \dfrac{x^2}{8} + \dfrac{x}{5}$

$f_2 = 0.75 \exp\!\left( -\dfrac{(9x-2)^2 + (9y-2)^2}{4} \right) + 0.75 \exp\!\left( -\dfrac{(9x+1)^2}{49} - \dfrac{9y+1}{10} \right) + 0.5 \exp\!\left( -\dfrac{(9x-7)^2 + (9y-3)^2}{4} \right) - 0.2 \exp\!\left( -(9x-4)^2 - (9y-7)^2 \right)$

$f_3 = \dfrac{\tanh(9y - 9x) + 1}{9}$

$f_4 = \dfrac{1.25 + \cos(5.4 y)}{6 \left( 1 + (3x - 1)^2 \right)}$

$f_5 = \sum_{i=1}^{5} \left[ \dfrac{3}{10} + \sin\!\left( \dfrac{16}{15} x_i - 1 \right) + \sin^2\!\left( \dfrac{16}{15} x_i - 1 \right) \right]$

$f_6 = \sum_{i=1}^{10} x_i \left( 1 + \ln \dfrac{x_i}{x_i + x_{10}} \right)$
Table 2. Parameter setting.

| f | Vars. | Design Domain | Sample Points |
|---|---|---|---|
| $f_1$ | 1 | $x \in [0, 7]$ | 21 |
| $f_2$ | 2 | $x \in [0, 1]$, $y \in [0, 1]$ | 1000 |
| $f_3$ | 2 | $x \in [0, 1]$, $y \in [0, 1]$ | 1000 |
| $f_4$ | 2 | $x \in [0, 1]$, $y \in [0, 1]$ | 1000 |
| $f_5$ | 5 | $x_i \in [-1, 1]$ | 80 |
| $f_6$ | 10 | $x_i \in [0.5, 10]$ | 200 |
Table 3. Comparison with the effect of the method in [27].

| f | $c_{opt}$ (PSO) | $c_{opt}$ ([27]) | RMSE (PSO) | RMSE ([27]) | Operation Time (s) (PSO) | Operation Time (s) ([27]) |
|---|---|---|---|---|---|---|
| $f_1$ | 0.333 | 0.392 | $1.23 \times 10^{-2}$ | $1.41 \times 10^{-2}$ | 0.015 | 1.601 |
| $f_2$ | 0.172 | 0.188 | $2.01 \times 10^{-3}$ | $2.55 \times 10^{-3}$ | 0.226 | 2.279 |
| $f_3$ | 1.327 | 1.0878 | $4.23 \times 10^{-3}$ | $5.33 \times 10^{-3}$ | 1.439 | 3.763 |
| $f_4$ | 0.356 | 0.8465 | $1.96 \times 10^{-4}$ | $7.45 \times 10^{-4}$ | 3.014 | 7.419 |
| $f_5$ | 0.012 | 0.625 | $7.82 \times 10^{-3}$ | $6.84 \times 10^{-2}$ | 10.812 | 39.726 |
| $f_6$ | 0.714 | 1.826 | $1.71 \times 10^{-2}$ | $5.92 \times 10^{-2}$ | 1.171 | 3.365 |
Table 4. Comparison of optimization methods for the objective function (16).

| Algorithm | $E_y$ | $E_x$ | T |
|---|---|---|---|
| Genetic | $1 \times 10^{-6}$ | $1 \times 10^{-6}$ | 1038 |
| Gradient descent | $3.3 \times 10^{-2}$ | $5.8 \times 10^{-1}$ | 560 |
| Joint optimization | $1 \times 10^{-6}$ | $1 \times 10^{-6}$ | 170 |
Table 5. Comparison of optimization algorithms for the objective function (17).

| Algorithm | $E_y$ | $E_x$ | T |
|---|---|---|---|
| Ant colony | $1 \times 10^{-6}$ | $1 \times 10^{-6}$ | 1945 |
| Simplex | $8.9 \times 10^{-1}$ | $3.2 \times 10^{-1}$ | 661 |
| Joint optimization | $1 \times 10^{-6}$ | $1 \times 10^{-6}$ | 382 |
Table 6. Material-related parameters.

| Material | Density (kg/m³) | Modulus of Elasticity (Pa) |
|---|---|---|
| 45 steel | 7800 | $2.1 \times 10^{11}$ |
| Rolled bronze | 8700 | $1.03 \times 10^{11}$ |
| Hard aluminum alloy | 2700 | $7.0 \times 10^{10}$ |
Table 7. Results of joint optimization.

| Design Variable | 45 Steel | Rolled Bronze | Hard Aluminum Alloy |
|---|---|---|---|
| $x_1$ (m) | 0.0342 | 0.0342 | 0.0342 |
| $x_2$ (m) | 0.0101 | 0.0101 | 0.0101 |
| $x_3$ (m) | 0.0018 | 0.0039 | 0.0032 |
| $f(x)$ (kg) | 0.3251 | 0.7605 | 0.1943 |
Table 8. Comparison of the number of runs of the G-RBF surrogate model (T).

| Material | Genetic | Direct Search | Joint Optimization |
|---|---|---|---|
| 45 steel | 689 | 126 | 63 |
| Rolled bronze | 391 | 78 | 26 |
| Hard alloy | 372 | 85 | 24 |
