Article

Hybrid Particle Swarm Optimization for High-Dimensional Latin Hypercube Design Problem

Zhixin Xu, Dongqin Xia, Nuo Yong, Jinkai Wang, Jian Lin, Feipeng Wang, Song Xu and Daochuan Ge

1 State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Engineering Co., Ltd., Shenzhen 518172, China
2 Institute of Nuclear Energy Safety Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
3 Science Island Branch, Graduate School of USTC, Hefei 230026, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 7066; https://doi.org/10.3390/app13127066
Submission received: 27 April 2023 / Revised: 2 June 2023 / Accepted: 7 June 2023 / Published: 12 June 2023

Abstract
Latin Hypercube Design (LHD) is widely used in computer simulation to solve large-scale, complex, nonlinear problems. The high-dimensional LHD (HLHD) problem is one of the crucial issues and has long been a concern. This paper proposes an Improved Hybrid Particle Swarm Optimization (IHPSO) algorithm to find a near-optimal HLHD by increasing the particle evolution speed and strengthening the local search. In the proposed algorithm, the diversity of the population is first ensured through comprehensive learning. Second, the Minimum Point Distance (MPD) method is adopted to solve the oscillation problem of the PSO algorithm. Third, the Ranked Ordered Value (ROV) rule is used to realize the discretization of the PSO algorithm. Finally, local and global searches are executed to find the near-optimal HLHD. Comparisons show the superiority of the proposed method over existing algorithms in obtaining a near-optimal HLHD.

1. Introduction

Computer models in the real world usually have the characteristics of complex modeling, large-scale parameters, and nonlinear response surfaces [1,2,3], which makes it time-consuming to obtain high-precision experimental results. Therefore, reducing the number of experiments is significant for improving the efficiency of the Design of Experiments (DOE) [4,5]. A good DOE should have two properties: the design's unfolding (non-collapsing) property [6] and good space-filling [7].
Latin Hypercube Design (LHD) is a widely used unfolding method [8,9]. However, this property is only guaranteed in one-dimensional projections, and the method cannot guarantee an effectively filled input parameter space. Various criteria for measuring the quality of space filling are provided in the literature, for example, the maximin distance criterion [6,10], the Audze–Eglais (AE) criterion [11,12,13], the ϕp criterion [6,14,15,16,17], the centered L2 discrepancy [18], and the entropy criterion [15]. To efficiently find the optimal LHD, several authors have proposed algorithms that improve the space-filling quality of LHDs. Liefvendahl and Stocki [12] proposed a simple local search algorithm called Columnwise Pairwise (CP), which obtains a better LHD by swapping two elements in the same column of the LHD matrix. The algorithm performs well on small-scale problems but cannot handle large-scale optimization problems. Morris and Mitchell [14] proposed a simulated annealing search algorithm and pointed out that some optimal LHDs have symmetric structures. A stochastic local search method proposed by Jin et al. [16] is the Enhanced Stochastic Evolution (ESE) algorithm, whose contribution is to obtain better solutions through the cooperation of inner and outer loops. This method also has limitations, and it is usually challenging for it to obtain the optimal solution. Doerr and De Rainville [19] and De Rainville et al. [20] used genetic algorithms (GAs) to generate highly uniform low-discrepancy sequences, but these approaches are limited in local search ability and easily trapped in locally optimal solutions. Bates et al. [21] proposed a population algorithm based on a GA, using a permutation genetic algorithm (PermGA) combined with local and global search to construct the optimal LHD. The algorithm achieves good results on medium-scale problems; however, due to the limitations of its local search, it is not easy for it to find the optimal solution for large-scale problems. In addition, other optimized LHD algorithms include Iterated Local Search (ILS) [6], Successive Local Enumeration (SLE) [22], Sliced Latin Hypercube Design (SLHD) [23], sequencing optimization based on simulated annealing (SOBSA) [24], and the Inflation, Expansion, and Stacking (IES) algorithm [25].
In addition to the above optimization algorithms, iterative heuristic algorithms have been designed to optimize different types of problems in various fields because they are easy to implement and can optimize complex problems effectively. Particle Swarm Optimization (PSO) [26,27,28,29] is a famous algorithm proposed by Kennedy and Eberhart in 1995. The idea of the algorithm comes from a simplified model of the food-seeking behavior of bird flocks [26]: the positions are updated through information exchange between individual birds, combining the current best position with the individual's history to find the optimal value in the range. Due to its simplicity and high efficiency, PSO is widely used in various real-world fields [30,31,32,33,34]. PSO can solve high-dimensional optimization problems with multiple optimal solutions and can be adapted and applied to the HLHD problem. Chen et al. [35] proposed LaPSO, which optimizes LHD by reducing the Hamming distance to obtain the optimal LHD. Aziz and Tayarani-N [36] proposed the Adaptive Memetic Particle Swarm Optimization (AMPSO) algorithm, which excels at solving large-scale LHD problems. Nevertheless, the efficiency of finding the optimal LHD with PSO can still be improved. The above works are novel and serve as valuable references. However, some problems still need to be addressed when applying PSO to LHD optimization:
(1) Since the PSO algorithm's evolution method is not suited to the discrete nature of LHD, a subsequent discretization operation is required on the solutions obtained by PSO.
(2) PSO is highly dependent on its dynamic evolution equation. The values of the control parameters in the equation determine the convergence speed and the quality of the evolution; with inappropriate values, the space exploration may be insufficient and the optimal solution cannot be obtained [31].
(3) Each particle in PSO learns from its personal best particle and the global best particle simultaneously. An inappropriate selection of parameters in the dynamic equation makes the population change less during evolution, which may lead to the phenomenon of "oscillation" [37,38] and premature convergence [39]. This reduces the search efficiency of the algorithm, prevents it from fully exploring the search space, and traps it in a local optimal solution [31,40].
(4) Although PSO can quickly find the optimal solution for low-dimensional problems, it is more prone to premature convergence when optimizing high-dimensional problems [41]. In practical applications, as the dimension increases, the PSO algorithm is also more prone to boundary violations during the search, which degrades the evolution of the particles.
In this work, we propose a high-dimensional LHD design framework based on an improved hybrid PSO (IHPSO). First, the diversity of the population is ensured by comprehensive learning. Second, the oscillation problem of PSO is solved by the minimum point distance (MPD) method, accelerating the calculation process. Third, the ranked ordered value (ROV) rule is adopted to realize the discretization of PSO. Finally, local and global searches are executed to find the near-optimal HLHD.
The remainder of this paper is organized as follows: Section 2 introduces the technical background, including the LHD definition, the PSO algorithm, and the space-filling criteria. In Section 3, the composition of the proposed PSO algorithm for the fast construction of optimal LHD is presented. Numerical results and discussion are shown in Section 4. Finally, Section 5 concludes this paper and proposes an outlook for improving the performance of the proposed algorithm.

2. Technical Background

2.1. Latin Hypercube Design

For an LHD matrix with n design points and m dimensions, each matrix column represents a dimension. Each row is a design point. The method of constructing the LHD matrix is shown in Equation (1) [42]:
$x_{i,j} = \frac{H(i,j) - \varepsilon_{i,j}}{n}$ (1)
where $H(i,j)$ is the column permutation function of the LHD, which takes each value in [1, n] exactly once within a column; $\varepsilon_{i,j}$ is an independent random variable uniformly distributed on [0, 1], whose value is set to 0.5 for the convenience of calculation (giving a midpoint LHD).
The non-space-filling and space-filling LHDs are shown in Figure 1. An LHD with n samples and m factors is built by dividing each factor into n levels and arranging the points so that each level contains exactly one point. Therefore, the LHD has a one-dimensional projected non-collapsing property [43], meaning that projecting the n points onto any factor yields exactly n different values. An LHD of order n with m factors can be expressed as an n × m matrix, in which each row corresponds to a design point and each column to a factor. The total number of LHDs with n design points and m factors is $(n!)^{m-1}$, which increases rapidly with n or m. Although an LHD is guaranteed not to collapse, a randomly generated LHD may not fill the space well.
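For illustration, the following Python sketch generates a random (midpoint) LHD according to Equation (1); the function name and the `midpoint` flag are illustrative choices, not part of the paper.

```python
import numpy as np

def random_lhd(n, m, midpoint=True, seed=None):
    """Random n-point, m-factor Latin hypercube design in [0, 1]^m.

    Implements Equation (1): x_ij = (H(i, j) - eps_ij) / n, where each column
    of H is an independent random permutation of the levels 1..n and eps_ij
    is uniform on [0, 1] (fixed at 0.5 for the midpoint variant used here).
    """
    rng = np.random.default_rng(seed)
    H = np.column_stack([rng.permutation(n) + 1 for _ in range(m)])
    eps = 0.5 if midpoint else rng.uniform(size=(n, m))
    return (H - eps) / n

X = random_lhd(10, 2, seed=0)  # one of the (n!)^(m-1) possible 10 x 2 designs
```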

2.2. Particle Swarm Optimization and Comprehensive PSO

PSO is a population-based heuristic method for solving high-dimensional optimization problems. Since the proposal of PSO [26], many variants have been developed to address optimization problems in different scenarios [30,33,34,38,44,45,46,47,48]. After PSO initializes the position of each particle, it determines the velocity of each particle based on its self-cognition and social learning factors. The position is then updated, and the objective value at the new position is evaluated. Throughout the process, all particles seek the optimal value through social interaction and cooperation [35].
The velocity and position of each particle are updated as follows:
$v_i^{t+1} = \omega v_i^t + c_1 r_1^t \left( pBest_i^t - x_i^t \right) + c_2 r_2^t \left( gBest^t - x_i^t \right)$ (2)
$x_i^{t+1} = x_i^t + v_i^{t+1}$ (3)
where $\omega$ is the inertia weight, the constants $c_1$ and $c_2$ are acceleration coefficients, and $r_1^t$ and $r_2^t$ are two uniformly distributed random numbers generated independently in the range [0, 1]. In Equation (2), $pBest_i^t$ is the best position found by the i-th particle so far, and $gBest^t$ is the best overall position found so far. $c_1$ represents the self-cognition factor and $c_2$ the social learning factor; together they balance learning from the individual and global best positions in the search for the optimal particle.
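As a minimal sketch of Equations (2) and (3), one PSO step could look as follows in Python; the default coefficients mirror the initial values reported in Section 4.3, and the helper name is ours.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.25, c1=0.5, c2=2.7, rng=None):
    """One velocity and position update per Equations (2) and (3).

    x, v, pbest have shape (num_particles, dim); gbest has shape (dim,).
    """
    rng = np.random.default_rng(rng)
    r1 = rng.uniform(size=x.shape)  # independent U[0, 1] draws per component
    r2 = rng.uniform(size=x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```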
The search environment of LHD is complex, and a large number of local optimal solutions are scattered within it. Searching for LHDs with PSO can easily cause particles to be attracted to the gBest zone and fall into a local optimal solution. The current particles may find the regions corresponding to the optimal solution in some dimensions but still have poor fitness due to poor performance in other dimensions. This paper proposes an improved hybrid method in which different dimensions learn from different particles to improve the search ability of PSO.
For each dimension of each particle, a random number is generated. If it is greater than the set learning probability $P_i$, the particle's own pBest in the corresponding dimension is learned. Otherwise, the pBest of another particle is learned: two particles are randomly selected from the population, and the current dimension of the one with better fitness is chosen for learning. The velocity update equation of the comprehensive learning strategy is shown in Equation (4):
$v_i^d = \omega v_i^d + c \cdot rand_i^d \left( pBest_{f_i(d)}^d - x_i^d \right)$ (4)
where $f_i(d)$ specifies which particle's pBest the i-th particle learns from in dimension d,
and the flow of the comprehensive learning strategy is shown in Figure 2.
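A sketch of the exemplar selection and the Equation (4) update for one particle is given below; the tournament between two random particles follows the description above, while the function name and array layout are assumptions.

```python
import numpy as np

def cl_velocity(i, x, v, pbest, pbest_fitness, Pi, w, c, rng):
    """Comprehensive-learning velocity update for particle i (Equation (4)).

    With probability 1 - Pi a dimension learns from the particle's own pBest;
    otherwise two particles are drawn at random and the dimension learns from
    the pBest with the better (smaller) phi_p value.
    """
    n_particles, dim = x.shape
    exemplar = np.empty(dim)
    for d in range(dim):
        if rng.uniform() > Pi:
            exemplar[d] = pbest[i, d]        # learn from own pBest
        else:
            a, b = rng.choice(n_particles, size=2, replace=False)
            winner = a if pbest_fitness[a] < pbest_fitness[b] else b
            exemplar[d] = pbest[winner, d]   # learn from the fitter pBest
    return w * v[i] + c * rng.uniform(size=dim) * (exemplar - x[i])
```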

2.3. ϕp Criterion

The most commonly used index for measuring the space-filling quality of an experimental design is ϕp, the most widespread criterion in the optimal-LHD literature [6,14,15,16,17,35]. According to this criterion, the better the space-filling of a design, the smaller its ϕp value. The ϕp function is:
$\phi_p = \left[ \sum_{i=1}^{n} \sum_{j=i+1}^{n} d_{ij}^{-p} \right]^{1/p}$ (5)
where p is a positive integer and n is the number of points; $d_{ij}$ denotes the distance between points $x_i$ and $x_j$ and can be expressed as:
$d_{ij} = \left[ \sum_{k=1}^{m} \left| x_{ik} - x_{jk} \right|^t \right]^{1/t}$ (6)
where $x_{ik}$ and $x_{jk}$ represent the k-th components of points i and j, respectively. In general, p = 50 and t = 1 are set [16]. In this paper, the ϕp criterion is used to evaluate the optimization performance of the LHD.
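The criterion is straightforward to implement; a minimal sketch with the settings p = 50 and t = 1 (rectangular distance) follows. For very small scaled distances, d^(-p) can overflow, so computing ϕp on the integer level matrix is a common precaution.

```python
import numpy as np
from scipy.spatial.distance import pdist

def phi_p(X, p=50, t=1):
    """phi_p of Equations (5) and (6); smaller values mean better filling.

    X is an (n, m) design matrix; t = 1 gives the rectangular (Manhattan)
    distance and p = 50 follows the setting of [16] used in this paper.
    """
    d = pdist(X, metric="minkowski", p=t)  # all pairwise distances d_ij
    return np.sum(d ** (-float(p))) ** (1.0 / p)
```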

3. Proposed Method

PSO can rapidly find the optimal solution when dealing with low-dimensional optimization problems. However, it is prone to premature convergence when faced with higher-dimensional problems and falls into the predicament of local optimal solutions [31,40].
In this section, we propose an optimization framework based on the following procedures:
(1) First, the ROV rule [49] is used to transform the continuous space in the PSO evolution into a discrete space.
(2) The MPD is adopted to solve the oscillation problem.
(3) Four local search strategies are used to make the global best particle gBest go to a better position.
(4) An improved hybrid method is adopted based on the PSO to preserve the diversity of the population and solve the problem of premature convergence.
(5) The boundary reflection strategy is adopted to address the problem of exceeding the boundary in the evolution of high-dimensional particle swarms.
The pseudocode of the IHPSO algorithm is shown in Algorithm 1.
Algorithm 1. Pseudocode of the proposed IHPSO algorithm
1:  Initialize particle positions and velocities
2:  Evaluate the function value of each particle
3:  Initialize the pBest
4:  Initialize the gBest data
5:  while t < MI (Maximum Iterations)
6:    if (flag < m)
7:      Use Equation (2) for the velocity update
8:    else
9:      Use Equation (4) for the velocity update
10:   Update the position of each particle
11:   Discretize the particle position data using the ROV rule
12:   Update the personal best particle pBest
13:   Optimize gBest using the local search algorithm; if the result is not better than gBest, randomly replace a particle in the population with it
14:   Update the global optimal particle gBest
15:   t = t + 1
16:  end
17: return gBest
18: end

3.1. Ranked Ordered Value Rule

The continuous solutions must be converted into discrete solutions to make the PSO algorithm suitable for optimizing discrete problems. To this end, the ROV rule with random key representation [49] is used to convert the continuous particle positions in the PSO into an arrangement of design points. ROV sorts the updated particle positions column by column: the minimum value in a column of the updated position is assigned the minimum discrete value (level) of the LHD, and in general the i-th largest value of a column is converted to the i-th largest discrete value in the LHD. An example of the conversion is shown in Figure 3: after the position of an LHD particle is updated, the corresponding form of the particle after the transformation is Y. In Y, each row represents a design point, and each column represents the position of the design point in that dimension. An LHD particle is obtained by executing the ROV rule on each column.
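Since the rank transformation is just a double argsort per column, the ROV rule reduces to a few lines of Python; a sketch:

```python
import numpy as np

def rov_discretize(X):
    """Ranked-ordered-value rule: map each continuous column to levels 1..n.

    Within every column the smallest position value receives level 1, the
    second smallest level 2, and so on, yielding a valid LHD column
    (cf. Figure 3).
    """
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # 0-based column ranks
    return ranks + 1

Y = rov_discretize(np.array([[0.7, 2.3], [3.1, 0.4], [1.2, 1.9]]))
# Y == [[1, 3], [3, 1], [2, 2]]
```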

3.2. Minimum Point Distance Method

According to Equation (2), each particle in PSO learns from its personal best particle pBest and the global best particle gBest at the same time. When pBest and gBest lie in opposite directions from the current position $x_i$ and their positions differ considerably, a particle may linger between pBest and gBest, which causes "oscillation" [37,38] and reduces the search efficiency of the algorithm.
The MPD method is proposed to avoid this oscillation problem for LHD particles. By rearranging the rows of the personal best particle pBest, the distance between the rows of the LHD particle and those of the global best particle gBest is minimized. The Euclidean distance is used to measure the distance between points:
$d_{ij} = \left[ \sum_{k=1}^{m} \left( x_{ik} - x_{jk} \right)^2 \right]^{1/2}$ (7)
By calculating the row distances between each row of the personal best particle pBest and each row of the global best particle gBest, the rows of the current particle are rearranged, and then the particle velocity and position are updated. The specific operation is shown in Figure 4. In this way, pBest and gBest lie in the same direction relative to the current particle, reducing the "oscillation" problem caused by the learning strategy.
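A sketch of the row rearrangement is shown below. The paper pairs rows by minimum distance as in Figure 4; here a one-to-one assignment minimizing the total matched distance (the Hungarian algorithm) is assumed as one concrete realization.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def align_pbest_rows(pbest, gbest):
    """Reorder the rows of pBest so each one faces its nearest gBest row.

    Distances follow Equation (7) (Euclidean). After the reordering, row k of
    the returned matrix is the pBest row matched to row k of gBest.
    """
    D = cdist(pbest, gbest)                # D[i, j] = ||pbest_i - gbest_j||
    rows, cols = linear_sum_assignment(D)  # minimize total matched distance
    aligned = np.empty_like(pbest)
    aligned[cols] = pbest[rows]
    return aligned
```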

3.3. Local Search Algorithm for gBest

The global best particle gBest differs from the other particles in that it has no exemplar to guide its evolution. This paper therefore uses four local search strategies to improve the local search capability:
Inversion: Randomly select a section of one column of the matrix and invert the data in that section.
Clipping: Randomly select a section of one column of the matrix, randomly select a position within it, and exchange the data at and above that position with the data below it.
Center symmetry: Randomly select a section of a column of the matrix and exchange it with the section symmetric to it about the center of the column.
Exchange: Randomly select two positions in the matrix and exchange them.
After the local search improves the global best particle, the original global best particle is replaced only when the new ϕp value is smaller than that of the current gBest. Otherwise, the new position randomly replaces a particle in the population.
A schematic diagram of the four local search strategies is shown in Figure 5.
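A sketch of the four moves is given below. All moves rearrange entries within a single column, so the result remains a valid LHD; the segment handling and the reading of "center symmetry" as a mirror swap are our assumptions, since the paper leaves these details to Figure 5.

```python
import numpy as np

def local_search_move(X, rng):
    """Apply one randomly chosen local search move (cf. Figure 5) to a copy of X."""
    X = X.copy()
    n, m = X.shape
    col = rng.integers(m)
    lo, hi = sorted(rng.choice(n, size=2, replace=False))
    move = rng.integers(4)
    if move == 0:    # inversion: reverse the chosen section
        X[lo:hi + 1, col] = X[lo:hi + 1, col][::-1]
    elif move == 1:  # clipping: swap the parts above and below a random cut
        cut = rng.integers(lo, hi + 1)
        X[lo:hi + 1, col] = np.roll(X[lo:hi + 1, col], hi - cut)
    elif move == 2:  # center symmetry: swap the section with its mirror image
        mlo, mhi = n - 1 - hi, n - 1 - lo  # mirror about the column center
        if mlo > hi:                       # only when the sections do not overlap
            X[lo:hi + 1, col], X[mlo:mhi + 1, col] = (
                X[mlo:mhi + 1, col].copy(), X[lo:hi + 1, col].copy())
    else:            # exchange: swap two entries of the column
        i, j = rng.choice(n, size=2, replace=False)
        X[[i, j], col] = X[[j, i], col]
    return X
```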

3.4. Boundary Processing Scheme

In the PSO algorithm, the number of particles exceeding the boundary during movement increases significantly in high dimensions [50,51]. An appropriate boundary processing scheme improves the optimization capability of PSO on high-dimensional problems. This paper addresses the boundary problem with a boundary reflection scheme; a schematic diagram is shown in Figure 6.
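A minimal sketch of the reflection rule for a search range of [0, 1] follows; a single reflection is assumed to suffice, which holds when a velocity component never exceeds the width of the range.

```python
import numpy as np

def reflect(x, lower=0.0, upper=1.0):
    """Boundary reflection (cf. Figure 6): fold out-of-range components back.

    A component below the lower bound is mirrored across it, and likewise at
    the upper bound; the final clip guards against extreme overshoot.
    """
    x = np.where(x < lower, 2 * lower - x, x)
    x = np.where(x > upper, 2 * upper - x, x)
    return np.clip(x, lower, upper)
```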

4. Numerical Results and Discussion

In this section, a parameter study and numerical comparisons of the proposed algorithm are carried out. After scaling the LHD to $[0, 1]^k$, the ϕp value is calculated. The population size is set to 50 and the algorithm is iterated 1000 times. All results are averages over multiple independent runs.

4.1. Evolutionary Strategy

The evolutionary strategy with the MPD method is tested. For the 14 × 60 problem, the algorithm is run 30 times independently with and without the MPD method, and the averaged results are shown in Figure 7. The numerical results highlight the performance of the proposed evolutionary strategy: particles evolve more slowly when the MPD method is not used, indicating that the proposed method suppresses the oscillation phenomenon and speeds up the evolution of particles.

4.2. Comprehensive Learning Strategy

In this section, we examine the role of the comprehensive learning strategy. The algorithm works best when the refresh interval m is set to 7, the learning probability $P_i$ is $0.05 + 0.45 \times gen/max\_gen$, and the inertia factor is $0.2 + 0.2 \times gen/max\_gen$, where gen is the current iteration number and max_gen is the maximum number of iterations. For the 14 × 60 problem, each refresh interval setting is run 30 times independently, and the averaged results are shown in Figure 8. Since smaller values are better, a refresh interval of 7 suits the proposed algorithm best.
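For reference, these iteration-dependent settings can be written directly; a small sketch (the function name is ours):

```python
def cl_schedules(gen, max_gen):
    """Iteration-dependent settings from Section 4.2 (refresh interval m = 7)."""
    Pi = 0.05 + 0.45 * gen / max_gen  # learning probability
    w = 0.2 + 0.2 * gen / max_gen     # inertia factor
    return Pi, w
```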

4.3. Comparison Results

In this section, several state-of-the-art methods are compared with the proposed method, including Comprehensive Learning PSO (CLPSO) [52,53], Adaptive Memetic PSO (AMPSO) [36], traditional PSO [26,28], the Enhanced Stochastic Evolutionary algorithm (ESE) [16], the Genetic Algorithm (GA) [12], and LaPSO [35]. For each algorithm, the LHD is scaled to $[0, 1]^k$ and the ϕp value is calculated.
Table 1 and Table 2 show the ϕp values for 20 configurations with dimensions m = 2, 6, 10, 14, 18 and design sizes n = 20, 40, 60, 80. The best results are shown in bold. Meanwhile, five configurations (2 × 60, 6 × 60, 10 × 60, 14 × 60, and 18 × 60) are selected to draw the population evolution diagrams shown in Figure 9. Furthermore, a box plot comparing the proposed algorithm with the other algorithms on the 18 × 80 problem is shown in Figure 10.
As shown in Table 1 and Table 2, the proposed algorithm outperforms all the compared algorithms in most of the numerical tests. The numerical results in six dimensions (m = 6) are also close to the best results among the compared algorithms. After parameter adjustment, the proposed algorithm is superior to the compared algorithms in high dimensions and has greater stability. Experiments show that suitable initial values of $c_1$, $c_2$, and $\omega$ in Equation (2) are 0.5, 2.7, and 0.25, respectively. Whenever the global optimal particle gBest is updated, the parameters in Equation (2) are reset to their initial values. As the iterations increase, $c_1$ gradually increases and $c_2$ decreases, which enhances the search capability of the particle swarm. The per-iteration change is constrained by:
$\left| c_i(t+1) - c_i(t) \right| \leq \delta, \quad i = 1, 2$ (8)
so that the increment or decrement between two consecutive generations is not too large. Testing shows that setting the change $\delta$ as a random number in the interval [0.05, 0.1] gives appropriate results. The changes in $c_1$ and $c_2$ are bounded: the upper limit of $c_1$ is 1.8 and the lower limit of $c_2$ is 1.9.
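One possible realization of this adaptation is sketched below; whether the two coefficients share one δ draw per step is not specified in the text, so a shared draw is assumed here.

```python
import numpy as np

def adapt_coefficients(c1, c2, rng):
    """Drift the acceleration coefficients per Equation (8).

    c1 grows toward its upper limit 1.8 and c2 shrinks toward its lower limit
    1.9, each step bounded by a delta drawn uniformly from [0.05, 0.1]; both
    are reset to (0.5, 2.7) whenever gBest improves (Section 4.3).
    """
    delta = rng.uniform(0.05, 0.1)
    c1 = min(c1 + delta, 1.8)  # strengthen self-cognition over time
    c2 = max(c2 - delta, 1.9)  # weaken social learning over time
    return c1, c2
```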
The population evolution trends in Figure 9 show that the proposed algorithm has very high evolution efficiency: the ϕp value decreases faster than for the other algorithms, and the algorithm converges quickly without evolutionary stagnation while producing excellent numerical results. In the low-dimensional cases, the proposed algorithm evolves rapidly, and after approximately 200 generations its performance is better than that of almost all compared algorithms. In the high-dimensional cases, after approximately 100 generations its performance is better than that of all compared algorithms. The numerical results of the proposed algorithm are better in all dimensions.
The box plot in Figure 10 shows that, in the high-dimensional 18 × 80 case, the proposed algorithm performs well compared with the other algorithms. On the one hand, the values of the IHPSO algorithm are smaller, which indicates that the proposed algorithm fills the space more completely. On the other hand, the results of the IHPSO algorithm are numerically concentrated, indicating that its performance is relatively stable. Furthermore, the maximum value of the IHPSO algorithm is still smaller than the minimum result among the other algorithms, which indicates that the proposed IHPSO performs well in optimizing the HLHD and obtains better results than the other algorithms.

5. Conclusions

This paper proposes an improved hybrid PSO (IHPSO) to optimize the high-dimensional LHD (HLHD), combining local and global search strategies. First, the minimum point distance (MPD) method is proposed to solve the oscillation problem in PSO. Second, the ranked ordered value (ROV) rule is adopted to achieve the discretization of PSO. Third, a comprehensive learning strategy is used to ensure the diversity of the population during evolution. The superiority of the proposed algorithm is verified by numerical comparison with six other state-of-the-art algorithms. The advantages of the proposed method are as follows:
(1) The proposed IHPSO obtains a better HLHD than the compared algorithms for the same number of iterations. It performs particularly well in high dimensions, where its numerical results are significantly better than those of the compared algorithms.
(2) IHPSO effectively improves the efficiency of obtaining the near-optimal HLHD. It converges faster than the compared algorithms and performs better in optimizing medium and large LHDs, showing better local and global search capabilities.
(3) IHPSO effectively promotes convergence in the initial optimization stage and maintains its convergence performance until the end of the iterations. Even with a small number of iterations, IHPSO still exhibits excellent performance; under the same number of iterations, it obtains an HLHD with better space-filling.
(4) IHPSO balances local and global search capabilities by adaptively adjusting the parameters of the PSO equation, ensures the diversity of the population, and can quickly obtain the near-optimal HLHD.
Although the proposed IHPSO algorithm is efficient and stable and can search for the optimal solution, it needs further optimization. IHPSO has many parameters that need to be discussed; these parameters significantly impact the algorithm's performance and will be researched further in future work.

Author Contributions

Conceptualization, Z.X. and F.W.; methodology, software, validation, Z.X. and D.X.; formal analysis, J.W.; resources, J.L.; writing-original draft preparation, Z.X.; writing-review and editing, N.Y.; visualization, S.X.; supervision, D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China (2018YFB1900301) and the National Natural Science Foundation of China (71901203 and 72204246). The authors sincerely thank the editor and anonymous reviewers for their insightful comments, which helped improve the quality of the article.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All relevant data are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kianifar, M.R.; Campean, F.; Wood, A. Application of permutation genetic algorithm for sequential model building-model validation design of experiments. Soft Comput. 2016, 20, 3023–3044.
2. Han, Z.H.; Zhang, Y.; Song, C.X.; Zhang, K.S. Weighted Gradient-Enhanced Kriging for High-Dimensional Surrogate Modeling and Design Optimization. AIAA J. 2017, 55, 4330–4346.
3. Yao, Y.T.; Wang, J.Y.; Xie, M. Adaptive residual CNN-based fault detection and diagnosis system of small modular reactors. Appl. Soft Comput. 2022, 114, 108064.
4. Wu, Z.P.; Wang, D.H.; Okolo, P.N.; Zhao, K.; Zhang, W.H. Efficient space-filling and near-orthogonality sequential Latin hypercube for computer experiments. Comput. Methods Appl. Mech. Eng. 2017, 324, 348–365.
5. Garud, S.S.; Karimi, I.A.; Kraft, M. Design of computer experiments: A review. Comput. Chem. Eng. 2017, 106, 71–95.
6. Grosso, A.; Jamali, A.; Locatelli, M. Finding maximin Latin hypercube designs by Iterated Local Search heuristics. Eur. J. Oper. Res. 2009, 197, 541–547.
7. Crombecq, K.; Laermans, E.; Dhaene, T. Efficient space-filling and non-collapsing sequential design strategies for simulation-based modeling. Eur. J. Oper. Res. 2011, 214, 683–696.
8. Ma, Y.M.; Xiao, Y.; Wang, J.; Zhou, L.B. Multicriteria Optimal Latin Hypercube Design-Based Surrogate-Assisted Design Optimization for a Permanent-Magnet Vernier Machine. IEEE Trans. Magn. 2022, 58, 1–5.
9. Yao, Y.T.; Yang, M.H.; Wang, J.Y.; Xie, M. Multivariate Time-Series Prediction in Industrial Processes via a Deep Hybrid Network Under Data Uncertainty. IEEE Trans. Ind. Inf. 2023, 19, 1977–1987.
10. van Dam, E.R.; Husslage, B.; Melissen, H.H. Maximin Latin hypercube designs in two dimensions. Oper. Res. 2007, 55, 158–169.
11. Audze, P.; Eglais, V. New approach for planning out of experiments. Probl. Dyn. Strengths 1977, 35, 104–107.
12. Liefvendahl, M.; Stocki, R. A study on algorithms for optimization of Latin hypercubes. J. Stat. Plan. Inference 2006, 136, 3231–3247.
13. Fuerle, F.; Sienz, J. Formulation of the Audze–Eglais uniform Latin hypercube design of experiments for constrained design spaces. Adv. Eng. Softw. 2011, 42, 680–689.
14. Morris, M.D.; Mitchell, T.J. Exploratory designs for computational experiments. J. Stat. Plan. Inference 1995, 43, 381–402.
15. Ye, K.Q.; Li, W.; Sudjianto, A. Algorithmic construction of optimal symmetric Latin hypercube designs. J. Stat. Plan. Inference 2000, 90, 145–159.
16. Jin, R.C.; Chen, W.; Sudjianto, A. An efficient algorithm for constructing optimal design of computer experiments. J. Stat. Plan. Inference 2005, 134, 268–287.
17. Viana, F.A.C.; Venter, G.; Balabanov, V. An algorithm for fast optimal Latin hypercube design of experiments. Int. J. Numer. Meth. Eng. 2010, 82, 135–156.
18. Fang, K.T.; Ma, C.X.; Winker, P. Centered L2-discrepancy of random sampling and Latin hypercube design, and construction of uniform designs. Math. Comput. 2002, 71, 275–296.
19. Doerr, C.; De Rainville, F.M. Constructing Low Star Discrepancy Point Sets with Genetic Algorithms. In Proceedings of the 15th Genetic and Evolutionary Computation Conference (GECCO), Madrid, Spain, 11–15 July 2013; Association for Computing Machinery: Amsterdam, The Netherlands, 2013.
20. De Rainville, F.M.; Gagne, C.; Teytaud, O.; Laurendeau, D. Evolutionary Optimization of Low-Discrepancy Sequences. ACM Trans. Model. Comput. Simul. 2012, 22, 1–25.
21. Bates, S.J.; Sienz, J.; Langley, D.S. Formulation of the Audze–Eglais Uniform Latin Hypercube design of experiments. Adv. Eng. Softw. 2003, 34, 493–506.
22. Zhu, H.G.; Liu, L.; Long, T.; Peng, L. A novel algorithm of maximin Latin hypercube design using successive local enumeration. Eng. Optim. 2012, 44, 551–564.
23. Ba, S.; Myers, W.R.; Brenneman, W.A. Optimal Sliced Latin Hypercube Designs. Technometrics 2015, 57, 479–487.
24. Pholdee, N.; Bureerat, S. An efficient optimum Latin hypercube sampling technique based on sequencing optimisation using simulated annealing. Int. J. Syst. Sci. 2015, 46, 1780–1789.
25. Guiban, K.L.; Rimmel, A.; Weisser, M.A.; Tomasik, J. The First Approximation Algorithm for the Maximin Latin Hypercube Design Problem. Oper. Res. 2018, 66, 253–266.
26. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN 95), Perth, WA, Australia, 27 November–1 December 1995; IEEE: Perth, Australia, 1995.
27. Houssein, E.H.; Gad, A.G.; Hussain, K.; Suganthan, P.N. Major Advances in Particle Swarm Optimization: Theory, Analysis, and Application. Swarm Evol. Comput. 2021, 63, 100868.
28. Kennedy, M.C.; O'Hagan, A. Predicting the output from a complex computer code when fast approximations are available. Biometrika 2000, 87, 1–13.
29. Sengupta, S.; Basak, S.; Peters, R.A., II. Particle Swarm Optimization: A Survey of Historical and Recent Developments with Hybridization Perspectives. Mach. Learn. Knowl. Extr. 2019, 1, 157–191.
30. Bratton, D.; Kennedy, J. Defining a standard for particle swarm optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; IEEE: Honolulu, HI, USA, 2007.
31. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175.
32. Clerc, M.; Kennedy, J. The particle swarm—Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73.
33. Dixit, A.; Mani, A.; Bansal, R. CoV2-Detect-Net: Design of COVID-19 prediction model based on hybrid DE-PSO with SVM using chest X-ray images. Inf. Sci. 2021, 571, 676–692.
34. Gheisari, S.; Meybodi, M.R. BNC-PSO: Structure learning of Bayesian networks by Particle Swarm Optimization. Inf. Sci. 2016, 348, 272–289.
35. Chen, R.B.; Hsieh, D.N.; Hung, Y.; Wang, W.C. Optimizing Latin hypercube designs by particle swarm. Stat. Comput. 2013, 23, 663–676.
36. Aziz, M.; Tayarani-N, M.H. An adaptive memetic Particle Swarm Optimization algorithm for finding large-scale Latin hypercube designs. Eng. Appl. Artif. Intell. 2014, 36, 222–237.
37. Van den Bergh, F.; Engelbrecht, A.P. A cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 225–239.
38. Xu, G.P.; Cui, Q.L.; Shi, X.H.; Ge, H.W.; Zhan, Z.H.; Lee, H.P.; Liang, Y.C.; Tai, R.; Wu, C.G. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51.
39. Ye, H.T.; Luo, W.G.; Li, Z.Q. Convergence Analysis of Particle Swarm Optimizer and Its Improved Algorithm Based on Velocity Differential Evolution. Comput. Intell. Neurosci. 2013, 2013, 384125.
40. Bansal, J.C.; Singh, P.K.; Saraswat, M.; Verma, A.; Jadon, S.S.; Abraham, A. Inertia Weight strategies in Particle Swarm Optimization. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011.
41. Larsen, R.B.; Jouffroy, J.; Lassen, B. On the premature convergence of particle swarm optimization. In Proceedings of the European Control Conference (ECC), Aalborg, Denmark, 29 June–1 July 2016.
42. Qian, P. Sliced Latin Hypercube Designs. J. Am. Stat. Assoc. 2012, 107, 393–399.
43. Santner, T.J.; Williams, B.J.; Notz, W.I. The Design and Analysis of Computer Experiments, 2nd ed.; Springer: Berlin, Germany, 2019.
44. Gong, Y.J.; Zhang, J.; Chung, H.S.H.; Chen, W.N.; Zhan, Z.H.; Li, Y.; Shi, Y.H. An Efficient Resource Allocation Scheme Using Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2012, 16, 801–816.
45. Michaloglou, A.; Tsitsas, N.L. Feasible Optimal Solutions of Electromagnetic Cloaking Problems by Chaotic Accelerated Particle Swarm Optimization. Mathematics 2021, 9, 2725.
46. Alkayem, N.F.; Cao, M.; Shen, L.; Fu, R.; Sumarac, D. The combined social engineering particle swarm optimization for real-world engineering problems: A case study of model-based structural health monitoring. Appl. Soft Comput. 2022, 123, 108919.
47. Michaloglou, A.; Tsitsas, N.L. A Brain Storm and Chaotic Accelerated Particle Swarm Optimization Hybridization. Algorithms 2023, 16, 208.
48. Alkayem, N.F.; Shen, L.; Al-hababi, T.; Qian, X.; Cao, M. Inverse Analysis of Structural Damage Based on the Modal Kinetic and Strain Energies with the Novel Oppositional Unified Particle Swarm Gradient-Based Optimizer. Appl. Sci. 2022, 12, 11689.
49. Bean, J.C. Genetic algorithms and random keys for sequencing and optimization. ORSA J. Comput. 1994, 6, 154–160.
50. Chu, W.; Gao, X.G.; Sorooshian, S. Handling boundary constraints for particle swarm optimization in high-dimensional search space. Inf. Sci. 2011, 181, 4569–4581.
51. Helwig, S.; Wanka, R. Particle swarm optimization in high-dimensional bounded search spaces. In Proceedings of the IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007.
52. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
53. Wu, Q.; Zhang, C.J.; Zhang, M.Y.; Yang, F.J.; Gao, L. A modified comprehensive learning particle swarm optimizer and its application in cylindricity error evaluation problem. Math. Biosci. Eng. 2019, 16, 1190–1209.
Figure 1. Two LHD-based examples with n = 10, m = 2.
Figure 2. Flowchart of the comprehensive particle learning strategy.
Figure 3. Using the ROV rule to convert the position values of particle X into the arrangement of design points Y.
Figure 4. Schematic diagram of the row transformation between the global best position (left) and the personal best position (right). (The distances between the first row on the left and the rows on the right are 7.0, 3.0, 3.741, 3.606, 6.633, and 5.831, so the data in the first and second rows need to be exchanged.)
Figure 5. Four local search strategies, shown on a 4-factor LHD with six design points (the affected entries are shaded): (a) inversion, (b) clipping, (c) center symmetry, (d) exchange.
Figure 6. Schematic diagram of the boundary processing scheme: $x_i$ is the current position, $x'_{i+1}$ is the position after the position update, and $x_{i+1}$ is the final position.
Figure 7. The variation of ϕp with the MPD-based evolutionary strategy.
Figure 8. Numerical results for different refresh intervals.
Figure 9. Performance trends of different LHD algorithms: (a) 2 × 60, (b) 6 × 60, (c) 10 × 60, (d) 14 × 60, and (e) 18 × 60.
Figure 10. Boxplots of different algorithms on the 18 × 80 LHD problem.
Table 1. Comparison of the IHPSO algorithm with CLPSO, AMPSO, PSO, ESE, GA, and LaPSO.

| Dimension | Algorithm | Measure | n = 20 | n = 40 | n = 60 | n = 80 |
|-----------|-----------|---------|--------|--------|--------|--------|
| m = 2 | CLPSO | Mean | 4.27695 | 8.03165 | 11.7084 | 15.1155 |
| | | Std | 0.268134 | 0.314826 | 0.694002 | 1.082523 |
| | AMPSO | Mean | 4.082846 | 6.430663 | 8.770171 | 11.02583 |
| | | Std | 0.082087 | 0.338895 | 0.419517 | 0.576738 |
| | PSO | Mean | 4.055958 | 6.051447 | 8.306216 | 17.43493 |
| | | Std | 8.88 × 10^−16 | 0.315769 | 1.198285 | 1.608196 |
| | ESE | Mean | 5.070727 | 9.852305 | 14.73661 | 19.39307 |
| | | Std | 0.042239 | 0.558617 | 0.912121 | 1.359123 |
| | GA | Mean | 4.062401 | 5.852419 | 7.473188 | 8.786267 |
| | | Std | 0.112058 | 0.149028 | 0.340156 | 0.418199 |
| | LaPSO | Mean | 4.118622 | 6.628554 | 8.190774 | 11.61435 |
| | | Std | 0.087171 | 0.339539 | 0.39903 | 0.261372 |
| | IHPSO | Mean | **3.981064** | **5.825657** | **7.370558** | **8.743883** |
| | | Std | 0.188432 | 0.123855 | 0.27801 | 0.316564 |
| m = 6 | CLPSO | Mean | 0.755957 | 0.999181 | 1.182143 | 1.325422 |
| | | Std | 0.013096 | 0.014428 | 0.018933 | 0.020479 |
| | AMPSO | Mean | 0.690927 | 0.882016 | 1.015902 | 1.118163 |
| | | Std | 0.009545 | 0.014175 | 0.010908 | 0.018196 |
| | PSO | Mean | 0.695391 | 0.964931 | 1.154155 | 1.300088 |
| | | Std | 0.011854 | 0.019534 | 0.023778 | 0.034823 |
| | ESE | Mean | 0.843597 | 1.130684 | 1.338438 | 1.505415 |
| | | Std | 0.010928 | 0.020177 | 0.021391 | 0.022172 |
| | GA | Mean | **0.659701** | **0.798081** | **0.895581** | **0.971953** |
| | | Std | 0.008956 | 0.007465 | 0.007773 | 0.005263 |
| | LaPSO | Mean | 0.667212 | 0.828105 | 0.936295 | 1.027103 |
| | | Std | 0.009503 | 0.009213 | 0.010532 | 0.010696 |
| | IHPSO | Mean | 0.666972 | 0.806242 | 0.906449 | 0.984813 |
| | | Std | 0.009571 | 0.009701 | 0.014546 | 0.020373 |
| m = 10 | CLPSO | Mean | 0.400005 | 0.490980 | 0.550981 | 0.595821 |
| | | Std | 0.004002 | 0.004836 | 0.004982 | 0.005118 |
| | AMPSO | Mean | 0.372430 | 0.444516 | 0.492998 | 0.528137 |
| | | Std | 0.003088 | 0.00411 | 0.00387 | 0.004777 |
| | PSO | Mean | 0.370697 | 0.473244 | 0.537667 | 0.584522 |
| | | Std | 0.005284 | 0.007759 | 0.009553 | 0.010786 |
| | ESE | Mean | 0.434070 | 0.539433 | 0.605248 | 0.653991 |
| | | Std | 0.004917 | 0.005437 | 0.007293 | 0.00531 |
| | GA | Mean | 0.355487 | 0.416147 | 0.45421 | 0.484463 |
| | | Std | 0.002606 | 0.002666 | 0.00242 | 0.002496 |
| | LaPSO | Mean | 0.357563 | 0.419152 | 0.459798 | 0.491216 |
| | | Std | 0.002862 | 0.003152 | 0.003987 | 0.003844 |
| | IHPSO | Mean | **0.355233** | **0.409927** | **0.447199** | **0.475609** |
| | | Std | 0.002995 | 0.002826 | 0.004459 | 0.004361 |
Table 2. Comparison of the proposed IHPSO with CLPSO, AMPSO, PSO, ESE, GA, and LaPSO.

| Dimension | Algorithm | Measure | n = 20 | n = 40 | n = 60 | n = 80 |
|-----------|-----------|---------|--------|--------|--------|--------|
| m = 14 | CLPSO | Mean | 0.269425 | 0.31932 | 0.350386 | 0.373451 |
| | | Std | 0.001927 | 0.002999 | 0.00284 | 0.00256 |
| | AMPSO | Mean | 0.253795 | 0.294597 | 0.321671 | 0.34096 |
| | | Std | 0.001683 | 0.002171 | 0.001937 | 0.002072 |
| | PSO | Mean | 0.252575 | 0.310313 | 0.343301 | 0.366767 |
| | | Std | 0.002471 | 0.004441 | 0.004086 | 0.004428 |
| | ESE | Mean | 0.28967 | 0.344736 | 0.379043 | 0.403719 |
| | | Std | 0.002499 | 0.003196 | 0.003377 | 0.003723 |
| | GA | Mean | 0.246427 | 0.282642 | 0.305583 | 0.322375 |
| | | Std | 0.001653 | 0.001479 | 0.001741 | 0.001401 |
| | LaPSO | Mean | 0.244589 | 0.279448 | 0.301728 | 0.318759 |
| | | Std | 0.001495 | 0.001272 | 0.001639 | 0.001395 |
| | IHPSO | Mean | **0.243145** | **0.275425** | **0.29421** | **0.31047** |
| | | Std | 0.001327 | 0.00225 | 0.001569 | 0.001791 |
| m = 18 | CLPSO | Mean | 0.202025 | 0.235049 | 0.254872 | 0.269295 |
| | | Std | 0.001344 | 0.001654 | 0.001584 | 0.001466 |
| | AMPSO | Mean | 0.192168 | 0.219966 | 0.236842 | 0.24933 |
| | | Std | 0.001495 | 0.001027 | 0.001175 | 0.001658 |
| | PSO | Mean | 0.191344 | 0.227283 | 0.250179 | 0.265241 |
| | | Std | 0.001357 | 0.003413 | 0.002731 | 0.003079 |
| | ESE | Mean | 0.216271 | 0.25211 | 0.273186 | 0.28809 |
| | | Std | 0.001382 | 0.001373 | 0.001873 | 0.001753 |
| | GA | Mean | 0.188378 | 0.213831 | 0.228716 | 0.240114 |
| | | Std | 0.00119 | 0.001074 | 0.000933 | 0.000972 |
| | LaPSO | Mean | 0.185635 | 0.209297 | 0.224207 | 0.234901 |
| | | Std | 0.000784 | 0.00094 | 0.000766 | 0.000898 |
| | IHPSO | Mean | **0.184472** | **0.20569** | **0.218815** | **0.230125** |
| | | Std | 0.000891 | 0.000842 | 0.000889 | 0.001343 |