Article

Multi-Objective Optimization Using Cooperative Garden Balsam Optimization with Multiple Populations

1 College of Information Engineering, Pingdingshan University, Pingdingshan 467002, China
2 College of Information Science and Technology, Donghua University, Shanghai 201620, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(11), 5524; https://doi.org/10.3390/app12115524
Submission received: 21 April 2022 / Revised: 25 May 2022 / Accepted: 26 May 2022 / Published: 29 May 2022

Abstract

Traditional multi-objective evolutionary algorithms (MOEAs) treat the multiple objectives of a multi-objective optimization problem (MOP) as a whole. In this paper, a hybridization of garden balsam optimization (GBO) is presented for multi-objective optimization, in which multiple populations are applied to the multiple objectives individually. Moreover, in order to improve the diversity of the solutions, both crowding distance computations and the epsilon dominance relation are adopted when updating the archive. Furthermore, an efficient leader selection procedure is incorporated into the resulting co-evolutionary multi-swarm garden balsam optimization (CMGBO) to ensure convergence to well-diversified Pareto regions. The performance of the algorithm is validated on 12 test functions, and the algorithm is further employed to solve four real-world engineering problems. The results corroborate the advantage of the proposed algorithm with regard to convergence and diversity.

1. Introduction

In recent years, numerous real-life issues have been formulated as MOPs in fields such as manufacturing process optimization [1,2]. In single-objective optimization, the only challenge is finding a single optimal solution. MOPs, by contrast, have multiple conflicting objectives which need to be optimized simultaneously [3,4]. Therefore, two goals, commonly known as convergence and diversity, are set forth.
Several population-based algorithms have been developed for solving MOPs, a popular one being NSGA-II [5]. In this method, the population is classified via a non-dominated sorting procedure through which a rank of non-dominance is attributed to each individual, and individuals of the same rank are further ordered by the crowding distance operator, which assesses the density of the region around a given individual. To improve diversity, SPEA2 was proposed in [6]; it includes a new fitness assignment based on a density estimator and uses a truncation technique when updating an archive of predefined size. Several further attempts have built on the success of such approaches. For example, Pareto solutions were stored in an external repository in multi-objective PSO (MOPSO) [7], together with an effective Gbest selection strategy in which the objective space is divided into adaptive grids to maintain the diversity of solutions. In addition, to improve both convergence and diversity, the ε-dominance multi-objective evolutionary algorithm (ε-MOEA) was developed using the ε-dominance relation [8]. As another example, the decomposition-based MOEA/D framework [9] decomposes the master multi-objective problem into several weighted sub-problems and solves them simultaneously.
The preference-inspired co-evolutionary algorithm using goal vectors (PICEA-g) [10] has also been utilized in multi-objective optimization. NN-DNSGA-II [11] combines an artificial neural network with the NSGA-II algorithm. The Indicator-Based Evolutionary Algorithm (IBEA) [12] adopts a different approach, in which quality indicators are used directly in the optimization process. Moreover, to solve multi-objective scheduling problems, a chaotic PICEA-g-based algorithm was introduced in [10].
In recent years, new multi-objective methods have emerged that are mainly based on swarm intelligence algorithms. Similar to MOPSO, the multi-objective Grey Wolf Optimizer (MOGWO) was proposed in [13]. The multi-objective Ant Lion Optimizer (MOALO) was set forth in [14], in which an archive and a solution-coverage-based roulette selection are used to store the Pareto solutions and to drive the circulation of solutions, respectively. In [15], the chicken swarm optimizer was combined with an external population, using crowding calculations and ε-dominance to maintain diversity when solving MOPs. In the same context, the whale optimization algorithm was applied to engineering design problems in [16], where a sorting and updating method based on the NSGA-II crowding distance guides the population toward regions of the search space with good diversity. Other well-known proposals include the multi-objective Spotted Hyena Optimizer (MOSHO) [17], the hybrid multi-objective firefly algorithm (HMOFA) [18], the hybrid multi-objective cuckoo search (HMOCS) [19], the multi-objective Salp Swarm Algorithm (MSSA) [20], the multi-objective Artificial Sheep Algorithm (MOASA) [21] and the multi-objective Seagull Optimization Algorithm (MOSOA) [22].
Most of the MOEAs mentioned above suffer from the fitness assignment problem: the efficiency of the search engine is substantially low unless fitness assignment is properly designed. To avoid this issue, Zhan et al. proposed the co-evolutionary multi-swarm PSO (CMPSO) algorithm [23], in which multiple populations are used to optimize the individual objectives simultaneously, and an external archive is set up to preserve the non-dominated solutions found. The archived information is used both to guide the independent evolution of each population and to share information among the populations. Inspired by this work, a cooperative multi-objective differential evolution (CMODE) method was proposed by Wang et al. [24] to solve MOPs; it follows the same scheme as CMPSO but replaces the search engine with an adaptive differential evolution algorithm.
Inspired by CMPSO and CMODE, a hybrid cooperative multi-objective GBO (CMGBO) is introduced here, in which multiple populations are evolved to meet the multiple objectives. The choice of the GBO algorithm is mainly motivated by its novelty and its robustness in converging to potentially optimal solutions as well as in discovering promising regions of the search space. The use of ε-dominance and of an external archive is justified by the encouraging results these techniques have achieved in the field of multi-objective optimization. The main contributions of this study can be summarized as follows:
  • Cooperative multi-objective garden balsam optimization (CMGBO) is proposed for solving MOPs;
  • To maintain a limited size of the Pareto set, an external archive is integrated into the CMGBO algorithm;
  • The ε-dominance relationship is adopted to update the archive;
  • Crowding distance calculations are used within the objective space to improve the diversity of solutions;
  • An effective strategy of the leader’s selection is proposed to further improve both the convergence and diversity of the Pareto front.
The rest of the paper is structured as follows. Section 2 introduces the fundamental concepts of MOPs and offers an overview of the GBO algorithm. Section 3 outlines the proposed CMGBO algorithm. The analyses and discussions of the experimental results are presented in Section 4. Finally, Section 5 concludes the paper.

2. Background

2.1. Multi-Objective Optimization Problem (MOP)

An MOP consists of finding a set of decision vectors which optimize an objective vector whose elements represent an ensemble of conflicting objective functions.
Mathematically, an MOP (in the minimization case) is described as follows [25]:

$$\min F(x) = [f_1(x), f_2(x), \ldots, f_m(x)]^T \quad \text{s.t.} \quad g_j(x) \le 0, \; j = 1, 2, \ldots, p$$

where $x = (x_1, x_2, \ldots, x_n)^T \in X$ is the decision vector made up of $n$ decision variables, $X$ represents the decision space, $f_i(x)$, $i = 1, 2, \ldots, m$, are the objective functions, $m$ denotes the number of objective functions and $g_j(x)$ indicates the $j$th constraint function. Since several objective functions are optimized simultaneously, the concept of dominance is used to compare solutions.
In the minimization case, a decision vector $u \in X$ dominates another vector $v \in X$ (written $u \prec v$) if and only if:

$$f_i(u) \le f_i(v) \;\; \forall i \in \{1, 2, \ldots, m\} \quad \text{and} \quad \exists j \in \{1, 2, \ldots, m\}: f_j(u) < f_j(v)$$
A solution that is not dominated by any other feasible solution is called non-dominated, or a Pareto solution. The set of all non-dominated solutions is known as the Pareto set and is defined as:

$$PS = \{u \in X \mid \nexists\, v \in X : v \prec u\}$$

The image of the Pareto set $PS$ under the objective vector is the Pareto front (commonly denoted PF).
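As a concrete check of these definitions, the dominance test and Pareto-set filter can be sketched in a few lines of Python (function names are illustrative, not from the paper):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))

def pareto_set(points):
    """Filter a list of objective vectors down to its non-dominated subset."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Applied to the vectors `[(1, 3), (2, 2), (3, 1), (2, 3)]`, the filter discards `(2, 3)` because `(1, 3)` dominates it, while the first three remain mutually non-dominated.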

2.2. Epsilon Dominance

ε-dominance is a relaxed dominance relation proposed in [26], through which the performance of multi-objective evolutionary algorithms (MOEAs) can be improved. For a given relaxation vector $\varepsilon \in \mathbb{R}^m$ (with $m$ the number of objective functions and $\varepsilon_i > 0$), a solution $u$ is said to ε-dominate $v$ (denoted $u \prec_\varepsilon v$) if the condition $f_i(u) - \varepsilon_i \le f_i(v)$ holds for all objective functions $f_i$.
ε-dominance is an effective technique for maintaining the diversity of multi-objective optimization algorithms without losing convergence properties toward the Pareto-optimal set. Furthermore, due to its uncomplicated implementation, decision makers can intuitively control the number of available solutions.
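Under the additive form of the relation given above, the test is a one-liner; the sketch below assumes minimization and uses illustrative names:

```python
def eps_dominates(u, v, eps):
    """Additive epsilon-dominance for minimization: u eps-dominates v if
    f_i(u) - eps_i <= f_i(v) for every objective i."""
    return all(ui - ei <= vi for ui, vi, ei in zip(u, v, eps))
```

Note that, unlike strict Pareto dominance, a solution may ε-dominate one that is slightly better than it, which is exactly what lets the archive discard near-duplicates and stay bounded.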

2.3. Performance Metrics

In general, the performance of a multi-objective optimization method is evaluated by assessing its convergence, uniformity and spread. These criteria examine, respectively, the degree of approximation (distance) between the obtained solution set and the true Pareto front, the uniformity of the solution set's distribution on the PF and the breadth of the solution set's coverage of the objective space. The obtained solution set is expected to be as close to the PF, as evenly distributed on it and as complete in expressing it as possible. In our case, two widely used performance metrics, IGD and HV, are adopted for the comparison.
  • Inverted Generational Distance (IGD)
The IGD metric is one of the most widely used metrics in the field of MOPs, measuring both the convergence and the diversity of an algorithm. It is mathematically formulated as [27]:

$$\mathrm{IGD}(A, P^*) = \frac{1}{|P^*|} \sum_{i=1}^{|P^*|} \min_{a_j \in A} d(p_i, a_j)$$

where $A$ is the solution set obtained by the algorithm, $P^*$ denotes a solution set uniformly sampled along the true PF and $d(p_i, a_j)$ is the Euclidean distance between the two points. Note that a lower IGD indicates better performance.
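The IGD formula can be computed directly from its definition; the following pure-Python sketch (illustrative names) averages the distance from each reference point to its nearest obtained solution:

```python
import math

def igd(A, P_star):
    """Inverted generational distance: mean, over reference points p in P*,
    of the Euclidean distance from p to its nearest solution in A
    (lower is better)."""
    def d(p, a):
        return math.sqrt(sum((pi - ai) ** 2 for pi, ai in zip(p, a)))
    return sum(min(d(p, a) for a in A) for p in P_star) / len(P_star)
```

When the obtained set coincides with the sampled front, IGD is zero; missing a region of $P^*$ inflates the average, which is why IGD captures diversity as well as convergence.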
  • Hyper-Volume Metric (HV)
HV computes the volume of the region in the objective space enclosed by the non-dominated solution set and a reference point. Although it evaluates convergence and diversity simultaneously without requiring the true PF, it has high computational complexity [28]. Mathematically, HV is formulated as:

$$\mathrm{HV} = \delta\left(\bigcup_{i=1}^{|S|} v_i\right)$$

where $\delta$ is the Lebesgue measure, $S$ denotes the non-dominated solution set and $v_i$ denotes the hypercube spanned by the $i$th solution and the reference point.
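For the common two-objective case, the hypervolume reduces to a sum of rectangle areas once the front is sorted; the sketch below assumes a minimization front of mutually non-dominated points and a given reference point (names are illustrative):

```python
def hv_2d(front, ref):
    """Hypervolume (area) dominated by a 2-D minimization front with respect
    to a reference point; assumes mutually non-dominated points, so sorting
    by f1 ascending yields f2 descending."""
    pts = sorted(front)
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        area += (ref[0] - f1) * (prev_f2 - f2)   # new horizontal slab
        prev_f2 = f2
    return area
```

In higher dimensions this slab decomposition no longer works and exact HV computation grows exponentially with the number of objectives, which is the complexity cost mentioned above.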

2.4. Garden Balsam Optimization

The GBO algorithm simulates the expansion and propagation of garden balsam. In each iteration, a mechanical propagator, a secondary propagator, a mapping rule and a selection strategy are applied in turn until the termination condition is satisfied, i.e., either the accuracy requirement of the problem is met or the maximum number of iterations is reached [29,30]. The flow chart of the GBO algorithm is shown in Figure 1.
Here are the steps involved in the dispersal of the garden balsam population:
(1)
Initialization: a few seeds scattered randomly over a specific area take root and produce the first-generation population;
(2)
Progeny reproduction: The natural conditions in the growing area cause each plant in the first-generation population to show different growth rates. The stronger plants bear more fruits and spawn more seeds.
As a consequence, an individual $x$ produces the following number of seeds:

$$S = \frac{f_{max} - f(x)}{f_{max} - f_{min}} \times (S_{max} - S_{min}) + S_{min}$$

where $f(x)$ is the fitness value, $f_{max}$ is the current population's maximum fitness value, $f_{min}$ is the current population's minimum fitness value, $S_{max}$ is the upper limit on the number of seeds and $S_{min}$ is the minimum number of seeds.
(3)
Mechanical transmission: Plants in good growing conditions bear fully grown fruits with more powerful ejection forces and, consequently, eject their seeds farther.
The range of seed diffusion is calculated as follows:

$$A = \left(\frac{iter_{max} - iter}{iter_{max}}\right)^n \times \frac{f_{max} - f(x)}{f_{max} - f_{min}} \times A_{init}$$

When $f_{max} - f(x) = 0$ or $iter_{max} - iter = 0$, $A = \varepsilon$, where $\varepsilon$ is a small minimum value, $iter$ is the current evolutionary iteration, $iter_{max}$ is the maximum number of iterations and $n$ is a nonlinear harmonic factor.
(4)
Second transmission: For population diversity to increase, seeds are randomly transported from place to place by animals, water and wind.
Its formulation is as follows:

$$x_1' = x_B + F \times (x_2 - x_3)$$

where $x_1'$ is the new position of $x_1$ after the second transmission, $x_B$ denotes the best position, $F$ is the zoom factor and $x_2$ and $x_3$ are the positions of two different seeds.
(5)
Competition-based elimination: The population size of a specific region is limited by N m a x . When the population size reaches the upper limit, elite seeds are retained, and redundant seeds are randomly eliminated. The number of elite seeds is calculated using Formula (9).
$$N_{best} = \frac{iter}{iter_{max}} \times N_{max}$$

where $N_{best}$ indicates the number of elite solutions, and $iter$ and $iter_{max}$ are as in Formula (7).
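The three quantitative rules above, for the seed count, the dispersion range and the elite count, can be sketched as follows (function and parameter names are illustrative; the guards for degenerate denominators follow the remark after the dispersion formula):

```python
def seed_count(f_x, f_max, f_min, s_max, s_min):
    """Progeny reproduction: fitter plants (smaller f for minimization)
    spawn more seeds; the result is rounded to an integer."""
    if f_max == f_min:                      # all fitness values equal
        return s_max
    ratio = (f_max - f_x) / (f_max - f_min)
    return round(ratio * (s_max - s_min) + s_min)

def dispersion_range(f_x, f_max, f_min, it, it_max, n, a_init, tiny=1e-12):
    """Mechanical-transmission radius: shrinks nonlinearly over iterations
    and with relative fitness; falls back to a tiny value at the boundary
    cases named in the text (f_max - f(x) = 0 or iter_max - iter = 0)."""
    if f_max == f_x or it_max == it:
        return tiny
    return ((it_max - it) / it_max) ** n * (f_max - f_x) / (f_max - f_min) * a_init

def elite_count(it, it_max, n_max):
    """Number of elite seeds retained during competition-based elimination."""
    return round(it / it_max * n_max)
```

The dispersion radius shrinks as iterations progress, so GBO shifts from exploration to exploitation, while the elite count grows, preserving more of the best seeds late in the run.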

3. CMGBO: Cooperative Multi-Objective Garden Balsam Optimization

The details of the proposed ε-dominance-based CMGBO algorithm are outlined in this section. To handle the multiple objectives, multiple populations are maintained in CMGBO, and a GBO is run for each population to optimize the corresponding objective. In addition, an archive is integrated to save the non-dominated solutions found so far. By using information from the other populations through the archive, each population searches along the PF. Figure 2 demonstrates the framework of CMGBO, in which Subpop and obj represent the populations and the optimization targets. The stages of the proposed cooperative multi-objective garden balsam optimization (CMGBO) are described in the following subsections.

3.1. Co-Evolutionary Mechanism

For convenience of description, the evolution process of GBO is analyzed using the mth population as an example. Starting from initialization, CMGBO randomly generates the mth population for the mth optimization objective within a given range. At the end of each evolution step, the population's minimum and maximum fitness values are determined, and Formulas (6) and (7) are used to calculate $S_i$ and $A_{m,i}$.
The seed dispersion of the ith solution $X_{m,i}$ is then given by:

$$X_{m,i}(iter+1) = X_{m,i}(iter) + A_{m,i}(iter) \times U(-1, 1) + P \times (A_{m,r}(iter) - X_{m,i}(iter))$$

where $P$ is the scale parameter, $A_{m,r}(iter)$ denotes a randomly chosen solution in the archive and $P \times (A_{m,r}(iter) - X_{m,i}(iter))$ represents the shared information fed back from the other sub-populations. Thus, each population can not only use the search information of its own history, but also share the search information of the other sub-populations. Accordingly, each solution spreads new seeds across the whole PF instead of being drawn to the edge by the search information of its own group alone.
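A minimal sketch of this archive-guided dispersion step, with illustrative names and uniform noise standing in for $U(-1, 1)$, might look as follows:

```python
import random

def disperse(x, a_i, archive, p):
    """One seed dispersal step: per-dimension random jitter within the
    dispersion radius a_i, plus attraction of strength p toward a randomly
    chosen archive member (the information shared by other sub-populations)."""
    guide = random.choice(archive)
    return [xj + a_i * random.uniform(-1.0, 1.0) + p * (gj - xj)
            for xj, gj in zip(x, guide)]
```

With `p = 0` the update degenerates to plain GBO dispersal around the parent; with `a_i` near zero the seed moves straight toward the archive guide, which is how the archive pulls each population along the PF.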

3.2. Updating the Archive

For the CMGBO algorithm, a limited-size external population called the "archive" is used. This archive serves two purposes: it stores the non-dominated solutions and it guides the update of the seed populations. The archive needs to be updated in each iteration, and the ε-dominance approach is used to update its solutions. Each solution in the archive is associated with an identification vector $O = (O_1, O_2, \ldots, O_m)^T$, with $m$ the number of objectives:

$$O_i(f) = \left\lfloor \frac{\log f_i}{\log(\varepsilon + 1)} \right\rfloor$$
In the proposed archiving strategy, each of the solutions is compared to all of the archive’s elements. The details of the proposed archiving strategy, including all the possible cases, are presented in Algorithm 1.
Algorithm 1. Pseudo-code for refreshing the archive population.
Require: A(t): the external archive at iteration t; v: the new seed.
1:  if ∃ a ∈ A(t) such that O(a) ≺ O(v) then
2:     v is rejected
3:  else if ∃ a ∈ A(t) such that O(v) ≺ O(a) then
4:     v replaces a
5:  else if ∃ a ∈ A(t) such that O(v) = O(a) then
6:    if v and a do not dominate each other then
7:       keep the seed with the smallest distance from the box corner O
8:    else
9:       keep the seed that dominates the other
10:   end if
11: else
12:   add the seed v to A(t)
13: end if
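A possible Python rendering of this archiving strategy, combining the box-index vector O with the case analysis of Algorithm 1, is given below. It is a sketch under the assumptions of positive objective values, a scalar ε and at most one archive member per box; all names are illustrative:

```python
import math

def box(f, eps):
    """Identification vector O: per-objective box index under the
    log-scale epsilon grid (objective values assumed positive)."""
    return tuple(math.floor(math.log(fi) / math.log(1.0 + eps)) for fi in f)

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, v, eps):
    """Insert one candidate seed v into the archive (sketch of Algorithm 1)."""
    bv = box(v, eps)
    if any(dominates(box(a, eps), bv) for a in archive):
        return archive                               # v's box is dominated: reject
    kept = [a for a in archive if not dominates(bv, box(a, eps))]
    same = [a for a in kept if box(a, eps) == bv]    # occupant of the same box
    if same:
        a = same[0]
        if dominates(a, v):
            return kept                              # occupant dominates: keep it
        if dominates(v, a):
            kept.remove(a)                           # v dominates: replace it
        else:                                        # mutually non-dominated:
            corner = [(1.0 + eps) ** bi for bi in bv]
            da = sum((ai - ci) ** 2 for ai, ci in zip(a, corner))
            dv = sum((vi - ci) ** 2 for vi, ci in zip(v, corner))
            if da <= dv:
                return kept                          # keep the one nearer the corner
            kept.remove(a)
    return kept + [v]
```

Because each grid box holds at most one solution, the archive size stays bounded regardless of how many non-dominated seeds the populations generate.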

3.3. Crowding Distance Operator

To assess the solutions’ density around a given solution in the objective space, a crowding distance mechanism is used as the sorting operator of the archive. The crowding distance computation is presented in Algorithm 2.
Algorithm 2. Crowding distance computation.
Require: A: the external archive
1:  l = |A|
2:  for each a_i ∈ A do
3:      A[a_i].distance = 0
4:  end for
5:  for each objective m do
6:      A = sort(A, m)
7:      A[1].distance = A[l].distance = ∞
8:      for i = 2 to l − 1 do
9:          A[i].distance = A[i].distance + (A[i+1].m − A[i−1].m)
10:     end for
11: end for
To improve the diversity of the method, the crowding distance is integrated into the archive update procedure of our proposal. The crowding distance is first calculated for each solution in the archive, and the archive is then sorted by crowding distance. Next, the last CurrSize − MaxSize solutions are eliminated from the archive. With this strategy, the most crowded CurrSize − MaxSize solutions in the objective space are removed, leading to better diversity.
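The crowding distance computation of Algorithm 2 can be sketched as follows (unnormalized, exactly as in the pseudo-code; the index bookkeeping is illustrative):

```python
def crowding_distances(front):
    """Crowding distance of each solution in a list of objective vectors:
    boundary points along each objective get infinity, inner points
    accumulate the span between their two sorted neighbors."""
    n, m = len(front), len(front[0])
    dist = {i: 0.0 for i in range(n)}
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += front[order[k + 1]][obj] - front[order[k - 1]][obj]
    return [dist[i] for i in range(n)]
```

Sorting the archive by these values in decreasing order puts the most isolated solutions first, so truncating the tail removes exactly the most crowded ones.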

3.4. General Operating of the CMGBO Algorithm

The CMGBO algorithm starts by randomly generating the set of seeds. Based on epsilon dominance, the method identifies the non-dominated solutions to initialize the archive before the main loop. Subsequently, the new seed positions are calculated according to the GBO scheme presented earlier in this paper. New seeds are treated as archive candidates, using the archive update procedure described above. In each iteration, the crowding distance of each solution is calculated and the archive is sorted in decreasing order of crowding distance; the most crowded seeds are then deleted from the oversized archive. Finally, as shown in Algorithm 3, the algorithm returns the external archive containing the resulting Pareto front.
Algorithm 3. CMGBO algorithm.
Require: A; iter = 0; number of function evaluation, NFE = 0;
1:  for  m = 1 M  do %M is the number of objectives%
2:   for  i = 1 N i n i t  do % N i n i t is the pop size%
3:    randomly initialize individual p o p m i for the m th objective;
4:    compute objective values of p o p m i ;
5:    NFE = NFE + 1;
6:   end for
7:  end for
8:  refresh A by sub-populations ( m = 1 M p o p m );
9:  while iter < itermax do
10:  for  m = 1 M  do
11:    s e e d m = ϕ ; % ϕ empty set%
12:   calculate the fitness value of each seed;
13:   for i = 1 p o p m  do
14:    compute S x i of x i p o p m ;
15:    compute the dispersion range of new seeds, A x i of x i p o p m ;
16:    for k = 1 to S(x_i) do
17:     generate new seeds of i th parent in m th population;
18:    end for
19:     N F E = N F E + n x i ;
20:   end for
21:   for j = 1 N s e c do
22:    randomly select a seed for second transmission;
23:   end for
24:    N F E = N F E + N s e c ;
25:   evaluate the new offspring s e e d m ;
26:    p o p m n e w = p o p m s e e d m ;
27:   sort the whole population ( p o p m n e w );
28:   Calculate the number of elite solutions N b e s t ;
29:    N b e s t solutions with the optimal adaptive value from p o p m n e w are selected to form the elite solution set E m ;
30:   if N_max < |pop_m^new| then
31:    truncate the population with the elite-random selection algorithm until |pop_m^new| = N_max;
32:   end if
33:  end for
34:  update archive population according to Algorithm 1;
35:   i t e r = i t e r + 1 ;
36: end while
Output: the members of A.

4. Results and Discussion

4.1. Experiment Settings

To examine the proposed algorithm, four algorithms were selected for comparison: MOEA with decomposition and DE operators (MOEA/D-DE) [31], CMODE [24], multi-objective comprehensive learning PSO (MOCLPSO) [32] and CMPSO [23]. The parameters of these algorithms are summarized in Table 1. Performance was scored using the IGD and HV indicator metrics on the ZDT (two-objective) [25], UF (two-objective) [33], DTLZ (tri-objective) [34] and MaF (many-objective) [35] benchmark test functions. In total, 15 test problems (5 ZDTs, 2 UFs, 2 DTLZs and 6 MaFs) were used; their characteristics are described in Table 2, where M is the number of objectives. All of the algorithms were executed 30 times.

4.2. Experimental Results on ZDT Problems

Table 3 compares the experimental results of the five methods on the ZDT problems. The results show the satisfactory performance of CMGBO on ZDT problems with both convex and concave PFs; it performed best on ZDT1 (convex PF) and on ZDT2 and ZDT6 (concave PFs). These three problems are single-peak, two-objective problems, indicating the strong ability of CMGBO to approximate the PF of multi-objective problems with simple objectives. CMGBO was third best on ZDT3, whose PF is a disconnected convex front, and not far behind MOCLPSO, which performed best there. As Table 3 also shows, all the algorithms performed poorly on ZDT4, with CMGBO again third best; this may be due to the multi-modal Rastrigin component of ZDT4, which produces many local PFs. Considering the overall and average performance in all aspects, CMGBO demonstrated the best performance among the five algorithms. Together with the comparable CMPSO, the test results verified the significantly superior performance of CMGBO on the ZDT problem set compared with the other three competitors.
Figure 3 shows the non-dominated solutions achieved by the different algorithms on the five ZDT problems. Since some of the algorithms performed similarly to CMGBO on the same problems, only CMGBO is used as a representative in the plots. As can be observed in Figure 3, the solution set achieved by CMGBO not only closely approximates the whole PF, but is also well distributed along it.

4.3. Experimental Results on UF and DTLZ Problems

As can be observed in Table 4, CMODE performed best on UF1 and CMGBO second. The performances on the DTLZ problems are also compared in this section. The results reveal that, although MOCLPSO performed best on DTLZ1, CMGBO ranked third; CMPSO and CMGBO achieved the best and second-best performance on DTLZ7, respectively. On average, CMGBO and CMPSO ranked first and second on the UF and DTLZ problems. Being significantly better than the other three competitors, these two algorithms were also confirmed by the Wilcoxon rank sum test to occupy first place in performance.
Figure 4 shows the non-dominated solutions achieved by the different methods on the UF1 problem. CMODE and CMGBO displayed the best uniformity on UF1, and MOCLPSO the poorest. The solutions achieved by the compared methods on the UF5 problem are shown in Figure 5. In Figure 5e, it can be seen that the solutions of CMGBO were the closest to the Pareto front of UF5, while in Figure 5a, MOEA/D-DE demonstrated the poorest performance, with the largest distance between the achieved solutions and the PF of UF5.
Figure 6 indicates that the solutions achieved by MOCLPSO approximated the Pareto front best on the DTLZ1 problem, with CMPSO and CMGBO in second and third place, respectively; the IGD values of the three are, however, not significantly different. MOEA/D-DE and CMODE performed poorly, which can mainly be attributed to the unsatisfactory distribution uniformity of their solutions. Furthermore, an obvious local concentration phenomenon resulted in relatively large IGD values for these two algorithms.
Figure 7 shows the solutions achieved by the different algorithms on the DTLZ7 problem. As can be observed, the solution set achieved by CMGBO was relatively close to the true PF of DTLZ7, ranking second behind CMPSO. The distribution of the solutions achieved by MOEA/D-DE and CMODE on the Pareto front was not satisfactory: the non-dominated solutions achieved by MOEA/D-DE were mainly concentrated in two regions, and those achieved by CMODE were distributed along the edges.

4.4. Experimental Results on the MaF Test Functions

The results of the IGD metric on the MaF test functions are shown in Table 5. For the MaF1 and MaF2 test functions, the CMPSO algorithm outperforms the other algorithms, with our approach ranking directly after it. For the MaF3, MaF4, MaF5 and MaF6 test functions, the CMGBO algorithm dominates its competitors, which proves the high competitiveness of our approach with respect to convergence on this set of test functions. The good convergence of the proposed CMGBO algorithm can be clearly observed in Figure 8, Figure 9 and Figure 10. The results for the HV metric are listed in Table 6. It is evident that CMGBO obtains the best statistical results in most cases, which means that the proposed algorithm benefits from a good equilibrium between convergence and diversity compared to the selected methods.
For the Wilcoxon test, the results are presented in Table 7 and Table 8. In almost all the adopted test functions, the p-values of the CMGBO algorithm compared to each of the selected algorithms are less than 1%, suggesting that the proposed CMGBO algorithm significantly outperforms its competitors at the 99% confidence level.

4.5. Discussion

CMGBO, as designed in this paper, uses as many swarms as there are objectives and lets each swarm focus on optimizing one objective. These swarms work cooperatively and communicate through an external shared archive. CMGBO benefits from the following three aspects when solving MOPs:
(1)
As each swarm focuses on optimizing one objective, it can use conventional or any other improved GBO to solve a single-objective problem. Importantly, the difficulty of fitness assignment can be avoided;
(2)
As an external shared archive is used to store the non-dominated solutions found by different swarms, and because the shared-archive information is used to guide the seed update, the algorithm can use the whole search information to approximate the whole PF quickly;
(3)
As ε-dominance is performed on the archived solutions in the update process, the algorithm is able to avoid local PFs. This is helpful for MOPs with multimodal objective functions or with complicated Pareto sets.
The performance of the proposed CMGBO was tested on different sets of MOPs with various objective functions, PFs and Pareto sets. CMGBO was compared with several state-of-the-art and modern MOEAs and MOPSOs on the basis of IGD and HV. The experimental results show that CMGBO not only generally outperforms the compared algorithms on the ZDT problems, but also performs remarkably well on the DTLZ problems. When dealing with UF problems with complicated Pareto sets, CMGBO is also among the most promising algorithms. Furthermore, the benefit of the shared archive used in CMGBO was investigated, demonstrating both the benefit of ε-dominance in introducing diversity to avoid local PFs and the benefit of the archived information in guiding the seeds to approximate the PF quickly. Lastly, the results indicate that the MPMO (multiple populations for multiple objectives) technique may help reduce the search complexity for each swarm, so that a small population size is sufficient to obtain good performance.
The CMGBO algorithm proposed in this paper is an instantiation of the MPMO technique using GBO. As MPMO is a new and general technique that uses multiple populations to tackle the multiple objectives of MOPs, there are still many interesting issues worth further study. Future research will include the following aspects:
(1)
Applying other algorithms and their improved variants to realize MPMO, for further evaluation of the performance of multiple-population algorithms in solving MOPs. The information-sharing and population-communication strategies should be redesigned when new optimization algorithms are used; for example, individuals could perform crossover among different populations for information sharing and communication in a GA;
(2)
Using the MPMO-based multi-objective algorithms to solve MOPs with many objectives and to solve those problems in dynamic, noisy and uncertain environments;
(3)
Applying the new MPMO-based multi-objective algorithms to real-world problems.

5. Conclusions

A new multi-objective algorithm was proposed based on the recent garden balsam optimization method. To this end, the ε-dominance relation was adopted as an archiving technique, and an external archive was used to store the Pareto solutions achieved so far. Moreover, an effective leader selection procedure derived from crowding distance computations was employed to lead the proposed CMGBO algorithm toward a well-distributed PF.
To demonstrate the efficiency of CMGBO, 15 benchmark functions of varying difficulty were used, and the algorithm was compared with 4 powerful and recent algorithms. In terms of convergence and diversity, the proposed method demonstrated results comparable and often superior to those of the selected methods. For future research, we recommend applying the algorithm to discrete optimization problems, such as the feature selection problem.

Author Contributions

S.L. proposed the problems to be studied and pointed out research ideas. X.W. completed the research process and was a major contributor in writing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Specialized Research and Development Breakthrough Program of Henan Province [222102320456] and the National Key Technologies R&D Program of China [2018YFB1308800]. The APC was funded by grant [222102320456].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Andersson, J. A Survey of Multiobjective Optimization in Engineering Design; Department of Mechanical Engineering, Linköping University: Linköping, Sweden, 2000. [Google Scholar]
  2. Marler, R.T.; Arora, J.S. Survey of multi-objective optimization methods for engineering. Struct. Multidiscip. Optim. 2004, 26, 369–395. [Google Scholar] [CrossRef]
  3. Cagnina, L.C.; Esquivel, S.C.; Coello, C.A.C. Solving engineering optimization problems with the simple constrained particle swarm optimizer. Informatica 2008, 32, 319–326. [Google Scholar]
  4. Deb, K. Multi-Objective Optimization using Evolutionary Algorithms; John Wiley & Sons Inc.: New York, NY, USA, 2001. [Google Scholar]
  5. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  6. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; TIK-Report; Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK): Zürich, Switzerland, 2001; Volume 103. [Google Scholar] [CrossRef]
  7. Coello, C.A.C.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279. [Google Scholar] [CrossRef]
  8. Deb, K.; Mohan, M.; Mishra, S. Evaluating the ε-Domination Based Multi-Objective Evolutionary Algorithm for a Quick Computation of Pareto-Optimal Solutions. Evol. Comput. 2005, 13, 501–525. [Google Scholar] [CrossRef]
  9. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  10. Paknejad, P.; Khorsand, R.; Ramezanpour, M. Chaotic improved piceag-based multi-objective optimization for workflow scheduling in cloud environment. Future Gener. Comput. Syst. 2021, 117, 12–28. [Google Scholar] [CrossRef]
  11. Ismayilov, G.; Topcuoglu, H.R. Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing. Future Gener. Comput. Syst. 2020, 102, 307–322. [Google Scholar] [CrossRef]
  12. Zitzler, E.; Künzli, S. Indicator-based selection in multiobjective search. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2004; pp. 832–842. [Google Scholar]
  13. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.d.S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119. [Google Scholar] [CrossRef]
  14. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2016, 46, 79–95. [Google Scholar] [CrossRef]
  15. Zouache, D.; Abdelaziz, F.B.; Lefkir, M.; Chalabi, N.E.-H. Guided moth-flame optimiser for multi-objective optimization problems. Ann. Oper. Res. 2021, 296, 877–899. [Google Scholar] [CrossRef]
  16. Got, A.; Moussaoui, A.; Zouache, D. A guided population archive whale optimization algorithm for solving multiobjective optimization problems. Expert Syst. Appl. 2019, 141, 112972. [Google Scholar] [CrossRef]
  17. Dhiman, G.; Kumar, V. Multi-objective spotted hyena optimizer: A Multi-objective optimization algorithm for engineering problems. Knowl.-Based Syst. 2018, 150, 175–197. [Google Scholar] [CrossRef]
  18. Wang, H.; Wang, W.; Cui, L.; Sun, H.; Zhao, J.; Wang, Y.; Xue, Y. A hybrid multi-objective firefly algorithm for big data optimization. Appl. Soft Comput. 2018, 69, 806–815. [Google Scholar] [CrossRef]
  19. Zhang, M.; Wang, H.; Cui, Z.; Chen, J. Hybrid multi-objective cuckoo search with dynamical local search. Memetic Comput. 2018, 10, 199–208. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  21. Lai, X.; Li, C.; Zhang, N.; Zhou, J. A multi-objective artificial sheep algorithm. Neural Comput. Appl. 2019, 31, 4049–4083. [Google Scholar] [CrossRef]
  22. Dhiman, G.; Singh, K.K.; Soni, M.; Nagar, A.; Dehghani, M.; Slowik, A.; Kaur, A.; Sharma, A.; Houssein, E.H.; Cengiz, K. MOSOA: A new multi-objective seagull optimization algorithm. Expert Syst. Appl. 2021, 167, 114150. [Google Scholar] [CrossRef]
  23. Zhan, Z.H.; Li, J.; Cao, J.; Zhang, J.; Chung HS, H.; Shi, Y.H. Multiple Populations for Multiple Objectives: A Coevolutionary Technique for Solving Multiobjective Optimization Problems. IEEE Trans. Syst. Man Cybern. 2013, 43, 445–463. [Google Scholar]
  24. Wang, J.; Zhang, W.; Zhang, J. Cooperative Differential Evolution with Multiple Populations for Multiobjective Optimization. IEEE Trans. Cybern. 2015, 46, 2848–2861. [Google Scholar] [CrossRef]
  25. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evol. Comput. 2002, 10, 263–282. [Google Scholar] [CrossRef] [PubMed]
  27. Coello, C.C.; Pulido, G. Multiobjective structural optimization using a microgenetic algorithm. Struct. Multidiscip. Optim. 2005, 30, 388–403. [Google Scholar] [CrossRef]
  28. Bhagavatula, S.S.; Sanjeevi, S.G.; Kumar, D.; Yadav, C.K.; Kumar, D. Multi-objective indicator based evolutionary algorithm for portfolio optimization. In Proceedings of the 2014 IEEE International Advance Computing Conference (IACC), Gurgaon, India, 21–22 February 2014; pp. 1206–1210. [Google Scholar]
  29. Li, S.; Sun, Y. A novel numerical optimization algorithm inspired from garden balsam. Neural Comput. Appl. 2020, 32, 16783–16794. [Google Scholar] [CrossRef]
  30. Li, S.; Sun, Y. Garden balsam optimization algorithm. Concurr. Comput. Pract. Exp. 2020, 32, e5456. [Google Scholar] [CrossRef]
  31. Li, H.; Zhang, Q. Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 2009, 13, 284–302. [Google Scholar] [CrossRef]
  32. Huang, V.; Suganthan, P.; Liang, J.J. Comprehensive learning particle swarm optimizer for solving multiobjective optimization problems. Int. J. Intell. Syst. 2005, 21, 209–226. [Google Scholar] [CrossRef]
  33. Zhang, Q.; Zhou, A.; Zhao, S.; Suganthan, P.N.; Liu, W.; Tiwari, S. Multiobjective optimization test instances for the CEC 2009 special session and competition. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC) 2009, Trondheim, Norway, 18–21 May 2009; pp. 1–30. [Google Scholar]
  34. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable Multi-Objective Optimization Test Problems. In Proceedings of the 2002 Congress on Evolutionary, Honolulu, HI, USA, 12–17 May 2002; pp. 825–830. [Google Scholar]
  35. Cheng, R.; Li, M.; Tian, Y.; Zhang, X.; Yang, S.; Jin, Y.; Yao, X. A benchmark test suite for evolutionary many-objective optimization. Complex Intell. Syst. 2017, 3, 67–81. [Google Scholar] [CrossRef]
Figure 1. Framework of GBO algorithm.
Figure 2. The framework of CMGBO.
Figure 3. Non-dominated solutions of ZDT problems.
Figure 4. Non-dominated solutions of UF1: (a) MOEA/D-DE; (b) CMODE; (c) MOCLPSO; (d) CMPSO; (e) CMGBO.
Figure 5. Non-dominated solutions of UF5: (a) MOEA/D-DE; (b) CMODE; (c) MOCLPSO; (d) CMPSO; (e) CMGBO.
Figure 6. Non-dominated solutions of DTLZ1: (a) MOEA/D-DE; (b) CMODE; (c) MOCLPSO; (d) CMPSO; (e) CMGBO.
Figure 7. Non-dominated solutions of DTLZ7: (a) MOEA/D-DE; (b) CMODE; (c) MOCLPSO; (d) CMPSO; (e) CMGBO.
Figure 8. Best Pareto fronts of MaFs obtained by each algorithm.
Figure 9. IGD metric characteristics of MaFs obtained by each algorithm.
Figure 10. HV metric characteristics of MaFs obtained by each algorithm.
Table 1. Parameter settings.

| Algorithm | Parameters |
| --- | --- |
| MOEA/D-DE | N = 100, CR = 1.0, F = 0.5, h = 20, p_m = 1/D, T = 20, δ = 0.9, n_r = 2 |
| CMODE | N = 20, A = 100, CR = 1.0, F = 0.5, h = 20, p_m = 1/D, T = 20, δ = 0.9, n_r = 2 |
| MOCLPSO | N = 100, p_c = 0.1, p_m = 0.4, ω: 0.9 → 0.2, c = 2 |
| CMPSO | N = 20, A = 100, ω: 0.9 → 0.2, c_1 = c_2 = c_3 = 2.0 |
| CMGBO | N_init = 5, N_max = 20, A = 100, N_sec = 5, n = 3, F = 2, P = 2 |
Table 2. Characteristics of multi-objective test functions.

| Test Function | Objectives M | Variables D | Range | Optimum | Characteristics |
| --- | --- | --- | --- | --- | --- |
| ZDT1 | 2 | 30 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0, 2 ≤ i ≤ D | Convex |
| ZDT2 | 2 | 30 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0, 2 ≤ i ≤ D | Concave |
| ZDT3 | 2 | 10 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0, 2 ≤ i ≤ D | Convex, Disconnected |
| ZDT4 | 2 | 10 | x_1 ∈ [0, 1], x_i ∈ [−5, 5], 2 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0, 2 ≤ i ≤ D | Concave |
| ZDT6 | 2 | 10 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0, 2 ≤ i ≤ D | Concave |
| UF1 (DE) | 2 | 30 | x_1 ∈ [0, 1], x_i ∈ [−1, 1], 2 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = sin(6πx_1 + iπ/D), 2 ≤ i ≤ D | Convex, Disconnected |
| UF5 (DE) | 2 | 30 | x_1 ∈ [0, 1], x_i ∈ [−1, 1], 2 ≤ i ≤ D | (F_1, F_2) = (i/2N, 1 − i/2N), 0 ≤ i ≤ 2N, N = 10 | Scatter, Disconnected |
| DTLZ1 | 3 | 12 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Linear |
| DTLZ7 | 3 | 12 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Concave, Disconnected |
| MaF1 | 10 | 19 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Linear |
| MaF2 | 10 | 19 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Concave |
| MaF3 | 10 | 19 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Convex, Multimodal |
| MaF4 | 10 | 19 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Concave, Multimodal, Badly Scaled |
| MaF5 | 10 | 19 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Convex, Biased, Badly Scaled |
| MaF6 | 10 | 19 | x_i ∈ [0, 1], 1 ≤ i ≤ D | x_1 ∈ [0, 1], x_i = 0.5, 2 ≤ i ≤ D | Concave, Degenerate |
Table 3. Results on the ZDT problems.

| Test Function | | MOEA/D-DE | CMODE | MOCLPSO | CMPSO | CMGBO |
| --- | --- | --- | --- | --- | --- | --- |
| ZDT1 | Mean | 1.60 × 10−1 | 4.21 × 10−3 | 4.80 × 10−3 | 4.13 × 10−3 | 4.07 × 10−3 |
| | Std | 1.93 × 10−2 | 2.54 × 10−4 | 1.76 × 10−4 | 8.30 × 10−5 | 2.91 × 10−4 |
| | Rank | 5− | 3− | 4− | 2≈ | 1 |
| ZDT2 | Mean | 2.30 × 10−1 | 7.15 × 10−3 | 3.80 × 10−2 | 4.32 × 10−3 | 3.92 × 10−3 |
| | Std | 3.07 × 10−2 | 2.51 × 10−4 | 3.04 × 10−3 | 1.03 × 10−4 | 3.25 × 10−4 |
| | Rank | 5− | 3− | 4− | 2≈ | 1 |
| ZDT3 | Mean | 2.30 × 10−1 | 9.12 × 10−2 | 5.49 × 10−3 | 1.39 × 10−2 | 2.25 × 10−2 |
| | Std | 2.17 × 10−2 | 5.36 × 10−2 | 2.49 × 10−4 | 3.49 × 10−3 | 4.47 × 10−3 |
| | Rank | 5− | 4− | 1+ | 2+ | 3 |
| ZDT4 | Mean | 3.10 × 10−1 | 4.90 × 10−1 | 3.26 | 7.90 × 10−1 | 5.70 × 10−1 |
| | Std | 2.30 × 10−1 | 3.20 × 10−1 | 1.35 | 2.60 × 10−1 | 2.20 × 10−1 |
| | Rank | 1+ | 2+ | 5− | 4− | 3 |
| ZDT6 | Mean | 1.54 | 4.06 × 10−3 | 3.69 × 10−3 | 3.72 × 10−3 | 3.64 × 10−3 |
| | Std | 1.30 × 10−1 | 5.19 × 10−4 | 1.31 × 10−4 | 1.47 × 10−4 | 1.73 × 10−4 |
| | Rank | 5− | 4− | 2≈ | 3≈ | 1 |
| Final Rank | Total | 21 | 16 | 16 | 13 | 9 |
| | Final | 5 | 3 | 3 | 2 | 1 |
| Better-Worse | | −3 | −3 | −2 | 0 | — |

‘+’, ‘−’ and ‘≈’ indicate that the values achieved by a method are distinctly superior to, inferior to and analogous to those of CMGBO by the Wilcoxon rank sum test with α = 0.05, respectively. The ‘Better-Worse’ row shows the number of ‘+’ entries minus the number of ‘−’ entries; a value below 0 means the method is generally beaten by CMGBO.
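The significance tests behind these symbols can be reproduced with scipy.stats.ranksums; the following dependency-free sketch implements the same two-sided Wilcoxon rank sum test via the usual normal approximation (midranks for ties), which is adequate for samples of a few dozen independent runs:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank sum test, normal approximation with midranks for ties."""
    data = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):                    # assign average ranks to tied blocks
        j = i
        while j + 1 < len(data) and data[j + 1][0] == data[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1      # 1-based midrank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(r for r, (_, g) in zip(ranks, data) if g == 0)   # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return z, math.erfc(abs(z) / math.sqrt(2))               # (z, two-sided p)

z, p = rank_sum_test(list(range(1, 21)), list(range(30, 50)))
# every value of x is below every value of y, so p is far below alpha = 0.05
```

A ‘+’ or ‘−’ is then assigned when p < 0.05 (with the sign taken from which sample has the smaller metric values) and ‘≈’ otherwise.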
Table 4. Results on UF and DTLZ problems.

| Test Function | | MOEA/D-DE | CMODE | MOCLPSO | CMPSO | CMGBO |
| --- | --- | --- | --- | --- | --- | --- |
| UF1 | Mean | 6.95 × 10−2 | 4.68 × 10−2 | 8.52 × 10−2 | 7.57 × 10−2 | 4.79 × 10−2 |
| | Std | 3.00 × 10−2 | 8.48 × 10−3 | 2.41 × 10−2 | 2.01 × 10−2 | 1.96 × 10−2 |
| | Rank | 3− | 1≈ | 5− | 4− | 2 |
| UF5 | Mean | 4.60 × 10−1 | 2.42 × 10−1 | 3.80 × 10−1 | 2.52 × 10−1 | 2.08 × 10−1 |
| | Std | 1.06 × 10−1 | 9.58 × 10−2 | 9.25 × 10−2 | 4.04 × 10−2 | 3.24 × 10−2 |
| | Rank | 5− | 2− | 4− | 3− | 1 |
| DTLZ1 | Mean | 9.47 × 10−2 | 5.67 × 10−2 | 2.07 × 10−3 | 2.75 × 10−3 | 3.15 × 10−3 |
| | Std | 3.25 × 10−2 | 2.21 × 10−2 | 3.49 × 10−4 | 4.36 × 10−4 | 5.27 × 10−4 |
| | Rank | 5− | 4− | 1+ | 2+ | 3 |
| DTLZ7 | Mean | 4.79 × 10−2 | 6.61 × 10−3 | 4.36 × 10−2 | 2.42 × 10−3 | 3.73 × 10−3 |
| | Std | 9.51 × 10−3 | 8.92 × 10−3 | 9.12 × 10−3 | 6.05 × 10−3 | 5.92 × 10−3 |
| | Rank | 5− | 3− | 4− | 1+ | 2 |
| Final Rank | Total | 18 | 10 | 14 | 10 | 8 |
| | Final | 5 | 2 | 4 | 2 | 1 |
| Better-Worse | | −4 | −3 | −2 | 0 | — |

Symbols (‘+’, ‘−’, ‘≈’ and ‘Better-Worse’) are defined as in Table 3.
Table 5. Statistical results for IGD on MaF test functions.

| Problem | Algorithm | Best | Worst | Median | Average | Std. |
| --- | --- | --- | --- | --- | --- | --- |
| MaF1 | CMGBO | 1.08 × 10−2 | 1.72 × 10−2 | 1.29 × 10−2 | 1.32 × 10−2 | 1.20 × 10−3 |
| | CMPSO | 8.88 × 10−3 | 1.34 × 10−2 | 1.03 × 10−2 | 1.06 × 10−2 | 1.26 × 10−3 |
| | CMODE | 5.05 × 10−2 | 7.54 × 10−2 | 6.48 × 10−2 | 6.42 × 10−2 | 5.87 × 10−3 |
| | MOCLPSO | 3.32 × 10−2 | 5.62 × 10−2 | 4.48 × 10−2 | 4.52 × 10−2 | 5.49 × 10−3 |
| | MOEA/D-DE | 2.28 × 10−2 | 3.48 × 10−2 | 2.65 × 10−2 | 2.76 × 10−2 | 3.11 × 10−3 |
| MaF2 | CMGBO | 2.29 × 10−2 | 3.20 × 10−2 | 2.63 × 10−2 | 2.68 × 10−2 | 2.27 × 10−3 |
| | CMPSO | 1.34 × 10−1 | 2.43 × 10−1 | 1.68 × 10−1 | 1.69 × 10−1 | 2.49 × 10−2 |
| | CMODE | 7.59 × 10−1 | 9.96 × 10−1 | 9.02 × 10−1 | 8.87 × 10−1 | 6.45 × 10−2 |
| | MOCLPSO | 5.36 × 10−1 | 9.61 × 10−1 | 7.44 × 10−1 | 7.56 × 10−1 | 9.22 × 10−2 |
| | MOEA/D-DE | 3.11 × 10−1 | 5.39 × 10−1 | 4.32 × 10−1 | 4.27 × 10−1 | 6.21 × 10−2 |
| MaF3 | CMGBO | 4.91 × 10−2 | 4.67 × 10−3 | 1.17 × 10−3 | 1.64 × 10−3 | 1.54 × 10−3 |
| | CMPSO | 1.15 × 10−3 | 3.37 × 10−4 | 6.83 × 10−3 | 9.82 × 10−3 | 8.67 × 10−3 |
| | CMODE | 3.23 × 10−3 | 8.00 × 10−3 | 5.98 × 10−3 | 5.86 × 10−3 | 1.17 × 10−3 |
| | MOCLPSO | 1.86 × 10−2 | 4.75 × 10−3 | 2.98 × 10−3 | 2.96 × 10−3 | 1.11 × 10−3 |
| | MOEA/D-DE | 2.42 × 10−2 | 1.05 × 10−4 | 3.03 × 10−3 | 3.41 × 10−3 | 2.36 × 10−3 |
| MaF4 | CMGBO | 6.94 × 10−2 | 6.32 × 10−1 | 1.12 × 10−1 | 1.92 × 10−1 | 1.97 × 10−1 |
| | CMPSO | 1.92 × 10−1 | 6.65 × 10−1 | 3.41 × 10−1 | 3.54 × 10−1 | 1.07 × 10−1 |
| | CMODE | 3.00 × 10−1 | 1.10 × 10−2 | 8.15 × 10−1 | 7.67 × 10−1 | 2.64 × 10−1 |
| | MOCLPSO | 8.34 × 100 | 7.94 × 10−1 | 6.00 × 10−1 | 5.39 × 10−1 | 1.83 × 10−1 |
| | MOEA/D-DE | 7.07 × 100 | 6.57 × 10+1 | 2.82 × 10+1 | 2.88 × 10+1 | 1.21 × 10+1 |
| MaF5 | CMGBO | 3.64 × 10−2 | 5.48 × 10−2 | 4.23 × 10−2 | 4.33 × 10−2 | 4.17 × 10−3 |
| | CMPSO | 5.60 × 10−2 | 2.59 × 10−1 | 7.49 × 10−2 | 1.22 × 10−1 | 8.36 × 10−2 |
| | CMODE | 5.90 × 10−2 | 9.66 × 10−2 | 7.65 × 10−2 | 7.60 × 10−2 | 1.03 × 10−2 |
| | MOCLPSO | 1.25 × 10−1 | 3.58 × 10−1 | 2.51 × 10−1 | 2.47 × 10−1 | 6.77 × 10−2 |
| | MOEA/D-DE | 4.44 × 10−2 | 5.96 × 10−1 | 1.07 × 10−1 | 2.11 × 10−1 | 2.12 × 10−1 |
| MaF6 | CMGBO | 7.40 × 10−4 | 1.14 × 10−3 | 8.59 × 10−4 | 8.70 × 10−4 | 9.54 × 10−5 |
| | CMPSO | 1.45 × 10−3 | 9.78 × 10−3 | 3.89 × 10−3 | 4.22 × 10−3 | 2.18 × 10−3 |
| | CMODE | 3.23 × 10−3 | 8.18 × 10−2 | 4.25 × 10−2 | 3.80 × 10−2 | 1.83 × 10−2 |
| | MOCLPSO | 5.54 × 10−3 | 2.52 × 10−2 | 1.39 × 10−2 | 1.47 × 10−2 | 4.38 × 10−3 |
| | MOEA/D-DE | 9.76 × 10−4 | 2.69 × 10−3 | 1.23 × 10−3 | 1.30 × 10−3 | 3.21 × 10−4 |
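The IGD metric reported above is the average distance from each point of a reference Pareto front to its nearest obtained solution (lower is better); a minimal sketch:

```python
import math

def igd(reference_front, obtained):
    """Inverted Generational Distance: mean Euclidean distance from each
    reference point to its nearest obtained solution (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return sum(min(dist(r, s) for s in obtained) for r in reference_front) / len(reference_front)

# toy example on the linear front f1 + f2 = 1
ref = [(i / 10, 1 - i / 10) for i in range(11)]
value = igd(ref, [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)])
```

Because every reference point is measured, IGD penalizes both poor convergence and gaps in coverage, which is why it is paired with HV in the comparison.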
Table 6. Statistical results for HV on MaF test functions.

| Problem | Algorithm | Best | Worst | Median | Average | Std. |
| --- | --- | --- | --- | --- | --- | --- |
| MaF1 | CMGBO | 1.72 × 10−1 | 1.27 × 10−1 | 1.54 × 10−1 | 1.54 × 10−1 | 8.73 × 10−3 |
| | CMPSO | 1.97 × 10−1 | 1.80 × 10−1 | 1.90 × 10−1 | 1.89 × 10−1 | 4.11 × 10−3 |
| | CMODE | 7.53 × 10−2 | 3.15 × 10−2 | 4.59 × 10−2 | 4.73 × 10−2 | 9.39 × 10−3 |
| | MOCLPSO | 1.15 × 10−1 | 4.52 × 10−2 | 7.33 × 10−2 | 7.46 × 10−2 | 1.59 × 10−2 |
| | MOEA/D-DE | 1.43 × 10−1 | 1.15 × 10−1 | 1.33 × 10−1 | 1.31 × 10−1 | 7.37 × 10−3 |
| MaF2 | CMGBO | 1.77 × 10−1 | 1.34 × 10−1 | 1.56 × 10−1 | 1.56 × 10−1 | 8.71 × 10−3 |
| | CMPSO | 1.99 × 100 | 1.81 × 100 | 1.92 × 100 | 1.92 × 100 | 4.35 × 10−2 |
| | CMODE | 6.93 × 10−1 | 5.57 × 10−1 | 5.86 × 10−1 | 5.90 × 10−1 | 2.75 × 10−3 |
| | MOCLPSO | 9.38 × 10−1 | 5.10 × 10−1 | 6.40 × 10−1 | 6.53 × 10−1 | 1.02 × 10−1 |
| | MOEA/D-DE | 1.33 × 100 | 1.12 × 100 | 1.25 × 100 | 1.25 × 100 | 5.75 × 10−2 |
| MaF3 | CMGBO | 4.91 × 10−1 | 0 | 0 | 1.58 × 10−2 | 8.82 × 10−2 |
| | CMPSO | 0 | 0 | 0 | 0 | 0 |
| | CMODE | 0 | 0 | 0 | 0 | 0 |
| | MOCLPSO | 0 | 0 | 0 | 0 | 0 |
| | MOEA/D-DE | 0 | 0 | 0 | 0 | 0 |
| MaF4 | CMGBO | 3.94 × 10−1 | 0 | 0 | 3.64 × 10−2 | 9.08 × 10−2 |
| | CMPSO | 0 | 0 | 0 | 0 | 0 |
| | CMODE | 1.70 × 10−1 | 0 | 0 | 5.48 × 10−2 | 3.05 × 10−2 |
| | MOCLPSO | 0 | 0 | 0 | 0 | 0 |
| | MOEA/D-DE | 0 | 0 | 0 | 0 | 0 |
| MaF5 | CMGBO | 5.25 × 10−1 | 4.87 × 10−1 | 5.12 × 10−1 | 5.10 × 10−1 | 1.07 × 10−2 |
| | CMPSO | 4.87 × 10−1 | 3.07 × 10−1 | 4.43 × 10−1 | 4.16 × 10−1 | 6.26 × 10−2 |
| | CMODE | 4.47 × 10−1 | 3.89 × 10−1 | 4.26 × 10−1 | 4.22 × 10−1 | 1.60 × 10−2 |
| | MOCLPSO | 3.78 × 10−1 | 8.75 × 10−2 | 1.67 × 10−1 | 1.89 × 10−1 | 8.09 × 10−2 |
| | MOEA/D-DE | 5.19 × 10−1 | 9.09 × 10−2 | 4.36 × 10−1 | 3.65 × 10−1 | 1.53 × 10−1 |
| MaF6 | CMGBO | 1.99 × 10−1 | 1.93 × 10−1 | 1.98 × 10−1 | 1.97 × 10−1 | 1.64 × 10−3 |
| | CMPSO | 1.96 × 10−1 | 1.38 × 10−1 | 1.90 × 10−1 | 1.87 × 10−1 | 1.22 × 10−2 |
| | CMODE | 1.66 × 10−1 | 0 | 9.27 × 10−2 | 8.76 × 10−2 | 4.38 × 10−2 |
| | MOCLPSO | 1.88 × 10−1 | 1.09 × 10−1 | 1.42 × 10−1 | 1.47 × 10−1 | 1.74 × 10−2 |
| | MOEA/D-DE | 1.97 × 10−1 | 1.77 × 10−1 | 1.95 × 10−1 | 1.93 × 10−1 | 4.91 × 10−3 |
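The HV metric reported above is the objective-space volume dominated by a front and bounded by a reference point (higher is better); for two minimization objectives it reduces to a sum of rectangles, as in this sketch (the points and reference point here are illustrative, not taken from the experiments):

```python
def hypervolume_2d(front, ref):
    """2-objective hypervolume for minimization: area dominated by `front`
    and bounded by the reference point `ref` (higher is better)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):           # sweep in ascending f1
        if f2 >= prev_f2 or f1 >= ref[0]:  # dominated point or outside the reference box
            continue
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

pts = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
hv = hypervolume_2d(pts, (1.0, 1.0))       # three rectangles: 0.16 + 0.15 + 0.06
```

A front that fails to dominate the reference point at all scores zero, which explains the all-zero MaF3 and MaF4 rows in Table 6.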
Table 7. The p-values of the Wilcoxon test for the IGD metric.

| Problem | CMGBO vs. CMPSO | CMGBO vs. CMODE | CMGBO vs. MOCLPSO | CMGBO vs. MOPSO |
| --- | --- | --- | --- | --- |
| MaF1 | 1.00 × 100 | 7.01 × 10−12 | 7.01 × 10−12 | 7.01 × 10−12 |
| MaF2 | 1.00 × 100 | 7.01 × 10−12 | 7.01 × 10−12 | 7.72 × 10−12 |
| MaF3 | 5.46 × 10−9 | 3.46 × 10−4 | 3.28 × 10−4 | 6.64 × 10−4 |
| MaF4 | 3.46 × 10−4 | 3.60 × 10−9 | 6.00 × 10−8 | 7.15 × 10−3 |
| MaF5 | 7.01 × 10−12 | 7.01 × 10−12 | 7.01 × 10−12 | 3.45 × 10−11 |
| MaF6 | 7.01 × 10−12 | 7.01 × 10−12 | 7.01 × 10−12 | 8.99 × 10−11 |
Table 8. The p-values of the Wilcoxon test for the HV metric.

| Problem | CMGBO vs. CMPSO | CMGBO vs. CMODE | CMGBO vs. MOCLPSO | CMGBO vs. MOPSO |
| --- | --- | --- | --- | --- |
| MaF1 | 1.00 × 100 | 7.01 × 10−12 | 7.01 × 10−12 | 5.67 × 10−11 |
| MaF2 | 1.00 × 100 | 7.01 × 10−12 | 7.01 × 10−12 | 7.01 × 10−12 |
| MaF3 | 1.67 × 10−1 | 1.67 × 10−1 | 1.67 × 10−1 | 1.67 × 10−1 |
| MaF4 | 5.57 × 10−3 | 2.15 × 10−2 | 5.57 × 10−3 | 5.57 × 10−3 |
| MaF5 | 7.72 × 10−12 | 7.01 × 10−12 | 7.01 × 10−12 | 5.33 × 10−9 |
| MaF6 | 2.04 × 10−10 | 7.01 × 10−12 | 7.01 × 10−12 | 2.50 × 10−7 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wang, X.; Li, S. Multi-Objective Optimization Using Cooperative Garden Balsam Optimization with Multiple Populations. Appl. Sci. 2022, 12, 5524. https://doi.org/10.3390/app12115524
