Article

A Multi–Objective Gaining–Sharing Knowledge-Based Optimization Algorithm for Solving Engineering Problems

by Nour Elhouda Chalabi 1, Abdelouahab Attia 2,3, Khalid Abdulaziz Alnowibet 4, Hossam M. Zawbaa 5, Hatem Masri 6 and Ali Wagdy Mohamed 7,8,*
1 Computer Science Department, University Mohamed Boudiaf of Msila, Msila 28000, Algeria
2 LMSE Laboratory, Mohamed El Bachir El Ibrahimi University of Bordj Bou Arreridj, Bordj Bou Arreridj 34000, Algeria
3 Computer Science Department, University Mohamed El Bachir El Ibrahimi of Bordj Bou Arreridj, Bordj Bou Arreridj 34000, Algeria
4 Statistics and Operations Research Department, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
5 CeADAR Ireland’s Center for Applied AI, Technological University Dublin, D7 EWV4 Dublin, Ireland
6 Applied Science University, Sakhir 32038, Bahrain
7 Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
8 Applied Science Research Center, Applied Science Private University, Amman 11937, Jordan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3092; https://doi.org/10.3390/math11143092
Submission received: 17 April 2023 / Revised: 7 July 2023 / Accepted: 10 July 2023 / Published: 13 July 2023

Abstract: Metaheuristics have proven their effectiveness in recent years; however, robust algorithms that can solve real-world problems are always needed. In this paper, we suggest the first extended version of the recently introduced gaining–sharing knowledge optimization (GSK) algorithm, named multiobjective gaining–sharing knowledge optimization (MOGSK), to deal with multiobjective optimization problems (MOPs). MOGSK employs an external archive population to store the nondominated solutions generated thus far, with the aim of guiding the solutions during the exploration process. Furthermore, fast nondominated sorting with crowding distance was incorporated to sustain the diversity of the solutions and ensure convergence towards the Pareto optimal set, while the ϵ-dominance relation was used to update the archive population solutions. ϵ-dominance provides a good boost to diversity, coverage, and convergence overall. The validation of the proposed MOGSK was conducted using five biobjective (ZDT) and seven three-objective (DTLZ) test problems, along with the recently introduced CEC 2021 benchmark, for fifty-five test problems in total, including power electronics, process design and synthesis, mechanical design, chemical engineering, and power system optimization. The proposed MOGSK was compared with seven existing optimization algorithms: MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. The experimental findings show the good behavior of our proposed MOGSK against the comparative algorithms, particularly on real-world optimization problems.

1. Introduction

In recent years, multiobjective optimization problems (MOPs) have been given significant attention by researchers in solving real-world optimization problems [1], where multiple contradicting objectives must be dealt with. Due to their robustness, MOP methods are widely used in various fields [2]. Two major approaches, a priori and a posteriori, have been proposed to solve MOPs [3,4]. When using an a priori approach, the MOP is turned into a single-objective problem (SOP) using a weight vector that describes the importance of each objective. Such approaches produce a single Pareto solution [5]. In general, a priori approaches are not feasible, due to the need to have the decision makers provide a weight for each objective. In contrast, an a posteriori approach produces an effectively distributed set of solutions, also known as nondominated solutions, from which the decision makers can then select a fitting solution. Techniques and algorithms that can deal with MOPs have been studied and used heavily over the years. These algorithms are generally metaheuristics; they rest on two concepts, exploration and exploitation, and most of the time a metaheuristic algorithm looks for a balance between the two. Metaheuristic algorithms are categorized based on the origin of their inspiration, such as evolution, swarm intelligence, physics-based origins, and human-related origins.
Evolutionary-based techniques [6] are well known and widely used; the most representative algorithm of this category is the genetic algorithm (GA) [7]. GA is founded on biological evolution; it also has an extended version that solves MOPs, named the nondominated sorting genetic algorithm (NSGA) [8], where the aspect of nondominated sorting was first introduced. NSGAII [9] is a later version of NSGA in which the fast nondominated sorting approach (FNS) and crowding distance (CD) were presented. The Pareto archived evolution strategy (PAES) [10] introduced the use of external archives. Numerous multiobjective evolutionary algorithms have been designed and studied, including the multiobjective evolutionary algorithm based on decomposition, also known as MOEA/D [11], eMOEA [9,12], SPEA2 [13], KnEA [14], GrEA [15], and many others. These remarkable successes motivated researchers to investigate and design other multiobjective evolutionary algorithms, such as the harmony search algorithm [16], water cycle algorithm [17], ant lion optimizer [2], and discrete cooperative swarm intelligence algorithm [18]. Kumawat et al. [19] proposed a multiobjective whale optimization algorithm (MOWOA). Moreover, Mohamed Abdel-Basset and Mirjalili [20] introduced an extension of the whale optimization algorithm for solving MOPs. Mohamed Abdel-Basset et al. [21] also enhanced the equilibrium algorithm for solving multiobjective problems based on the archive approach; to maintain diversity among the nondominated solutions, Pareto optimal solutions and the crowding distance metric were used. Wang et al. [22] introduced a multiobjective evolutionary method incorporating a uniformly evolving mechanism to locate nondominated solutions that are homogeneously distributed on the true Pareto optimal curve, giving decision makers flexibility when choosing suitable solutions.
Swarm intelligence-based metaheuristic algorithms are generally based on the collective movement of agents. There has been massive work in this category: over the last three decades, it has taken the lead and continues to do so, with a huge number of such algorithms introduced and applied [23]. The best known is particle swarm optimization (PSO) [24]; several versions of this technique, in particular multiobjective particle swarm optimization (MOPSO) [25], have been employed to solve MOPs using the concept of dominance. Another well-known algorithm is ant colony optimization (ACO) [26], along with its extended version for handling MOPs, named multiobjective ant colony optimization (MOACO) [27]. There is also the ant lion optimizer (ALO) [28], with MOALO as its multiobjective extension [2]. A more recently introduced algorithm, the whale optimization algorithm (WOA) [29], imitates the social behavior of humpback whales. A guided marine predator optimization for multiobjective problems, known as GMOMPA, was introduced in a recent article [30]; it is based on the mono-objective marine predator algorithm (MPA), incorporates an external archive to keep the best solutions found so far, and uses the epsilon dominance relation to update the archive solutions, while fast nondominated sorting (FNS) and crowding distance (CD) keep a balance between exploration and exploitation.
Human-related algorithms are few in number [31], since we as humans have a limited understanding of the human brain. A human is considered a highly intelligent being, holding several critical abilities, such as understanding, reasoning, identifying, communicating, and problem solving. Therefore, drawing inspiration from such a creature to develop an algorithm sounds reasonable and might help in solving critical real-life issues. One of the oldest known human-related algorithms is the cultural algorithm [32], inspired by cultural evolution and the mechanism of cultural inheritance; another known and widely used algorithm is the harmony search algorithm [33], inspired by the improvisation of music players. The league championship algorithm [34] is based on the dynamics of competition between teams in a sports league, such as the league schedule, paired play, losses and wins, and team formation. The teaching–learning-based optimization [35] algorithm is based on the influence of a teacher on learners. Proposed by Yuhui Shi, brain storm optimization [36] is an algorithm based on the brainstorming process. Cohort intelligence [37] is an optimization algorithm inspired by the social and natural urge of people to learn from one another. The soccer league competition algorithm [38] was developed based on competitions between clubs and players in soccer leagues. The ideology algorithm [39] draws inspiration from the self-centered and competitive behavior of political party members who are driven to raise their standing. The competitiveness between volleyball teams is what inspired the volleyball premier league algorithm [40]. The life-choice-based optimizer [41] is a recent optimizer based on how people make decisions to achieve their goals and pick up knowledge from others.
The future search algorithm [42] simulates a person's search for a better life than the one he or she currently has. The forensic-based investigation optimizer [43] is inspired by police officers' methods for locating, pursuing, and investigating suspects. The dynastic optimization algorithm [44] is based on the social behavior of human dynasties. Finally, there is anticoronavirus optimization [45]. All these human-related algorithms have been used to solve different optimization problems.
Recently, a new optimization algorithm was introduced, named gaining–sharing knowledge optimization (GSK) [31]. GSK is a nature-inspired optimization algorithm based on the process of acquiring and sharing information and knowledge during a human's life span. The algorithm distinguishes two phases: the junior (child) gaining–sharing knowledge phase and the senior (adult) gaining–sharing knowledge phase, following how a junior shares and gains knowledge and how that process changes when moving to adulthood. GSK has shown great potential, and several binary GSK versions have been proposed and applied, such as the gaining–sharing knowledge-based S- and V-shaped feature selection algorithm [46] and the binary GSK for fault location in distribution networks via mutation [47], in which a new mutation-based enhanced binary gaining–sharing algorithm (IBGSK) is introduced and applied to the converted binary fault section location (FSL) problem. In another work, Agrawal et al. [48] proposed a binary GSK to solve the well-known knapsack problems. In addition to the various binary GSKs and their applications, numerous works have applied GSK. In Li et al.'s recent work [49], GSK is applied to optimize the parameters of a fault section diagnosis (FSD) structure based on Takagi–Sugeno fuzzy neural networks; this structure is designed to deal effectively with the uncertainties of protective relays and circuit breakers in power system faults. A different work, proposed by Ortega-Sánchez et al. [50], tackles the identification of apple diseases from digital images; when segmenting diseased apple images, GSK is used to minimize cross-entropy thresholding. Hassan et al. [51] proposed and used a binary version of GSK to address scheduling issues in the technical counseling process for using electricity generated by solar energy; in this work, a new application problem is introduced, named the traveling counseling problem (TCP). Xiong et al. [52] engaged GSK in parameter extraction for solar photovoltaic (PV) systems, where it is crucial to create an accurate equivalent model of the PV cell and derive its unidentified model parameters; GSK is employed to achieve this. Lastly, it is safe to say that GSK is a powerful optimization tool, and GSK also has a version with adaptive parameters [53,54] that tackles the issue of selecting and setting the appropriate parameters for GSK to give the best results. This study is further motivated by the capability of GSK and by the no free lunch (NFL) theorem [55], which states that no optimization algorithm is capable of solving all sorts of optimization problems, a fact that applies to both single- and multiobjective optimization. In addition, human-related algorithms that can solve mono-objective optimization problems are limited [56], let alone ones that can solve multiobjective problems. Furthermore, real-world optimization problems are increasing every day, and tools that can help solve them are much needed. Therefore, this study presents the first-ever extended version of the recently introduced gaining–sharing knowledge optimization (GSK) algorithm to solve MOPs, named MOGSK. To pass from single-objective to multiobjective optimization, several strategies were adapted, which can be summarized as follows:
  • We proposed an MOGSK to solve multiobjective optimization problems.
  • The external archive was incorporated to maintain the nondominated solutions discovered so far and to guide the solutions toward the optimal Pareto set later in the exploration process.
  • The ϵ-dominance relation was used to update the archive solutions; additionally, it promotes exploitation and exploration while helping to increase diversity.
  • To preserve good exploitation, diversity, and an effective solution distribution, the crowding distance and fast nondominated sorting were used.
  • The ZDT and DTLZ series test functions and the CEC 2021 RWMOPs (real-world constrained multiobjective optimization problems) were the test benchmarks utilized to validate the proposed MOGSK algorithm.
  • In order to further evaluate the proposed MOGSK, a comparison was conducted against different algorithms, such as MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA.
The rest of the paper is arranged according to Figure 1 as follows: Section 2 explains the basics of multiobjective optimization problems and the gaining–sharing knowledge optimization algorithm (GSK). Section 3 introduces the proposed MOGSK algorithm. The experimental results, the comparisons, and the discussion are presented in Section 4. Finally, Section 5 presents the conclusions and suggestions for future work directions.

2. Background

This section presents some useful information regarding multiobjective optimization problems (MOPs), including the definition of some concepts such as Pareto dominance. In addition, this section describes the standard single-objective-based gaining–sharing knowledge optimization algorithm (GSK).

2.1. Multiobjective Optimization Problems

Multiobjective optimization is a process where conflicting objective functions are optimized simultaneously. Depending on the problem treated, multiobjective optimization can be either minimizing or maximizing. In the case of minimization, an MOP is formulated [57] as follows:
$$\text{minimize: } f_m(x), \quad m = 1, 2, \ldots, M,$$
$$\text{subject to: } g_j(x) \geq 0, \quad j = 1, 2, \ldots, J,$$
$$h_k(x) = 0, \quad k = 1, 2, \ldots, K,$$
$$l_i \leq x_i \leq u_i, \quad i = 1, 2, \ldots, n$$
where x in $f_m(x)$ represents a solution of n decision variables, $x = (x_1, x_2, \ldots, x_n)$, satisfying the J inequality constraints $g_j(x)$ and the K equality constraints $h_k(x)$. M refers to the number of objective functions. The lower and upper boundaries of decision variable $x_i$ are represented by $l_i$ and $u_i$, respectively. In an MOP, comparing the generated solutions with relational arithmetic operators is challenging. Therefore, the Pareto dominance concept offers a simple approach to compare solutions, where there is a set of solutions instead of one single solution.
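As a concrete instance of this formulation, the biobjective ZDT1 benchmark (used later in the experiments) can be evaluated as in the following sketch; the 30-variable setting is an assumption for illustration only:

```python
import math

def zdt1(x):
    """Evaluate the biobjective ZDT1 benchmark (both objectives minimized).

    All decision variables are bounded by l_i = 0 and u_i = 1.
    """
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# On the true Pareto front, x_2..x_n = 0, so g = 1 and f2 = 1 - sqrt(f1):
f1, f2 = zdt1([0.25] + [0.0] * 29)
```

Here the two objectives conflict: decreasing f1 along the front necessarily increases f2, which is exactly why a set of trade-off solutions, rather than a single optimum, is sought.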

2.2. Pareto Dominance

The core concepts behind the Pareto dominance relation are as follows:
Definition 1 (Pareto dominance).
A solution u is said to dominate another solution v (denoted $u \prec v$) iff:
$$\forall i \in \{1, 2, \ldots, M\}: f_i(u) \leq f_i(v) \quad \text{and} \quad \exists j \in \{1, 2, \ldots, M\}: f_j(u) < f_j(v)$$
M represents the number of objective functions. A solution u is said to weakly dominate another solution v (denoted $u \preceq v$) iff:
$$\forall i \in \{1, 2, \ldots, M\}: f_i(u) \leq f_i(v)$$
Definition 2 (A nondominated set).
The solutions that are not dominated by any other solution form the nondominated set. Let A be a set of solutions; a solution in A belongs to the nondominated set iff it is not dominated by any other solution in A.
Definition 3 (Pareto optimal set).
This is the set of all nondominated solutions in the search space. The Pareto front refers to the image of the Pareto optimal set in the objective space.
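The dominance and nondominated-set definitions above translate directly into code; a minimal sketch, assuming minimization of objective vectors:

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(solutions):
    """Extract the nondominated set from a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

For example, among the vectors (1, 3), (2, 2), and (3, 3), the first two are mutually nondominated, while (3, 3) is dominated by (2, 2).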

2.3. Gaining–Sharing Knowledge Optimization Algorithm (GSK)

New optimization approaches are developed and introduced each year to solve real-world problems. One such recently proposed algorithm is the gaining–sharing knowledge optimization algorithm (GSK) [31]. GSK is a human-based algorithm that simulates knowledge gaining and sharing over the course of a human lifetime. GSK's main mechanism depends on two important stages: first, junior gaining–sharing knowledge; and second, senior gaining–sharing knowledge.
  • Junior gaining and sharing knowledge: in this stage, individuals try to gain information from their small circle of people, such as family, relatives, and neighbors, since they cannot yet interact on a large scale such as social media. Even with a lack of experience, juniors still have the will to share their knowledge with people they may or may not know; in addition, they do not yet have the ability to categorize people as bad or good, so they share out of curiosity and exploration.
  • Senior gaining and sharing knowledge: in this stage, an individual is more experienced and has a wider circle of people to interact with, such as social networks, friends, and colleagues. Therefore, they gain their knowledge from their entourage. In addition, in this phase, they have an advanced ability to categorize people into classes such as best, better, and worst. Therefore, they share knowledge with the most suitable individuals and improve their skills.
The mathematical formulation of the above-mentioned GSK process follows several steps:
  • Initialization of the necessary factors, such as N, the population size, which corresponds to the number of people. The starting population is initialized randomly while respecting the boundary constraints, where $x_i (i = 1, 2, \ldots, N)$ represent the individuals; each $x_i = (x_{i1}, x_{i2}, \ldots, x_{id})$, with d referring to the possible number of fields of disciplines. In other words, each dimension can be seen as a branch of knowledge allocated to an individual. The fitness evaluation of the population, noted as $f_j (j = 1, 2, \ldots, N)$, is also conducted.
  • Now, the split of dimensions between the junior and senior phases is decided through the following nonlinear equations:
    $$d(junior) = \text{Problemsize} \times \left(1 - \frac{G}{Gen}\right)^{k}$$
    $$d(senior) = \text{Problemsize} - d(junior)$$
    $d(junior)$ and $d(senior)$ are the numbers of dimensions assigned to the junior and senior phases, respectively. k refers to the knowledge rate $(k > 0)$. G is the current generation number, while $Gen$ is the maximum number of generations.
  • In this step, the junior gaining–sharing knowledge stage begins. In this stage, each individual tries to gain knowledge from their small network; at the same time, they try to share their knowledge. The people that they interact with can be from their network or not, since in this phase, they are driven by curiosity.
  • Now the update of the individuals in the current stage is conducted following the junior scheme:
    Based on the values of the objective function, the individuals are sorted in ascending order.
    For each individual, the closest best and worst are selected to gain knowledge. In addition, a random individual is selected to share knowledge.
    This step's process is shown in Algorithm 1. $K_f$ is the knowledge factor, where $K_f > 0$; this parameter controls the amount of knowledge (gained/shared) that is going to be added to the current individual. $k_r$ is the knowledge ratio, where $k_r \in [0, 1]$; this parameter controls the amount of knowledge (gained/shared) that is going to be transferred to another individual.
    Algorithm 1 Phase 1: junior gaining and sharing knowledge  [31].
  • This step is the senior gaining–sharing knowledge phase. This stage takes into account a person's capacity for classification (such as good and bad). The scheme in this stage is as follows:
    First, the values of the objective function are used to sort the individuals in ascending order.
    Then, those individuals are split into three groups: best, middle, and worst, i.e., $Best = 100p\% \, (x_{best})$, $Middle = N - (2 \times 100p\%) \, (x_{middle})$, and $Worst = 100p\% \, (x_{worst})$.
    Now, two vectors are chosen from the best and worst groups for gaining $(100p\%)$, while a third vector from the middle, $N - (2 \times 100p\%)$, is chosen for sharing. Here, p indicates the percentage of best and worst individuals, where $p \in [0, 1]$. This step's process is shown in Algorithm 2.
    Algorithm 2 Phase 2: senior gaining and sharing knowledge  [31].
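Since Algorithms 1 and 2 are given as figures, the dimension split and the junior-phase update can only be sketched from the description above. The neighbor selection, the default $k_f$ and $k_r$ values, and the exact update rule below are assumptions modeled on the textual description, not a verbatim reproduction of the original algorithm:

```python
import random

def phase_dimensions(problem_size, G, Gen, k=10.0):
    """Number of dimensions handled by the junior and senior schemes.

    As the generation counter G approaches the maximum Gen, fewer
    dimensions follow the junior scheme and more follow the senior one.
    """
    d_junior = round(problem_size * (1.0 - G / Gen) ** k)
    return d_junior, problem_size - d_junior

def junior_update(pop, fitness, kf=0.5, kr=0.9):
    """Rough sketch of the junior gaining-sharing scheme (minimization).

    pop must be sorted by ascending fitness. Each individual gains from
    its nearest better/worse neighbors and shares with a random partner;
    kf scales the transferred knowledge, kr is the per-dimension
    transfer probability.
    """
    n = len(pop)
    new_pop = []
    for i, x in enumerate(pop):
        better = pop[max(i - 1, 0)]      # nearest better neighbor
        worse = pop[min(i + 1, n - 1)]   # nearest worse neighbor
        r = random.randrange(n)          # random partner for sharing
        new_x = list(x)
        for d in range(len(x)):
            if random.random() < kr:
                if fitness[r] < fitness[i]:   # partner is better: move toward it
                    new_x[d] = x[d] + kf * ((better[d] - worse[d]) + (pop[r][d] - x[d]))
                else:                          # partner is worse: move away from it
                    new_x[d] = x[d] + kf * ((better[d] - worse[d]) + (x[d] - pop[r][d]))
        new_pop.append(new_x)
    return new_pop
```

At the first generation, `phase_dimensions(30, 0, 100)` assigns all 30 dimensions to the junior scheme; at the last generation, all 30 fall to the senior scheme.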

3. Multiobjective Gaining–Sharing Knowledge Optimization Algorithm (MOGSK)

This section describes the proposed MOGSK and its mathematical formulation. In order to pass from single-objective to multiobjective optimization, several components are introduced, including a separate repository (archive) to store the nondominated solutions uncovered thus far. Those nondominated solutions are obtained using the Pareto dominance relation in addition to fast nondominated sorting and the crowding distance, which help with diversity and improve exploitation and exploration, while the ϵ-dominance relation is incorporated to update the archive solutions. To update the archive, the solutions of the current population and the previous archive solutions are used. Finally, the archive is used to guide the population toward the Pareto optimal set. To sum up, the techniques used in the proposed MOGSK are:
  • Fast nondominated sorting (FNS), in order to obtain the nondominated solutions.
  • Crowding distance, to ensure the distribution and convergence of the solutions as well as to improve diversity.
  • The archive, to preserve the best solutions so far and to act as a guide to the individual towards the Pareto optimal set.
  • The epsilon dominance relation, which is employed at each iteration to update the archive's solutions.
The introduced techniques have tremendous advantages that help make MOGSK a good optimization algorithm. First, the archive acts as a guide toward the Pareto optimal set, while preserving diversity and helping maintain the balance between exploration and exploitation. The flexibility and diversity provided by the ϵ-dominance relation help include a variety of solutions, while crowding distance and fast nondominated sorting (FNS) boost coverage and accelerate convergence toward the Pareto optimal set. Lastly, the new population, which is a combination of the current population and the previous archive, contributes to both exploitation and exploration.
After the initialization of the necessary parameters, which are shown in Table 1, comes the initialization of the first population, followed by the assessment of the fitness value of each individual. The elitist fast nondominated sorting (FNS) [9] is applied to the first population; FNS obtains the nondominated solutions and sorts them into different fronts. Following that, the crowding distance (CD) [9] is applied. Once finished, the archive is initialized with the nondominated solutions obtained. The MOGSK algorithm then repeats, for a predetermined number of iterations, two key steps: updating the population (gaining/sharing) and updating the archive.

3.1. Update Population (Gaining/Sharing)

The population's gaining/sharing must be updated at each iteration to move towards the Pareto optimal set. In addition, this population plays a huge role, as it is used in the process of updating the archive in a later step. However, the update differs from the one used in the GSK algorithm, which selects an individual to gain knowledge from and another to share with by arranging the individuals according to their objective values. In our case, in order to obtain the set of solutions used in the gaining and sharing process, we first combine the current population's solutions and the previous archive solutions to preserve diversity, followed by the application of FNS and the crowding distance. These two techniques help boost exploitation and exploration, and a new set of solutions is formed ($Newsol$). This set is then used in the gaining and sharing process. To summarize, this phase relies on three key components: the $Newsol$ solutions, fast nondominated sorting (FNS), and the crowding distance.

3.1.1. Newsol Solutions

As stated previously, in order to update the population (gaining/sharing), a new set of solutions is used. This set is obtained by combining the current population's solutions $Population_{iteration}$ and the previous archive solutions $Archive_{iteration-1}$; by doing so, we can ensure diversity and convergence, as well as maintain good exploration and exploitation.
$$Newsol = Population_{iteration} \cup Archive_{iteration-1}$$

3.1.2. Fast Nondominated Sorting (FNS)

FNS [9] is employed on $Newsol$; this technique is used because a simple comparison (using relational operators) to find the best solution among the obtained ones is not possible, due to the conflicting objectives. FNS picks each solution from the population and evaluates its dominance over the remaining solutions. This procedure generates a first front; to generate the next front, the first front's solutions are excluded from the population, and the procedure recurs until all the solutions are ranked and sorted according to their respective fronts, as illustrated in Figure 2.

3.1.3. Crowding Distance

Once the solutions are sorted using FNS, the crowding distance (CD) [9] is employed. CD is mainly used to maintain the distribution and diversity of the solutions. CD estimates the density around a particular solution: it is computed from the distance between the two nearest solutions forming the cuboid around a given solution, as shown in Figure 3. The mathematical formulation of CD is noted as:
$$CD_{f_j}^{i} = \frac{f_j(i+1) - f_j(i-1)}{f_j^{max} - f_j^{min}}$$
where $f_j(i+1)$ and $f_j(i-1)$ are the objective values of the neighboring solutions of solution i, and $f_j^{max}$ and $f_j^{min}$ are the maximum and minimum values of objective function j. The CD of all the solutions for all the objective functions is calculated; then, the solutions are arranged in ascending order following the obtained CD values.
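A minimal sketch of this computation for one front, with boundary solutions assigned an infinite distance as in NSGAII:

```python
def crowding_distance(objs):
    """Crowding distance of each objective vector within a single front."""
    n = len(objs)
    m = len(objs[0])
    cd = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objs[i][j])
        fmin, fmax = objs[order[0]][j], objs[order[-1]][j]
        cd[order[0]] = cd[order[-1]] = float("inf")  # boundary solutions
        if fmax == fmin:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            # distance between the two neighbors, normalized by the range
            cd[i] += (objs[order[pos + 1]][j] - objs[order[pos - 1]][j]) / (fmax - fmin)
    return cd
```

Solutions in sparse regions of the front receive larger distances and are therefore favored, which is how CD preserves the spread of the population.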

3.2. Update Archive

Once the solutions are sorted, then comes the third component of MOGSK: the archive. The archive's main roles are as follows: first, to keep the best solutions found so far; and second, to help in the update process of the new population. Here, we have two cases, as shown in the flowchart in Figure 4. The first case is the first generation: the archive is initialized with the solutions of the first front found by FNS (on the first population), since the first front holds the best solutions. The second case is any later generation: here, the current solutions and the archive of the previous generation are combined ($Newsol$); then, the archive solutions are updated from this combined set by applying the epsilon dominance relation. The main purpose of combining these two sets is to preserve the best solutions of the previous archive while including the best solutions of the new population, since using only the current solutions risks losing good solutions from the previous generation. In addition, the archive solutions participate in orienting the population toward the Pareto optimal set. The archive size is managed so that only the first N solutions are kept.

ϵ-Dominance

The epsilon dominance relation (ϵ-dominance) is a well-known and widely used relaxed dominance relation for improving the efficiency of multiobjective algorithms. Let ϵ be a relaxation vector, where $\epsilon \in \mathbb{R}^m$, with m the number of objective functions and $\epsilon_i > 0$. A solution a is said to ϵ-dominate another solution b, noted as $a \prec_\epsilon b$, when the condition $f_i(a) - \epsilon_i \leq f_i(b)$ is satisfied for all the objective functions $f_i$. The mechanism of this concept is essentially box-level dominance in addition to regular dominance. First, the objective space is divided into hyperboxes (hypercubes). Each box is identified by a unique vector $B = (B_1, B_2, \ldots, B_M)$ assigned to each solution x, where M represents the number of objectives. The vector B is computed as follows:
$$B_i(f) = \left\lfloor \frac{\log(f_i)}{\log(1 + \epsilon)} \right\rfloor$$
where $\lfloor \cdot \rfloor$ denotes the floor function, $f_i$ is the i-th objective value of the solution, and ϵ is the permissible error. Figure 5 illustrates ϵ-dominance for the solution x.
ϵ-dominance is a practical technique that helps maintain diversity and convergence toward the Pareto optimal set. The application of this dominance relation is also simple, and it gives the decision maker control over the granularity of the achieved solution set. As shown in Algorithm 3, the mechanism of ϵ-dominance is simple. The update procedure compares one solution from the $Newsol$ population against all the archive solutions. First, the B vector values are computed for one solution s from $Newsol$ and for all the solutions of the archive; then, a test determines whether s will be part of the archive. Two cases are distinguished. First, if the identification vector $B_s$ of s dominates the identification vector $B_x$ of some archive solution x, then s is stored in the archive and x is removed; conversely, if $B_x$ dominates $B_s$, then s is not added. Second, if $B_s$ neither dominates nor is dominated, the regular dominance mechanism is used: if s dominates x, then s is added to the archive.
Algorithm 3 Updating archive solutions using ϵ -dominance.
Mathematics 11 03092 i003
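To make the box-level update concrete, the archive procedure of Algorithm 3 can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; it assumes minimization and strictly positive objective values, since the box index takes a logarithm.

```python
import math

def box_vector(f, eps):
    """Box identification vector B for an objective vector f (minimization).
    Assumes every f_i > 0; eps holds the per-objective permissible errors."""
    return tuple(math.floor(math.log(fi) / math.log(1.0 + e))
                 for fi, e in zip(f, eps))

def dominates(a, b):
    """Regular Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, s, eps):
    """Update the archive (a list of objective vectors) with candidate s,
    following the box-level epsilon-dominance rules described above."""
    bs = box_vector(s, eps)
    for x in list(archive):
        bx = box_vector(x, eps)
        if dominates(bx, bs):
            return archive          # s's box is dominated: reject s
        if bx == bs:
            if dominates(s, x):     # same box: regular dominance decides
                archive.remove(x)
            else:
                return archive      # the incumbent is kept on ties
        elif dominates(bs, bx):
            archive.remove(x)       # s's box dominates x's box: drop x
    archive.append(s)
    return archive
```

Because acceptance is decided at box level first, at most one solution survives per hyperbox, which is what bounds the archive size and preserves spread.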
Finally, MOGSK updates the senior/junior population status similarly to GSK, as shown in Algorithm 4; these procedures are repeated until the stopping criterion is met, which in our case is the maximum number of iterations. The whole process of the proposed MOGSK is shown in the flowchart of Figure 4. The complexity of MOGSK matches that of NSGAII, since its main loop relies on fast nondominated sorting and crowding distance; the complexity is therefore O ( M N 2 ), where N is the population size and M the number of objectives.
Algorithm 4 Multiobjective gaining–sharing knowledge optimization algorithm (MOGSK).
Mathematics 11 03092 i004
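The crowding-distance step that, together with fast nondominated sorting, yields the O ( M N 2 ) bound can be sketched as follows. This is a minimal sketch of the standard NSGA-II procedure, not the paper's code.

```python
def crowding_distance(front):
    """Crowding distance (NSGA-II style) for a list of objective vectors.

    Boundary solutions on each objective get an infinite distance so they
    are always retained; interior solutions accumulate the normalized
    side lengths of the cuboid formed by their nearest neighbors."""
    n = len(front)
    if n < 2:
        return [float("inf")] * n
    dist = [0.0] * n
    m = len(front[0])
    for k in range(m):                               # one pass per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue                                 # degenerate objective
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (fmax - fmin)
    return dist
```

Each objective requires one O(N log N) sort and one O(N) sweep, so crowding distance is cheaper than the O ( M N 2 ) nondominated sorting that dominates the overall cost.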

4. Results and Discussion

4.1. Experiments Setup

To validate the performance of the designed MOGSK, a series of experiments was conducted. The first experiment was carried out using the ZDT [57] and DTLZ [58] test functions, which together include 12 distinct test benchmarks. In the second experiment, to further assess the quality of the proposed MOGSK, the recently introduced CEC 2021 real-world constrained multiobjective optimization problems (RWMOPs) [59] were employed. A comparison was conducted between MOGSK, MOEAD [11], eMOEA [12], MOPSO [25], NSGAII [9], SPEA2 [13], KnEA [14], and GrEA [15] using the statistical results obtained on the given test functions. The experiments conducted are listed below:
  • Experiment I: ZDT, DTLZ test functions for MOPs.
  • Experiment II: CEC 2021 test problems.
The number of runs was set to 30 independent runs, and the number of function evaluations was set to 6000. As for the metrics used to compare MOGSK with the other algorithms, the inverted generational distance (IGD) [4] and the hypervolume indicator (HV) [60] were used. IGD assesses the quality of the approximation of the Pareto front achieved by a multiobjective optimization algorithm (lower is better). IGD is formulated as:
$$IGD = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n}$$
where n is the number of solutions in the true Pareto set, and d_i is the Euclidean distance between the i-th true Pareto point and the closest obtained solution. HV assesses the outcome of an optimization algorithm by simultaneously taking into account proximity to the Pareto front, diversity, and spread. HV is also known as the S measure. HV is the volume of the region of objective space dominated by the obtained set S and bounded by a reference point r ∈ R^m, with z ⪯ r for all z ∈ S. HV is noted as:
$$HV(S, r) = \lambda_m\left(\bigcup_{z \in S} [z; r]\right)$$
where λ m refers to the m-dimensional Lebesgue measure.
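As an illustration, both metrics can be sketched for small fronts as follows. This assumes minimization; the HV sketch covers only the biobjective case, and the exact normalization used in the experiments may differ.

```python
import math

def igd(true_front, approx):
    """Inverted generational distance: for each reference (true Pareto)
    point, take the Euclidean distance to its nearest obtained solution."""
    dists = [min(math.dist(t, a) for a in approx) for t in true_front]
    return math.sqrt(sum(d * d for d in dists)) / len(true_front)

def hv_2d(points, ref):
    """Hypervolume for a biobjective minimization problem: the area
    dominated by `points` and bounded above by the reference point."""
    # Keep only points that dominate the reference, sweep left to right.
    pts = sorted(p for p in points if p[0] <= ref[0] and p[1] <= ref[1])
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # nondominated in the sweep
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area
```

For example, the two points (1, 2) and (2, 1) with reference point (3, 3) dominate two overlapping rectangles whose union has area 3.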
Note that the platform PlatEMO [61] is used in Experiment II.

4.2. Experiment I

ZDT and DTLZ benchmark characteristics are listed in Table 2.

4.2.1. ZDT Test Results

Table 3 describes the statistical results for the IGD metric, Table 4 reports the statistical results for the HV metric, and Figure 6 illustrates the obtained results. Table 3 displays the best, worst, average, median, and std results of the IGD metric for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. MOGSK gave the best results for the test functions ZDT1, ZDT2, and ZDT6, where it surpassed all the comparative algorithms. As for ZDT3, the best result was obtained by SPEA2, followed by NSGAII and then KnEA, with MOGSK in fourth place, followed by the rest of the comparative methods. The ZDT4 results show that the best IGD values were obtained by SPEA2, then NSGAII, followed by MOEAD, while MOGSK came in fourth place. Table 4, on the other hand, displays the best, worst, average, median, and std results of the HV metric. For all the test functions ZDT_i (i = 1, ..., 4) and ZDT6, MOGSK was able to surpass all the comparative algorithms, including MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. In addition, Figure 6 supports the quantitative and qualitative results presented previously. MOGSK shows good convergence and distribution for ZDT1, ZDT2, and ZDT6; as for ZDT3, MOGSK converged towards three of the five discontinuous fronts. On ZDT4, MOGSK was stuck in a local optimum. Overall, MOGSK behaved well in this experiment on the ZDT test functions, which shows its potential as a useful optimization tool.

4.2.2. DTLZ Test Results

Table 5 reports the statistical results for the IGD metric, Table 6 reports the statistical outcomes for the HV metric, and Figure 7 illustrates the obtained results. Table 5 displays five measures, namely the best, worst, average, median, and std of the IGD metric, for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. MOGSK showed good results in this test function series, where it was capable of topping the comparative algorithms in six out of the seven test functions, namely DTLZ1, DTLZ2, DTLZ4, DTLZ5, DTLZ6, and DTLZ7, with a particularly large gap in DTLZ5 and DTLZ6; as for DTLZ3, the best results belong to SPEA2, whereas MOGSK came in last. Table 6 presents the same measures (best, worst, average, median, and std) for the HV metric on the DTLZ test functions. MOGSK showed good performance here as well, where it was capable of surpassing the comparative algorithms on DTLZ2, DTLZ4, DTLZ5, DTLZ6, and DTLZ7; as for DTLZ1, the best result was reported for SPEA2, followed by MOEAD, NSGAII, KnEA, GrEA, and eMOEA, then MOGSK, then MOPSO. The DTLZ3 test function's best results were yielded by SPEA2, then GrEA, MOEAD, eMOEA, NSGAII, and KnEA, then MOGSK, followed by MOPSO. These findings are supported by Figure 7; it is apparent that MOGSK has good solution distribution and convergence for five of the seven test problems. Overall, MOGSK achieved excellent performance.

4.3. Experiment II

During this experiment, the CEC 2021 RWMOPs test problems were used. The RWMOPs comprise fifty different problems, including mechanical design problems (RWMOP1 to RWMOP21); chemical engineering problems (RWMOP22 to RWMOP24); process, design, and synthesis problems (RWMOP25 to RWMOP29); power electronics problems (RWMOP30 to RWMOP35); and power system optimization problems (RWMOP36 to RWMOP50). Table 7 displays the fifty problems.
For this experiment, the results for the HV metric are reported in Table 8, Table 9, Table 10, Table 11 and Table 12, respectively, as well as in Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.
Table 8 displays the best, worst, average, median, and std results of the HV metric for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA on the mechanical design problems (21 problems in total), while Figure 8 shows the HV curves. For this problem series, MOGSK showed good performance, where it was capable of topping the comparative algorithms on eleven out of the twenty-one problems, namely RWMOP1, RWMOP4, RWMOP5, RWMOP6, RWMOP8, RWMOP12, RWMOP13, RWMOP14, RWMOP15, RWMOP18, and RWMOP19. As for RWMOP2, the best result was reported jointly for MOEAD, eMOEA, MOPSO, SPEA2, and GrEA, while MOGSK came in second place by a close margin, followed by KnEA and NSGAII. For RWMOP3, the best result was yielded by NSGAII, then KnEA, while MOGSK came in third place, followed by the remainder of the comparative algorithms. On RWMOP7 and RWMOP10, all the algorithms, including MOGSK, gave almost the same results. For RWMOP9, the best results were within a very close range for MOGSK, MOPSO, NSGAII, SPEA2, and GrEA, while MOEAD, eMOEA, and KnEA did not perform well on this problem. On the RWMOP11 problem, the best results were given by eMOEA, followed by MOGSK, NSGAII, KnEA, and GrEA, with a small gap in their results, while MOEAD, MOPSO, and SPEA2 came in last. RWMOP16 showed almost the same behavior, where the best results lay in a quite close range between MOGSK, MOPSO, NSGAII, SPEA2, and KnEA, followed by MOEAD, eMOEA, and GrEA. On RWMOP17, the best results, by a large gap, were reported for SPEA2, followed by MOPSO, eMOEA, and GrEA; MOGSK was therefore in fifth place, where it was able to top MOEAD, NSGAII, and KnEA. RWMOP20's best results were reported equally for MOEAD, eMOEA, MOPSO, SPEA2, and GrEA, after which came MOGSK, followed by NSGAII and KnEA.
For the last test function in the mechanical design problems, RWMOP21, the results were close overall: the best results were reported for SPEA2, NSGAII, MOPSO, GrEA, eMOEA, and MOGSK, outperforming MOEAD and KnEA. Supporting the obtained results, Figure 8 shows the HV curves for the different test problems. Overall, MOGSK performed well on the mechanical design problems, where it gave the best results for most test problems and close-to-best results for the others.
Table 9 displays the best, worst, average, median, and std results of the HV metric for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA on the chemical engineering problems (three problems in total), while Figure 9 shows the HV curves. On these test problems, MOGSK showed good behavior. For the RWMOP22 problem, the best results were given by KnEA, while all the remaining algorithms, including MOGSK, yielded the same results. A similar pattern was detected in the results of RWMOP23, where the best results were given by MOGSK, MOEAD, eMOEA, MOPSO, SPEA2, and GrEA, while NSGAII and KnEA came in last. In RWMOP24, the best result was yielded by NSGAII with a large gap, while the rest of the algorithms, including MOGSK, gave the same result. Overall, on the chemical engineering problems, MOGSK gave good and consistent results, as displayed in the HV curves in Figure 9.
Table 10 displays the best, worst, average, median, and std results of the HV metric for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA on the process, design, and synthesis problems (five problems in total), while Figure 10 shows the HV curves. In this test problem series, MOGSK showed good performance, where it was capable of topping the comparative algorithms on four out of the five test problems, namely RWMOP25, RWMOP26, RWMOP28, and RWMOP29; these results are supported by the HV curves in Figure 10, where steady HV values were recorded. For RWMOP27, the best results were yielded by eMOEA with a considerable gap, followed by KnEA, NSGAII, SPEA2, MOPSO, MOEAD, GrEA, and then MOGSK.
Table 11 displays the best, worst, average, median, and std results of the HV metric for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA on the power electronics problems (six problems in total), while Figure 11 shows the HV curves for the respective test problems. For RWMOP30, the best result was reported for GrEA, followed by MOEAD, SPEA2, eMOEA, NSGAII, and KnEA, then MOGSK and MOPSO. For RWMOP31, the best result was given by MOEAD, followed by GrEA, eMOEA, SPEA2, and MOPSO, then MOGSK in fifth place, followed by KnEA and NSGAII. From RWMOP32 to RWMOP35, the best results were reported for MOEAD, while MOGSK overall was only able to outperform three or four of the comparative algorithms, with close-ranged results. Figure 11, showing the HV curves for this problem series, supports these findings.
Table 12 displays the best, worst, average, median, and std results of the HV metric for MOGSK, MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA on the power system optimization problems (fifteen problems in total), while Figure 12 shows the HV curves for the respective test problems. For the problems RWMOP36 to RWMOP39, all the algorithms, including MOGSK, gave the same results. RWMOP40's best result belonged to MOPSO, followed by MOGSK, outperforming the other comparative algorithms. The best results for RWMOP41 belonged to MOPSO and MOGSK, within a close range, overcoming the remaining comparative algorithms. As for RWMOP42, the top result belonged to MOEAD, followed by GrEA and then eMOEA and SPEA2, with close results, while MOGSK, MOPSO, NSGAII, and KnEA came next, with identical results. The obtained results for RWMOP43 show that the best result was yielded jointly by MOEAD, SPEA2, and GrEA, followed closely by MOGSK. For RWMOP44 and RWMOP45, MOGSK yielded the best results, surpassing all the comparative algorithms. The RWMOP46 results showed the same pattern, where the best results, by a small margin, were given by MOEAD and MOGSK, overtaking the remaining comparative algorithms. RWMOP47's best results were obtained by MOEAD, eMOEA, MOPSO, SPEA2, GrEA, and MOGSK, with NSGAII and KnEA coming last; the same pattern was detected for RWMOP48. RWMOP49's results show that the best was obtained by SPEA2 and MOGSK, outperforming the remaining algorithms. Lastly, for RWMOP50, the best result was shared by MOGSK, eMOEA, MOEAD, MOPSO, SPEA2, and GrEA, followed in second place by NSGAII and KnEA. Figure 12 supports these findings; overall, MOGSK performed well in this test problem series, where it either yielded the best results or came close.
Lastly, by observing the results of both experiments, and especially the second one, we can confirm that MOGSK is a good optimization tool, as it was able to give good results on real-world problems, which reinforces the standing of MOGSK among optimization algorithms, and more specifically, among evolutionary-based algorithms.

Limitation

In this work, the proposed MOGSK was tested on two different benchmarks: the first is the ZDT and DTLZ series of test functions; the second is the CEC 2021 real-world constrained multiobjective optimization problems. MOGSK performed well on most of the test problems; however, as with any optimization algorithm, it was not able to yield good results on all of them. As seen in the ZDT series, on ZDT4 MOGSK was stuck in a local optimum: ZDT4 contains a multitude of local Pareto fronts, which makes it difficult for algorithms to explore and converge to the true global front, and its deceptive nature makes a local optimum more attractive. In addition, for ZDT3, even though MOGSK was able to converge, it was not able to cover the whole front, a form of premature convergence; the discontinuous front of ZDT3 requires an optimization algorithm to keep a delicate balance between exploration and exploitation, conducting extensive exploration to discover the disconnected regions while exploiting known solutions to improve convergence. The HV results nevertheless show that the algorithm is competitive with the comparative algorithms, which leaves room for improvement. For the real-world CEC 2021 test problems, with their different variations, MOGSK in general did well; in cases where it did not give the best results, it still yielded good ones, as in the chemical engineering problems. MOGSK performed well on the mechanical design problems, where it gave the best results for eleven of the twenty-one test problems. Similarly, for the process, design, and synthesis problems, MOGSK gave the best results on four of the five problems (80%) in comparison with the other algorithms. However, the power electronics and power system optimization problems were quite challenging for MOGSK.
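For concreteness, ZDT4's difficulty can be seen directly in its standard ten-variable formulation (Zitzler et al. [57]); the sketch below is illustrative, not the benchmark code used in the experiments. The Rastrigin-like g term has many local minima, and only its global minimum (g = 1, reached when x_2 = ... = x_10 = 0) yields the true Pareto front, so an algorithm that cannot escape a local basin stalls on a local front.

```python
import math

def zdt4(x):
    """ZDT4 biobjective test function (minimization).

    x[0] in [0, 1], x[1:] in [-5, 5]. The cosine term makes g highly
    multimodal, creating the many local Pareto fronts discussed above."""
    f1 = x[0]
    g = 1.0 + 10.0 * (len(x) - 1) + sum(
        xi * xi - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

On the global front (all x_i = 0 for i ≥ 2), g collapses to 1 and f2 = 1 − √f1, which is the convex true front; any nonzero x_i inflates g and pushes the attained front away from it.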
Power electronics problems involve the conversion, control, and conditioning of electric power; their difficulty is rather high owing to several factors, one of which is nonlinearity, which makes the optimization problems highly nonconvex. Power system optimization, on the other hand, involves the generation, transmission, distribution, and utilization of electrical energy; the challenge of these problems lies in the high number of equality constraints. For these two test series, MOGSK therefore did not show good results. All in all, MOGSK uses a set of parameters, and this set can greatly affect the results and how well the algorithm operates. One solution that we propose to further improve the results is adaptive parameters, which have already been proposed for the single-objective version, but not yet for multiobjective optimization.

5. Summary and Future Work

This study presented MOGSK, the first extended version of the recently introduced gaining–sharing knowledge optimization algorithm for solving multiobjective optimization problems. To ensure the passage from a single-objective optimization algorithm to a multiobjective one, several strategies were adopted. Firstly, fast nondominated sorting (FNS) and crowding distance (CD) were employed to obtain the nondominated solutions and to preserve the distribution and diversity of the solutions along the exploitation process. Secondly, an external archive was used to safeguard the best solutions found so far and to help the update process by guiding the solutions toward the Pareto optimal set. Lastly, the archive solutions were updated using the epsilon dominance relation, which helps boost convergence toward the Pareto optimal front. Our proposed MOGSK was evaluated using the biobjective test functions (ZDT), which include five problems, and the three-objective test functions (DTLZ), which include seven. In addition, the CEC 2021 RWMOPs were used; this collection accommodates a variety of problems, such as chemical engineering, power electronics, mechanical design, and power system optimization problems, fifty problems in total. The MOGSK results were compared with well-known algorithms, including MOEAD, eMOEA, MOPSO, NSGAII, SPEA2, KnEA, and GrEA. The obtained results proved that MOGSK is a useful optimization tool. Future work will focus on improving the proposed algorithm so that it can solve more optimization problems and on exploring its propensity for resolving further real-world issues.

Author Contributions

Conceptualization, N.E.C. and A.A.; Methodology, N.E.C.; Validation, H.M.; Formal analysis, H.M.; Data curation, K.A.A. and H.M.Z.; Writing—original draft, N.E.C., A.A., K.A.A., H.M.Z., H.M. and A.W.M.; Writing—review & editing, N.E.C., A.A., K.A.A., H.M.Z., H.M. and A.W.M.; Supervision, A.W.M.; Funding acquisition, K.A.A. and A.W.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research is funded by the Researchers Supporting Program at King Saud University, (RSP2023R305).

Data Availability Statement

Not applicable.

Acknowledgments

The authors present their appreciation to King Saud University for funding the publication of this research through the Researchers Supporting Program (RSP2023R305), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gunantara, N. A review of multi-objective optimization: Methods and its applications. Cogent Eng. 2018, 5, 1502242. [Google Scholar] [CrossRef]
  2. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2017, 46, 79–95. [Google Scholar] [CrossRef]
  3. Branke, J.; Deb, K.; Dierolf, H.; Osswald, M. Finding knees in multi-objective optimization. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Birmingham, UK, 18–22 September 2004; pp. 722–731. [Google Scholar]
  4. Veldhuizen, D.; Lamont, G. Multiobjective evolutionary algorithms: Analyzing the state-of-the-art. Evol. Comput. 2000, 8, 125–147. [Google Scholar] [CrossRef] [PubMed]
  5. Kim, I.; Weck, O. Adaptive weighted-sum method for bi-objective optimization: Pareto front generation. Struct. Multidiscip. Optim. 2005, 29, 149–158. [Google Scholar] [CrossRef]
  6. Nedjah, N.; Mourelle, L.M. Evolutionary multi–objective optimisation: A survey. Int. J. Bio-Inspired Comput. 2015, 7, 1–25. [Google Scholar] [CrossRef]
  7. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Available online: https://ieeexplore.ieee.org/book/6267401 (accessed on 15 June 2023).
  8. Srinivas, N.; Deb, K. Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol. Comput. 1994, 2, 221–248. [Google Scholar] [CrossRef]
  9. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  10. Knowles, J.; Corne, D. M-PAES: A memetic algorithm for multiobjective optimization. Congr. Evol. Comput. 2000, 1, 325–332. [Google Scholar]
  11. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  12. Deb, K.; Mohan, M.; Mishra, S. Evaluating the ϵ-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evol. Comput. 2005, 13, 501–525. [Google Scholar] [CrossRef]
  13. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Evol. Methods Des. Optim. Control. Appl. Ind. Probl. 2001, 103, 95–100. [Google Scholar] [CrossRef]
  14. Zhang, X.; Tian, Y.; Jin, Y. A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2014, 19, 761–776. [Google Scholar] [CrossRef]
  15. Yang, S.; Li, M.; Liu, X.; Zheng, J. A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  16. Salcedo-Sanz, S.; Manjarres, D.; Pastor-Sánchez, Á.; Ser, J.; Portilla-Figueras, J.; Gil-Lopez, S. One-way urban traffic reconfiguration using a multi-objective harmony search approach. Expert Syst. Appl. 2013, 40, 3341–3350. [Google Scholar] [CrossRef]
  17. Sadollah, A.; Eskandar, H.; Bahreininejad, A.; Kim, J. Water cycle algorithm for solving multi-objective optimization problems. Soft Comput. 2015, 19, 2587–2603. [Google Scholar] [CrossRef]
  18. Zouache, D.; Moussaoui, A.; Abdelaziz, F. A cooperative swarm intelligence algorithm for multi-objective discrete optimization with application to the knapsack problem. Eur. J. Oper. Res. 2018, 264, 74–88. [Google Scholar] [CrossRef]
  19. Kumawat, I.; Nanda, S.; Maddila, R. Multi-objective whale optimization. In Proceedings of the Tencon 2017–2017 IEEE Region 10 Conference, Penang, Malaysia, 5–8 November 2017; pp. 2747–2752. [Google Scholar]
  20. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S. A novel Whale Optimization Algorithm integrated with Nelder–Mead simplex for multi-objective optimization problems. Knowl.-Based Syst. 2021, 212, 106619. [Google Scholar] [CrossRef]
  21. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S.; Chakrabortty, R.; Ryan, M. MOEO-EED: A multi-objective equilibrium optimizer with exploration–exploitation dominance strategy. Knowl.-Based Syst. 2021, 214, 106717. [Google Scholar] [CrossRef]
  22. Wang, Z.; Li, H.; Yu, H. MOEA/UE: A novel multi-objective evolutionary algorithm using a uniformly evolving scheme. Neurocomputing 2021, 458, 535–545. [Google Scholar] [CrossRef]
  23. Wang, W.; Tian, G.; Yuan, G.; Pham, D.T. Energy-time tradeoffs for remanufacturing system scheduling using an invasive weed optimization algorithm. J. Intell. Manuf. 2021, 34, 1065–1083. [Google Scholar] [CrossRef]
  24. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95 Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  25. Coello, C.; Pulido, G.; Lechuga, M. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279. [Google Scholar] [CrossRef]
  26. Dorigo, M.; Caro, G. Ant colony optimization: A new meta-heuristic. Congr. Evol. Comput. 1999, 2, 1470–1477. [Google Scholar]
  27. Alaya, I.; Solnon, C.; Ghedira, K. Ant colony optimization for multi-objective optimization problems. In Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), Patras, Greece, 29–31 October 2007; Volume 1, pp. 450–457. [Google Scholar]
  28. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  29. Houssein, E.; Mahdy, M.; Shebl, D.; Manzoor, A.; Sarkar, R.; Mohamed, W. An efficient slime mould algorithm for solving multi-objective optimization problems. Expert Syst. Appl. 2022, 187, 115870. [Google Scholar] [CrossRef]
  30. Chalabi, N.; Attia, A.; Bouziane, A.; Hassaballah, M. An improved marine predator algorithm based on epsilon dominance and Pareto archive for multi-objective optimization. Eng. Appl. Artif. Intell. 2023, 119, 105718. [Google Scholar] [CrossRef]
  31. Mohamed, A.; Hadi, A.; Mohamed, A. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  32. Reynolds, R.G. An introduction to cultural algorithms. In Evolutionary Programming—Proceedings of the Third Annual Conference; Sebald, A.V., Fogel, L.J., Eds.; World Scientific Press: San Diego, CA, USA, 1994; pp. 131–139. [Google Scholar]
  33. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  34. Kashan, A.H. League championship algorithm: A new algorithm for numerical function optimization. In Proceedings of the 2009 International Conference of Soft Computing and Pattern recognition, Malacca, Malaysia, 4–7 December 2009; pp. 43–48. [Google Scholar]
  35. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  36. Shi, Y. Brain storm optimization algorithm. In Advances in Swarm Intelligence: Second International Conference, ICSI 2011, Chongqing, China, 12–15 June 2011, Proceedings, Part I 2; Springer: Berlin/Heidelberg, Germany, 2011; pp. 303–309. [Google Scholar]
  37. Kulkarni, A.J.; Durugkar, I.P.; Kumar, M. Cohort intelligence: A self supervised learning behavior. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 1396–1400. [Google Scholar]
  38. Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24. [Google Scholar] [CrossRef]
  39. Huan, T.T.; Kulkarni, A.J.; Kanesan, J.; Huang, C.J.; Abraham, A. Ideology algorithm: A socio-inspired optimization methodology. Neural Comput. Appl. 2017, 28, 845–876. [Google Scholar] [CrossRef]
  40. Moghdani, R.; Salimifard, K. Volleyball Premier League Algorithm. Appl. Soft Comput. J. 2018, 64, 161–185. [Google Scholar] [CrossRef]
  41. Khatri, A.; Gaba, A.; Rana, K.; Kumar, V. A novel life choice-based optimizer. Soft Comput. 2020, 24, 9121–9141. [Google Scholar] [CrossRef]
  42. Elsisi, M. Future search algorithm for optimization. Evol. Intell. 2019, 12, 21–31. [Google Scholar] [CrossRef]
  43. Shaheen, A.M.; Ginidi, A.R.; El-Sehiemy, R.A.; Ghoneim, S.S. A forensic-based investigation algorithm for parameter extraction of solar cell models. IEEE Access 2020, 9, 1–20. [Google Scholar] [CrossRef]
  44. Wagan, A.I.; Shaikh, M.M. A new metaheuristic optimization algorithm inspired by human dynasties with an application to the wind turbine micrositing problem. Appl. Soft Comput. 2020, 90, 106176. [Google Scholar]
  45. Emami, H. Anti-coronavirus optimization algorithm. Soft Comput. 2022, 26, 4991–5023. [Google Scholar] [CrossRef] [PubMed]
  46. Agrawal, P.; Ganesh, T.; Oliva, D.; Mohamed, A. S-shaped and V-shaped gaining-sharing knowledge-based algorithm for feature selection. Appl. Intell. 2022, 52, 81–112. [Google Scholar] [CrossRef]
  47. Xiong, G.; Yuan, X.; Mohamed, A.; Chen, J.; Zhang, J. Improved binary gaining-sharing knowledge-based algorithm with mutation for fault section location in distribution networks. J. Comput. Des. Eng. 2022, 9, 393–405. [Google Scholar] [CrossRef]
  48. Agrawal, P.; Ganesh, T.; Mohamed, A. Solving knapsack problems using a binary gaining sharing knowledge-based optimization algorithm. Complex Intell. Syst. 2022, 8, 43–63. [Google Scholar] [CrossRef]
  49. Li, C. Takagi–Sugeno fuzzy based power system fault section diagnosis models via genetic learning adaptive GSK algorithm. Knowl.-Based Syst. 2022, 255, 109773. [Google Scholar] [CrossRef]
  50. Ortega-Sánchez, N. Identification of apple diseases in digital images by using the Gaining-sharing knowledge-based algorithm for multilevel thresholding. Soft Comput. 2022, 26, 2587–2623. [Google Scholar] [CrossRef]
  51. Hassan, S.; Agrawal, P.; Ganesh, T.; Mohamed, A. A Novel Discrete Binary Gaining-Sharing Knowledge-Based Optimization Algorithm for the Travelling Counselling Problem for Utilization of Solar Energy. Int. J. Swarm Intell. Res. 2022, 13, 1–24. [Google Scholar] [CrossRef]
  52. Xiong, G.; Li, L.; Mohamed, A.; Yuan, X.; Zhang, J. A new method for parameter extraction of solar photovoltaic models using gaining–sharing knowledge based algorithm. Energy Rep. 2021, 7, 3286–3301. [Google Scholar] [CrossRef]
  53. Mohamed, A.; Hadi, A.; Mohamed, A.; Awad, N. Evaluating the Performance of Adaptive GainingSharing Knowledge Based Algorithm on CEC 2020 Benchmark Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation, CEC 2020, Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar] [CrossRef]
  54. Mohamed, A.; Abutarboush, H.; Hadi, A.; Mohamed, A. Gaining-sharing knowledge based algorithm with adaptive parameters for engineering optimization. IEEE Access 2021, 9, 65934–65946. [Google Scholar] [CrossRef]
  55. Wolpert, D.; Macready, W. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  56. Rai, R.; Das, A.; Ray, S.; Dhal, K.G. Human-inspired optimization algorithms: Theoretical foundations, algorithms, open-research issues and application for multi-level thresholding. Arch. Comput. Methods Eng. 2022, 29, 5313–5352. [Google Scholar] [CrossRef]
  57. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [Green Version]
  58. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable multi-objective optimization test problems. Congr. Evol. Comput. 2002, 1, 825–830. [Google Scholar]
  59. Kumar, A. A benchmark-suite of real-world constrained multi-objective optimization problems and some baseline results. Swarm Evol. Comput. 2021, 67, 100961. [Google Scholar] [CrossRef]
  60. Zitzler, E. Evolutionary algorithms for multiobjective optimization. Methods Appl. 1999, 63, 1–134. [Google Scholar]
  61. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef] [Green Version]
  62. Kannan, B.; Kramer, S. An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. In Proceedings of the ASME Design Engineering Technical Conference, Albuquerque, NM, USA, 19–22 September 1993; Volume Part F1679, pp. 103–112. [Google Scholar] [CrossRef]
  63. Narayanan, S.; Azarm, S. On improving multiobjective genetic algorithms for design optimization. Struct. Optim. 1999, 18, 146–155. [Google Scholar] [CrossRef]
  64. Chiandussi, G.; Codegone, M.; Ferrero, S.; Varesio, F. Comparison of multi-objective optimization methodologies for engineering applications. Comput. Math. Appl. 2012, 63, 912–942. [Google Scholar] [CrossRef] [Green Version]
  65. Deb, K. Evolutionary Algorithms for Multi-Criterion Optimization in Engineering Design. Evol. Algorithms Eng. Comput. Sci. 1999, 2, 135–161. [Google Scholar]
  66. Osyczka, A.; Kundu, S. A Genetic Algorithm-Based Multicriteria Optimization Method. In Proceedings of the First World Congress of Structural and Multidisciplinary Optimization, Goslar, Germany, 28 May–2 June 1995; pp. 909–914. [Google Scholar]
  67. Azarm, S.; Tits, A.; Fan, M. Tradeoff-driven optimization-based design of mechanical systems. In Proceedings of the 4th Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA, USA, 4–6 September 1999; p. 4758. [Google Scholar]
  68. Ray, T.; Liew, K. A swarm metaphor for multiobjective design optimization. Eng. Optim. 2002, 34, 141–153. [Google Scholar] [CrossRef]
  69. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  70. Cheng, F.; Li, X. Generalized center method for multiobjective engineering optimization. Eng. Optim. 1999, 31, 641–661. [Google Scholar] [CrossRef]
  71. Huang, H.; Gu, Y.; Du, X. An interactive fuzzy multi-objective optimization method for engineering design. Eng. Appl. Artif. Intell. 2006, 19, 451–460. [Google Scholar] [CrossRef]
  72. Steven, G. Evolutionary Algorithms for Single and Multicriteria Design Optimization; Osyczka, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2002; Volume 24. [Google Scholar]
  73. Coello, C.; Lamont, G.; Veldhuizen, D. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: Berlin/Heidelberg, Germany, 2007; Volume 5. [Google Scholar] [CrossRef]
  74. Parsons, M.; Scott, R. Formulation of Multicriterion Design Optimization Problems for Solution with Scalar Numerical Optimization Methods. J. Ship Res. 2004, 48, 61–76. [Google Scholar] [CrossRef]
  75. Fan, L.; Yoshino, T.; Xu, T.; Lin, Y.; Liu, H. A Novel Hybrid Algorithm for Solving Multiobjective Optimization Problems with Engineering Applications. Math. Probl. Eng. 2018, 2018, 5316379. [Google Scholar] [CrossRef] [Green Version]
  76. Dhiman, G.; Kumar, V. Multi-objective spotted hyena optimizer: A Multi-objective optimization algorithm for engineering problems. Knowl.-Based Syst. 2018, 150, 175–197. [Google Scholar] [CrossRef]
  77. Mahon, K.; Siddall, J. Optimal Engineering Design: Principles and Applications (Mechanical Engineering Series); CRC Press: Boca Raton, FL, USA, 1983; Volume 34. [Google Scholar] [CrossRef]
  78. Zhang, H.; Peng, Y.; Hou, L.; Tian, G.; Li, Z. A hybrid multi-objective optimization approach for energy-absorbing structures in train collisions. Inf. Sci. 2019, 481, 491–506. [Google Scholar] [CrossRef]
  79. Floudas, C.; Pardalos, P. A Collection of Test Problems for Constrained Global Optimization Algorithms; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1990; Volume 455. [Google Scholar]
  80. Ryoo, H.; Sahinidis, N. Global optimization of nonconvex NLPs and MINLPs with applications in process design. Comput. Chem. Eng. 1995, 19, 551–566. [Google Scholar] [CrossRef]
  81. Guillén-Gosálbez, G. A novel MILP-based objective reduction method for multi-objective optimization: Application to environmental problems. Comput. Chem. Eng. 2011, 35, 1469–1477. [Google Scholar] [CrossRef]
  82. Kocis, G.; Grossmann, I. A modelling and decomposition strategy for the minlp optimization of process flowsheets. Comput. Chem. Eng. 1989, 13, 797–819. [Google Scholar] [CrossRef]
  83. Kocis, G.; Grossmann, I. Global Optimization of Nonconvex Mixed-Integer Nonlinear Programming (Minlp) Problems in Process Synthesis. Ind. Eng. Chem. Res. 1988, 27, 1407–1421. [Google Scholar] [CrossRef]
  84. Floudas, C. Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  85. Rathore, A.; Holtz, J.; Boller, T. Synchronous optimal pulsewidth modulation for low-switching-frequency control of medium-voltage multilevel inverters. IEEE Trans. Ind. Electron. 2010, 57, 2374–2381. [Google Scholar] [CrossRef]
  86. Rathore, A.; Holtz, J.; Boller, T. Optimal pulsewidth modulation of multilevel inverters for low switching frequency control of medium voltage high power industrial AC drives. In Proceedings of the 2010 IEEE Energy Conversion Congress and Exposition, ECCE 2010, Atlanta, GA, USA, 12–16 September 2010. [Google Scholar] [CrossRef]
  87. Edpuganti, A.; Rathore, A. Fundamental Switching Frequency Optimal Pulsewidth Modulation of Medium-Voltage Cascaded Seven-Level Inverter. IEEE Trans. Ind. Appl. 2015, 51, 3485–3492. [Google Scholar] [CrossRef]
  88. Edpuganti, A.; Dwivedi, A.; Rathore, A.; Srivastava, R. Optimal pulsewidth modulation of cascade nine-level (9L) inverter for medium voltage high power industrial AC drives. In Proceedings of the IECON 2015—41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 4259–4264. [Google Scholar] [CrossRef]
  89. Edpuganti, A.; Rathore, A. Optimal pulsewidth modulation for common-mode voltage elimination scheme of medium-voltage modular multilevel converter-fed open-end stator winding induction motor drives. IEEE Trans. Ind. Electron. 2017, 64, 848–856. [Google Scholar] [CrossRef]
  90. Mishra, S.; Kumar, A.; Singh, D.; Misra, R. Butterfly Optimizer for Placement and Sizing of Distributed Generation for Feeder Phase Balancing. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2019; Volume 799, pp. 519–530. [Google Scholar] [CrossRef]
  91. Biswas, P.; Suganthan, P.; Mallipeddi, R.; Amaratunga, G. Multi-objective optimal power flow solutions using a constraint handling technique of evolutionary algorithms. Soft Comput. 2020, 24, 2999–3023. [Google Scholar] [CrossRef]
  92. Kumar, A.; Das, S.; Mallipeddi, R. An Inversion-Free Robust Power-Flow Algorithm for Microgrids. IEEE Trans. Smart Grid 2021, 12, 2844–2859. [Google Scholar] [CrossRef]
  93. Kumar, A.; Jha, B.; Das, S.; Mallipeddi, R. Power Flow Analysis of Islanded Microgrids: A Differential Evolution Approach. IEEE Access 2021, 9, 61721–61738. [Google Scholar] [CrossRef]
  94. Jha, B.; Kumar, A.; Dheer, D.; Singh, D.; Misra, R. A modified current injection load flow method under different load model of EV for distribution system. Int. Trans. Electr. Energy Syst. 2020, 30, 12284. [Google Scholar] [CrossRef]
  95. Kumar, A.; Jha, B.; Singh, D.; Misra, R. A New Current Injection Based Power Flow Formulation. Electr. Power Compon. Syst. 2020, 48, 268–280. [Google Scholar] [CrossRef]
  96. Kumar, A.; Jha, B.; Dheer, D.; Singh, D.; Misra, R. Nested backward/forward sweep algorithm for power flow analysis of droop regulated islanded microgrids. IET Gener. Transm. Distrib. 2019, 13, 3086–3095. [Google Scholar] [CrossRef]
  97. Kumar, A.; Jha, B.; Singh, D.; Misra, R. Current injection-based Newton–Raphson power-flow algorithm for droop-based islanded microgrids. IET Gener. Transm. Distrib. 2019, 13, 5271–5283. [Google Scholar] [CrossRef]
  98. Kumar, A.; Jha, B.; Dheer, D.; Misra, R.; Singh, D. A Nested-Iterative Newton-Raphson based Power Flow Formulation for Droop-based Islanded Microgrids. Electr. Power Syst. Res. 2020, 180, 106131. [Google Scholar] [CrossRef]
  99. Rivas-Dávalos, F.; Irving, M. An approach based on the strength pareto evolutionary algorithm 2 for power distribution system planning. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3410, pp. 707–720. [Google Scholar] [CrossRef]
Figure 1. Paper organization.
Figure 2. Nondominated sorting illustration.
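Figure 2 illustrates the fast nondominated sorting step that MOGSK incorporates. As a companion to the figure, here is a minimal Python sketch of the standard NSGA-II procedure for minimization (a textbook reconstruction for illustration, not the authors' code):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(points):
    """Return a list of fronts (lists of indices); front 0 is the nondominated set."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # indices each solution dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:  # only solutions in earlier fronts dominate j
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]
```

Successive fronts peel off the nondominated layer of what remains, which is the ranking used when truncating the combined population.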
Figure 3. Crowding distance [9].
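Figure 3 depicts the crowding distance of [9], which MOGSK uses to sustain diversity within a front. A minimal sketch of the standard computation (for illustration only; boundary solutions receive infinite distance so they are always preferred):

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors in one front."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary solutions
        if fmax == fmin:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            # normalized side length of the cuboid around solution i in objective k
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / (fmax - fmin)
    return dist
```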
Figure 4. MOGSK flowchart.
Figure 5. An illustration of ϵ-dominance for a solution x, with objective functions f1 and f2 and ϵi as the tolerance level.
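The archive-update rule illustrated in Figure 5 relaxes ordinary Pareto dominance by a per-objective tolerance ϵi. Assuming the usual additive formulation for minimization (a sketch of the textbook definition, not the authors' implementation):

```python
def eps_dominates(a, b, eps):
    """True if objective vector a additively ϵ-dominates b (minimization):
    shifting a by the per-objective tolerances still leaves it no worse than b."""
    return all(ai - ei <= bi for ai, bi, ei in zip(a, b, eps))
```

Larger tolerances coarsen the grid of solutions the archive retains, which is how ϵ-dominance trades archive size for coverage and convergence pressure.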
Figure 6. Pareto front obtained by MOGSK of ZDT test functions.
Figure 7. Pareto front generated by MOGSK of DTLZ test problems.
Figure 8. Mechanical design problems’ HV value curves.
Figure 9. Chemical engineering problems’ HV value curves.
Figure 10. Process, design, and synthesis problems’ HV value curves.
Figure 11. Power electronics problems’ HV value curves.
Figure 12. Power system optimization problems’ HV value curves.
Table 1. MOGSK parameters.
Parameter | Description | Value
N | Population size (number of individuals) | 100
k | Knowledge rate (k > 0) | 10
k_r | Knowledge ratio (k_r ∈ [0, 1]) | 0.1
K_f | Knowledge factor (K_f > 0) | 0.9
Run_no | Number of runs | 30 independent runs
Max_fe | Maximum number of function evaluations | 60,000
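In the GSK framework underlying MOGSK, the knowledge rate k in Table 1 governs how the search shifts from the junior (exploratory) to the senior (exploitative) gaining–sharing scheme over the run. Assuming the standard GSK formula D_junior = D · (1 − g/G)^k (the function name below is ours, for illustration only):

```python
def junior_dims(D, g, G, k=10):
    """Number of dimensions updated by the junior scheme at generation g of G
    for a D-dimensional problem (standard GSK formula; k is the knowledge rate)."""
    return round(D * (1 - g / G) ** k)
```

With k = 10, as in Table 1, the junior share decays quickly: nearly all dimensions follow the junior scheme early on and almost none past mid-run, with the remaining dimensions handled by the senior scheme.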
Table 2. Multiobjective test function characteristics.
Biobjective Test Functions
Function | Description
ZDT1 | Has a convex front
ZDT2 | Has a nonconvex front
ZDT3 | Has a discontinuous front
ZDT4 | Has 21^9 local Pareto-optimal fronts and is, as a result, highly multimodal
ZDT6 | Has a nonuniform search space
Three-Objective Test Functions
Function | Description
DTLZ1 | Has a linear Pareto-optimal front (POF)
DTLZ2 | Has a spherical POF
DTLZ3 | Has many local POFs
DTLZ4 | The POF has a dense set of solutions near the f_M-f_1 plane
DTLZ5 | Tests the ability to converge to a degenerate curve
DTLZ6 | Has 2^(M-1) disconnected Pareto-optimal fronts
DTLZ7 | Has a POF that combines a straight line and a hyperplane
Table 3. IGD results on ZDT.
Algorithm | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA
ZDT1
Best | 1.88E-04 | 4.16E-03 | 2.48E-02 | 3.50E-01 | 4.42E-03 | 3.81E-03 | 3.07E-02 | 6.34E-03
Worst | 2.59E-04 | 6.27E-03 | 3.34E-02 | 1.25E+00 | 5.41E-03 | 4.15E-03 | 3.74E-01 | 1.36E-02
Average | 2.13E-04 | 4.81E-03 | 2.92E-02 | 7.42E-01 | 4.76E-03 | 3.96E-03 | 1.77E-01 | 7.36E-03
Median | 2.12E-04 | 4.70E-03 | 2.94E-02 | 7.30E-01 | 4.71E-03 | 3.95E-03 | 1.67E-01 | 7.14E-03
Std | 1.56E-05 | 4.67E-04 | 2.17E-03 | 2.23E-01 | 2.17E-04 | 7.50E-05 | 9.36E-02 | 1.29E-03
ZDT2
Best | 1.83E-04 | 4.53E-03 | 2.68E-02 | 2.29E-02 | 4.55E-03 | 3.84E-03 | 5.87E-02 | 7.90E-03
Worst | 1.24E-03 | 7.57E-03 | 3.73E-02 | 2.48E+00 | 5.34E-03 | 4.06E-03 | 1.26E-01 | 8.05E-03
Average | 2.38E-04 | 5.42E-03 | 3.16E-02 | 1.68E+00 | 4.82E-03 | 3.94E-03 | 9.62E-02 | 8.00E-03
Median | 2.05E-04 | 5.22E-03 | 3.10E-02 | 1.77E+00 | 4.78E-03 | 3.94E-03 | 9.79E-02 | 8.01E-03
Std | 1.89E-04 | 7.45E-04 | 3.15E-03 | 5.66E-01 | 1.82E-04 | 5.10E-05 | 1.83E-02 | 3.56E-05
ZDT3
Best | 8.01E-03 | 1.22E-02 | 4.03E-02 | 2.19E-01 | 5.11E-03 | 4.70E-03 | 7.22E-03 | 1.15E-02
Worst | 9.33E-03 | 4.31E-02 | 9.07E-02 | 1.00E+00 | 6.47E-03 | 5.07E-03 | 3.92E-02 | 1.60E-02
Average | 8.97E-03 | 1.97E-02 | 6.62E-02 | 6.47E-01 | 5.47E-03 | 4.91E-03 | 1.09E-02 | 1.42E-02
Median | 9.06E-03 | 1.37E-02 | 6.50E-02 | 6.29E-01 | 5.41E-03 | 4.92E-03 | 1.00E-02 | 1.41E-02
Std | 2.30E-04 | 1.14E-02 | 9.94E-03 | 1.95E-01 | 2.71E-04 | 8.91E-05 | 5.50E-03 | 1.17E-03
ZDT4
Best | 6.57E-02 | 4.69E-03 | 2.62E-02 | 7.88E+00 | 4.39E-03 | 3.82E-03 | 1.32E-01 | 7.23E-02
Worst | 2.53E-01 | 1.20E-02 | 3.62E-02 | 3.47E+01 | 4.93E-03 | 5.10E-03 | 3.74E-01 | 5.24E-01
Average | 1.53E-01 | 7.76E-03 | 3.02E-02 | 1.57E+01 | 4.64E-03 | 4.06E-03 | 2.59E-01 | 3.11E-01
Median | 1.55E-01 | 7.20E-03 | 3.02E-02 | 1.55E+01 | 4.62E-03 | 3.94E-03 | 2.65E-01 | 3.08E-01
Std | 5.57E-02 | 1.90E-03 | 2.28E-03 | 6.70E+00 | 1.53E-04 | 2.96E-04 | 5.94E-02 | 1.31E-01
ZDT6
Best | 1.53E-04 | 3.36E-03 | 2.49E-02 | 5.20E-03 | 3.49E-03 | 3.05E-03 | 5.02E-03 | 5.67E-03
Worst | 1.13E-03 | 5.81E-03 | 3.18E-02 | 5.51E+00 | 3.90E-03 | 3.14E-03 | 1.41E-02 | 6.18E-03
Average | 2.60E-04 | 4.68E-03 | 2.90E-02 | 1.90E-01 | 3.68E-03 | 3.09E-03 | 7.22E-03 | 6.02E-03
Median | 1.82E-04 | 4.70E-03 | 2.93E-02 | 6.64E-03 | 3.65E-03 | 3.09E-03 | 6.41E-03 | 6.03E-03
Std | 1.96E-04 | 5.93E-04 | 1.86E-03 | 1.00E+00 | 1.07E-04 | 2.22E-05 | 2.15E-03 | 1.29E-04
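The inverted generational distance (IGD) reported in Tables 3 and 5 averages, over a set of reference points on the true Pareto front, the distance to the nearest obtained solution; lower is better, and the metric rewards both convergence and coverage. A minimal sketch of the standard definition (not the exact reference-point sampling used by the authors, who rely on PlatEMO [61]):

```python
import math

def igd(reference_front, obtained_set):
    """Mean Euclidean distance from each reference point to its nearest
    obtained solution (lower is better)."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return sum(min(dist(r, s) for s in obtained_set)
               for r in reference_front) / len(reference_front)
```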
Table 4. HV results on ZDT.
Algorithm | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA
ZDT1
Best | 4.44E+00 | 7.19E-01 | 0.00E+00 | 3.22E-01 | 7.20E-01 | 7.20E-01 | 7.00E-01 | 7.17E-01
Worst | 1.93E-01 | 7.17E-01 | 3.22E-01 | 0.00E+00 | 7.18E-01 | 7.20E-01 | 4.95E-01 | 7.09E-01
Average | 5.56E-01 | 7.18E-01 | 8.75E-02 | 8.75E-02 | 7.19E-01 | 7.20E-01 | 6.16E-01 | 7.16E-01
Median | 2.55E-01 | 7.18E-01 | 5.92E-02 | 5.92E-02 | 7.19E-01 | 7.20E-01 | 6.24E-01 | 7.16E-01
Std | 1.04E+00 | 5.85E-04 | 9.49E-02 | 9.49E-02 | 2.98E-04 | 1.28E-04 | 5.52E-02 | 1.45E-03
ZDT2
Best | 4.61E+01 | 4.43E-01 | 0.00E+00 | 4.10E-01 | 4.44E-01 | 4.45E-01 | 3.90E-01 | 4.42E-01
Worst | 2.60E-01 | 4.36E-01 | 4.10E-01 | 0.00E+00 | 4.44E-01 | 4.45E-01 | 3.31E-01 | 4.41E-01
Average | 9.91E+00 | 4.41E-01 | 1.40E-02 | 1.40E-02 | 4.44E-01 | 4.45E-01 | 3.56E-01 | 4.41E-01
Median | 5.23E+00 | 4.42E-01 | 0.00E+00 | 0.00E+00 | 4.44E-01 | 4.45E-01 | 3.54E-01 | 4.41E-01
Std | 1.00E+01 | 1.68E-03 | 7.49E-02 | 7.49E-02 | 2.07E-04 | 7.92E-05 | 1.59E-02 | 4.11E-05
ZDT3
Best | 1.77E+00 | 6.89E-01 | 1.33E-02 | 4.77E-01 | 6.00E-01 | 6.00E-01 | 6.87E-01 | 5.98E-01
Worst | 1.39E-01 | 5.83E-01 | 4.77E-01 | 1.33E-02 | 5.99E-01 | 5.99E-01 | 5.97E-01 | 5.96E-01
Average | 2.38E-01 | 6.15E-01 | 1.60E-01 | 1.60E-01 | 5.99E-01 | 6.00E-01 | 6.01E-01 | 5.97E-01
Median | 1.76E-01 | 5.98E-01 | 1.44E-01 | 1.44E-01 | 5.99E-01 | 6.00E-01 | 5.98E-01 | 5.97E-01
Std | 2.93E-01 | 3.58E-02 | 1.16E-01 | 1.16E-01 | 1.16E-04 | 5.90E-05 | 1.63E-02 | 4.36E-04
ZDT4
Best | 7.46E-01 | 7.17E-01 | 0.00E+00 | 0.00E+00 | 7.20E-01 | 7.20E-01 | 6.43E-01 | 6.56E-01
Worst | 5.34E-01 | 7.07E-01 | 0.00E+00 | 0.00E+00 | 7.18E-01 | 7.17E-01 | 4.93E-01 | 3.85E-01
Average | 6.51E-01 | 7.12E-01 | 0.00E+00 | 0.00E+00 | 7.19E-01 | 7.20E-01 | 5.67E-01 | 5.29E-01
Median | 6.38E-01 | 7.13E-01 | 0.00E+00 | 0.00E+00 | 7.19E-01 | 7.20E-01 | 5.65E-01 | 5.34E-01
Std | 4.80E-02 | 2.83E-03 | 0.00E+00 | 0.00E+00 | 4.36E-04 | 8.59E-04 | 3.64E-02 | 8.34E-02
ZDT6
Best | 4.30E+00 | 3.88E-01 | 0.00E+00 | 3.86E-01 | 3.89E-01 | 3.89E-01 | 3.87E-01 | 3.86E-01
Worst | 4.20E-01 | 3.84E-01 | 3.86E-01 | 0.00E+00 | 3.88E-01 | 3.89E-01 | 3.78E-01 | 3.86E-01
Average | 2.89E+00 | 3.85E-01 | 3.69E-01 | 3.69E-01 | 3.88E-01 | 3.89E-01 | 3.85E-01 | 3.86E-01
Median | 3.26E+00 | 3.85E-01 | 3.82E-01 | 3.82E-01 | 3.88E-01 | 3.89E-01 | 3.86E-01 | 3.86E-01
Std | 1.36E+00 | 1.01E-03 | 6.98E-02 | 6.98E-02 | 9.79E-05 | 2.52E-05 | 2.11E-03 | 1.33E-04
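Hypervolume (HV), used in Table 4 and throughout the real-world comparisons, measures the objective-space volume dominated by a front relative to a reference point, so higher is better. For two objectives it reduces to a simple sweep; the sketch below handles only the biobjective minimization case and assumes every point dominates the reference point (general HV, as computed by PlatEMO [61], needs more machinery):

```python
def hv_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. `ref`,
    computed by sweeping points in ascending f1 and accumulating rectangles."""
    vol, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:  # points that do not improve f2 are dominated in the sweep
            vol += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return vol
```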
Table 5. IGD results on DTLZ.
Algorithm | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA
DTLZ1
Best | 8.77E-03 | 2.06E-02 | 3.39E-02 | 7.33E-01 | 2.57E-02 | 1.99E-02 | 2.37E-02 | 2.36E-02
Worst | 4.82E-02 | 2.09E-02 | 4.24E-02 | 1.17E+01 | 2.99E-02 | 2.08E-02 | 1.44E-01 | 3.36E-01
Average | 2.03E-02 | 2.06E-02 | 3.67E-02 | 5.96E+00 | 2.73E-02 | 2.02E-02 | 5.58E-02 | 8.59E-02
Median | 1.60E-02 | 2.06E-02 | 3.66E-02 | 5.82E+00 | 2.75E-02 | 2.02E-02 | 4.30E-02 | 7.19E-02
Std | 1.09E-02 | 8.00E-05 | 1.52E-03 | 2.52E+00 | 9.78E-04 | 1.63E-04 | 3.18E-02 | 6.84E-02
DTLZ2
Best | 1.00E-03 | 5.45E-02 | 6.07E-02 | 1.21E-01 | 6.48E-02 | 5.26E-02 | 6.31E-02 | 6.27E-02
Worst | 1.16E-03 | 5.45E-02 | 6.72E-02 | 2.40E-01 | 7.46E-02 | 5.57E-02 | 7.43E-02 | 6.65E-02
Average | 1.09E-03 | 5.45E-02 | 6.46E-02 | 1.69E-01 | 6.94E-02 | 5.43E-02 | 6.67E-02 | 6.39E-02
Median | 1.09E-03 | 5.45E-02 | 6.49E-02 | 1.67E-01 | 6.95E-02 | 5.42E-02 | 6.59E-02 | 6.38E-02
Std | 4.00E-05 | 3.72E-07 | 1.42E-03 | 2.66E-02 | 2.03E-03 | 6.47E-04 | 2.88E-03 | 7.78E-04
DTLZ3
Best | 2.77E-01 | 5.47E-02 | 6.87E-02 | 1.63E+00 | 6.45E-02 | 5.31E-02 | 6.60E-02 | 6.38E-02
Worst | 6.31E-01 | 1.06E-01 | 8.22E-01 | 1.72E+02 | 7.68E-02 | 6.60E-02 | 2.03E-01 | 5.34E-01
Average | 4.30E-01 | 6.22E-02 | 1.09E-01 | 6.74E+01 | 7.13E-02 | 5.60E-02 | 1.01E-01 | 1.17E-01
Median | 4.38E-01 | 5.92E-02 | 7.87E-02 | 5.93E+01 | 7.15E-02 | 5.49E-02 | 8.81E-02 | 6.80E-02
Std | 9.42E-02 | 9.96E-03 | 1.36E-01 | 4.73E+01 | 3.17E-03 | 2.98E-03 | 3.34E-02 | 1.06E-01
DTLZ4
Best | 2.36E-03 | 5.45E-02 | 6.51E-02 | 1.20E-01 | 6.35E-02 | 5.39E-02 | 6.06E-02 | 6.42E-02
Worst | 5.44E-03 | 9.46E-01 | 5.53E-01 | 9.50E-01 | 7.12E-02 | 9.46E-01 | 9.46E-01 | 9.46E-01
Average | 3.92E-03 | 2.57E-01 | 1.96E-01 | 3.14E-01 | 6.71E-02 | 2.46E-01 | 1.24E-01 | 2.36E-01
Median | 4.10E-03 | 5.45E-02 | 6.74E-02 | 2.71E-01 | 6.71E-02 | 5.51E-02 | 6.49E-02 | 6.73E-02
Std | 8.39E-04 | 3.11E-01 | 2.18E-01 | 1.89E-01 | 2.02E-03 | 2.66E-01 | 2.23E-01 | 2.80E-01
DTLZ5
Best | 8.80E-05 | 3.38E-02 | 5.30E-02 | 8.41E-03 | 5.30E-03 | 4.17E-03 | 7.61E-03 | 2.00E-02
Worst | 2.04E-04 | 3.39E-02 | 7.20E-02 | 2.20E-02 | 7.20E-03 | 4.67E-03 | 1.41E-02 | 2.44E-02
Average | 1.23E-04 | 3.39E-02 | 6.70E-02 | 1.21E-02 | 5.87E-03 | 4.41E-03 | 9.41E-03 | 2.15E-02
Median | 1.14E-04 | 3.39E-02 | 6.80E-02 | 1.16E-02 | 5.84E-03 | 4.41E-03 | 9.12E-03 | 2.13E-02
Std | 2.73E-05 | 2.72E-05 | 4.55E-03 | 2.74E-03 | 3.79E-04 | 1.28E-04 | 1.32E-03 | 9.55E-04
DTLZ6
Best | 2.24E-03 | 3.39E-02 | 5.85E-02 | 6.18E-01 | 5.48E-03 | 4.03E-03 | 4.38E-03 | 2.19E-02
Worst | 3.38E-02 | 3.39E-02 | 6.59E-02 | 4.40E+00 | 6.78E-03 | 4.19E-03 | 5.95E-03 | 2.23E-02
Average | 5.66E-03 | 3.39E-02 | 6.27E-02 | 2.48E+00 | 5.92E-03 | 4.09E-03 | 4.89E-03 | 2.23E-02
Median | 3.78E-03 | 3.39E-02 | 6.29E-02 | 2.27E+00 | 5.87E-03 | 4.09E-03 | 4.86E-03 | 2.23E-02
Std | 6.09E-03 | 1.23E-05 | 1.86E-03 | 1.16E+00 | 2.74E-04 | 4.00E-05 | 3.41E-04 | 9.20E-05
DTLZ7
Best | 8.53E-04 | 1.50E-01 | 5.76E-02 | 5.37E-01 | 7.00E-02 | 5.77E-02 | 5.79E-02 | 7.65E-02
Worst | 1.30E-03 | 8.03E-01 | 8.14E-01 | 5.38E+00 | 8.76E-02 | 3.46E-01 | 3.52E-01 | 3.73E-01
Average | 1.07E-03 | 1.77E-01 | 1.48E-01 | 2.85E+00 | 7.77E-02 | 7.90E-02 | 7.54E-02 | 9.36E-02
Median | 1.10E-03 | 1.55E-01 | 6.21E-02 | 2.81E+00 | 7.84E-02 | 6.01E-02 | 6.60E-02 | 8.42E-02
Std | 1.11E-04 | 1.18E-01 | 1.77E-01 | 1.28E+00 | 4.38E-03 | 7.24E-02 | 5.23E-02 | 5.29E-02
Table 6. HV results on DTLZ.
Algorithm | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA
DTLZ1
Best | 5.67E-01 | 8.42E-01 | 7.77E-01 | 0.00E+00 | 8.28E-01 | 8.43E-01 | 8.21E-01 | 8.13E-01
Worst | 3.20E-01 | 8.38E-01 | 6.93E-01 | 0.00E+00 | 8.16E-01 | 8.38E-01 | 5.65E-01 | 2.41E-01
Average | 4.76E-01 | 8.41E-01 | 7.27E-01 | 0.00E+00 | 8.23E-01 | 8.41E-01 | 7.39E-01 | 6.79E-01
Median | 4.97E-01 | 8.41E-01 | 7.27E-01 | 0.00E+00 | 8.23E-01 | 8.42E-01 | 7.54E-01 | 6.97E-01
Std | 7.12E-02 | 7.62E-04 | 1.71E-02 | 0.00E+00 | 3.13E-03 | 1.25E-03 | 5.90E-02 | 1.27E-01
DTLZ2
Best | 9.74E+00 | 5.60E-01 | 5.50E-01 | 4.12E-01 | 5.38E-01 | 5.57E-01 | 5.48E-01 | 5.60E-01
Worst | 2.72E-01 | 5.60E-01 | 5.42E-01 | 2.85E-01 | 5.26E-01 | 5.53E-01 | 5.32E-01 | 5.57E-01
Average | 1.12E+00 | 5.60E-01 | 5.46E-01 | 3.50E-01 | 5.32E-01 | 5.56E-01 | 5.44E-01 | 5.58E-01
Median | 3.24E-01 | 5.60E-01 | 5.46E-01 | 3.48E-01 | 5.33E-01 | 5.56E-01 | 5.45E-01 | 5.58E-01
Std | 2.16E+00 | 5.63E-06 | 2.00E-03 | 3.13E-02 | 3.34E-03 | 1.10E-03 | 3.39E-03 | 7.19E-04
DTLZ3
Best | 2.54E-01 | 5.56E-01 | 5.44E-01 | 0.00E+00 | 5.37E-01 | 5.61E-01 | 5.36E-01 | 5.58E-01
Worst | 2.52E-01 | 4.52E-01 | 1.48E-03 | 0.00E+00 | 4.88E-01 | 5.15E-01 | 3.97E-01 | 3.24E-01
Average | 2.54E-01 | 5.30E-01 | 4.90E-01 | 0.00E+00 | 5.20E-01 | 5.46E-01 | 5.00E-01 | 5.10E-01
Median | 2.54E-01 | 5.36E-01 | 5.07E-01 | 0.00E+00 | 5.22E-01 | 5.49E-01 | 5.13E-01 | 5.49E-01
Std | 2.93E-04 | 2.19E-02 | 9.43E-02 | 0.00E+00 | 1.23E-02 | 1.03E-02 | 3.62E-02 | 7.46E-02
DTLZ4
Best | 9.83E-01 | 5.60E-01 | 5.54E-01 | 4.81E-01 | 5.40E-01 | 5.57E-01 | 5.51E-01 | 5.60E-01
Worst | 8.25E-01 | 9.09E-02 | 3.07E-01 | 8.36E-02 | 5.20E-01 | 9.09E-02 | 9.09E-02 | 9.09E-02
Average | 9.31E-01 | 4.62E-01 | 4.86E-01 | 3.86E-01 | 5.34E-01 | 4.70E-01 | 5.15E-01 | 4.74E-01
Median | 9.42E-01 | 5.60E-01 | 5.49E-01 | 4.20E-01 | 5.35E-01 | 5.54E-01 | 5.45E-01 | 5.59E-01
Std | 3.88E-02 | 1.56E-01 | 1.08E-01 | 1.05E-01 | 4.53E-03 | 1.22E-01 | 1.15E-01 | 1.42E-01
DTLZ5
Best | 5.42E+04 | 1.82E-01 | 1.71E-01 | 1.95E-01 | 2.00E-01 | 2.00E-01 | 1.96E-01 | 1.89E-01
Worst | 5.40E+03 | 1.82E-01 | 1.63E-01 | 1.74E-01 | 1.98E-01 | 1.99E-01 | 1.89E-01 | 1.88E-01
Average | 2.07E+04 | 1.82E-01 | 1.69E-01 | 1.90E-01 | 1.99E-01 | 2.00E-01 | 1.94E-01 | 1.88E-01
Median | 1.46E+04 | 1.82E-01 | 1.69E-01 | 1.91E-01 | 1.99E-01 | 2.00E-01 | 1.94E-01 | 1.88E-01
Std | 1.29E+04 | 1.49E-05 | 1.78E-03 | 4.54E-03 | 2.30E-04 | 1.76E-04 | 1.52E-03 | 3.50E-04
DTLZ6
Best | 5.86E-01 | 1.82E-01 | 1.78E-01 | 0.00E+00 | 2.00E-01 | 2.00E-01 | 2.00E-01 | 1.88E-01
Worst | 4.61E-01 | 1.82E-01 | 1.76E-01 | 0.00E+00 | 1.99E-01 | 2.00E-01 | 1.98E-01 | 1.88E-01
Average | 5.19E-01 | 1.82E-01 | 1.77E-01 | 0.00E+00 | 1.99E-01 | 2.00E-01 | 2.00E-01 | 1.88E-01
Median | 5.21E-01 | 1.82E-01 | 1.77E-01 | 0.00E+00 | 1.99E-01 | 2.00E-01 | 2.00E-01 | 1.88E-01
Std | 3.91E-02 | 6.44E-06 | 6.29E-04 | 0.00E+00 | 1.17E-04 | 4.09E-05 | 3.04E-04 | 1.68E-05
DTLZ7
Best | 4.85E+03 | 2.58E-01 | 2.70E-01 | 1.86E-01 | 2.73E-01 | 2.78E-01 | 2.79E-01 | 2.75E-01
Worst | 1.82E+02 | 2.02E-01 | 1.93E-01 | 0.00E+00 | 2.65E-01 | 2.43E-01 | 2.41E-01 | 2.31E-01
Average | 1.80E+03 | 2.54E-01 | 2.56E-01 | 1.18E-02 | 2.68E-01 | 2.75E-01 | 2.76E-01 | 2.70E-01
Median | 1.87E+03 | 2.56E-01 | 2.65E-01 | 0.00E+00 | 2.68E-01 | 2.77E-01 | 2.78E-01 | 2.69E-01
Std | 1.35E+03 | 9.85E-03 | 1.86E-02 | 4.05E-02 | 1.88E-03 | 8.51E-03 | 6.77E-03 | 7.95E-03
Table 7. Real-world constrained multiobjective optimization problems.
Name | Problem
Mechanical Design Problems
RWMOP1 | Design of Pressure Vessels [62]
RWMOP2 | Design of Vibrating Platform [63]
RWMOP3 | Design of Two-Bar Truss [64]
RWMOP4 | Design of Welded Beam [65]
RWMOP5 | Disc Brake Design [66]
RWMOP6 | Speed Reducer Design [67]
RWMOP7 | Gear Train Design [68]
RWMOP8 | Car Side Impact Design [69]
RWMOP9 | Four-Bar Plane Truss [70]
RWMOP10 | Two-Bar Plane Truss
RWMOP11 | Water Resource Management
RWMOP12 | Simply Supported I-Beam Design [71]
RWMOP13 | Gear Box Design
RWMOP14 | Multiple-Disk Clutch Brake Design [72]
RWMOP15 | Spring Design [62]
RWMOP16 | Cantilever Beam Design [73]
RWMOP17 | Bulk Carrier Design [74]
RWMOP18 | Front Rail Design [75]
RWMOP19 | Multiproduct Batch Plant [76]
RWMOP20 | Hydrostatic Thrust Bearing Design [77]
RWMOP21 | Crash Energy Management for High-Speed Train Problem [78]
Chemical Engineering Problems
RWMOP22 | Haverly’s Pooling Problem [79]
RWMOP23 | Reactor Network Design [80]
RWMOP24 | Heat Exchanger Network Design [81]
Process, Design, and Synthesis Problems
RWMOP25 | Process Synthesis Problem [82]
RWMOP26 | Process Synthesis and Design Problem [83]
RWMOP27 | Process Flow Sheeting Problem [84]
RWMOP28 | Two-Reactor Problem [82]
RWMOP29 | Process Synthesis Problem [82]
Power Electronics Problems
RWMOP30 | Synchronous Optimal Pulse-Width Modulation of 3-Level Inverters [85]
RWMOP31 | Synchronous Optimal Pulse-Width Modulation of 5-Level Inverters [86]
RWMOP32 | Synchronous Optimal Pulse-Width Modulation of 7-Level Inverters [87]
RWMOP33 | Synchronous Optimal Pulse-Width Modulation of 9-Level Inverters [88]
RWMOP34 | Synchronous Optimal Pulse-Width Modulation of 11-Level Inverters [89]
RWMOP35 | Synchronous Optimal Pulse-Width Modulation of 13-Level Inverters [89]
Power System Optimization Problems
RWMOP36 | Optimal Sizing of Single-Phase Distributed Generation with Reactive Power Support for Phase Balancing at the Main Transformer/Grid and Reducing Active Power Loss [90]
RWMOP37 | Optimal Sizing of Single-Phase Distributed Generation with Reactive Power Support for Phase Balancing at the Main Transformer/Grid and Reducing Reactive Power Loss [90]
RWMOP38 | Optimal Sizing of Single-Phase Distributed Generation with Reactive Power Support for Reducing Active and Reactive Power Loss [90]
RWMOP39 | Optimal Sizing of Single-Phase Distributed Generation with Reactive Power Support for Phase Balancing at the Main Transformer/Grid and Reducing Active and Reactive Power Loss [90]
RWMOP40 | Optimal Power Flow for Reducing Active and Reactive Power Loss [91]
RWMOP41 | Optimal Power Flow for Reducing Voltage Deviation, Active, and Reactive Power Loss [92]
RWMOP42 | Optimal Power Flow for Reducing Voltage Deviation and Active Power Loss [93]
RWMOP43 | Optimal Power Flow for Reducing Fuel Cost and Active Power Loss [94]
RWMOP44 | Optimal Power Flow for Reducing Fuel Cost, Active, and Reactive Power Loss [95]
RWMOP45 | Optimal Power Flow for Reducing Fuel Cost, Voltage Deviation, and Active Power Loss [91]
RWMOP46 | Optimal Power Flow for Minimizing Fuel Cost, Voltage Deviation, Active, and Reactive Power Loss [91]
RWMOP47 | Optimal Droop Setting for Reducing Active and Reactive Power Loss [96]
RWMOP48 | Optimal Droop Setting for Reducing Voltage Deviation and Active Power Loss [97]
RWMOP49 | Optimal Droop Setting for Reducing Voltage Deviation, Active, and Reactive Power Loss [98]
RWMOP50 | Power Distribution System Planning [99]
Table 8. HV results using the mechanical design problems.
Problem / Algorithm | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA
RWMOP1
Best | 1.00E+00 | 9.89E-01 | 1.00E+00 | 9.97E-01 | 6.06E-01 | 9.99E-01 | 6.31E-01 | 9.89E-01
Worst | 1.00E+00 | 9.89E-01 | 9.89E-01 | 9.89E-01 | 6.04E-01 | 9.89E-01 | 5.82E-01 | 9.89E-01
Average | 1.00E+00 | 9.89E-01 | 9.95E-01 | 9.91E-01 | 6.05E-01 | 9.93E-01 | 5.93E-01 | 9.89E-01
Median | 1.00E+00 | 9.89E-01 | 9.95E-01 | 9.90E-01 | 6.05E-01 | 9.93E-01 | 5.90E-01 | 9.89E-01
Std | 2.94E-06 | 1.13E-16 | 3.69E-03 | 2.54E-03 | 5.06E-04 | 3.49E-03 | 9.71E-03 | 1.13E-16
RWMOP2
Best | 9.84E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 3.93E-01 | 1.00E+00 | 3.94E-01 | 1.00E+00
Worst | 3.90E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Average | 6.96E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 2.70E-01 | 1.00E+00 | 2.62E-01 | 1.00E+00
Median | 6.48E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 3.03E-01 | 1.00E+00 | 2.74E-01 | 1.00E+00
Std | 1.87E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.40E-01 | 1.00E+00 | 1.32E-01 | 0.00E+00
RWMOP3
Best | 6.62E-01 | 0.00E+00 | 4.73E-01 | 0.00E+00 | 9.02E-01 | 3.42E-01 | 8.99E-01 | 6.03E-01
Worst | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.02E-01 | 0.00E+00 | 8.57E-01 | 0.00E+00
Average | 1.10E-01 | 0.00E+00 | 1.58E-02 | 0.00E+00 | 9.02E-01 | 4.55E-02 | 8.87E-01 | 1.98E-01
Median | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.02E-01 | 0.00E+00 | 8.89E-01 | 0.00E+00
Std | 2.51E-01 | 0.00E+00 | 8.64E-02 | 0.00E+00 | 1.55E-04 | 8.43E-02 | 9.96E-03 | 2.69E-01
RWMOP4
Best | 8.74E-01 | 0.00E+00 | 0.00E+00 | 7.90E-01 | 8.63E-01 | 1.54E-01 | 7.93E-01 | 6.32E-01
Worst | 8.69E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 8.57E-01 | 0.00E+00 | 6.89E-01 | 0.00E+00
Average | 8.72E-01 | 0.00E+00 | 0.00E+00 | 3.67E-01 | 8.61E-01 | 5.15E-03 | 7.37E-01 | 2.11E-02
Median | 8.72E-01 | 0.00E+00 | 0.00E+00 | 4.20E-01 | 8.62E-01 | 0.00E+00 | 7.36E-01 | 0.00E+00
Std | 1.14E-03 | 0.00E+00 | 0.00E+00 | 2.98E-01 | 1.59E-03 | 2.82E-02 | 2.34E-02 | 1.15E-01
RWMOP5
Best | 6.30E-01 | 5.77E-01 | 2.51E-01 | 2.76E-01 | 4.35E-01 | 2.77E-01 | 4.14E-01 | 2.77E-01
Worst | 6.29E-01 | 5.77E-01 | 2.48E-01 | 2.56E-01 | 4.27E-01 | 2.67E-01 | 3.18E-01 | 2.61E-01
Average | 6.30E-01 | 5.77E-01 | 2.49E-01 | 2.67E-01 | 4.33E-01 | 2.73E-01 | 3.96E-01 | 2.74E-01
Median | 6.30E-01 | 5.77E-01 | 2.50E-01 | 2.68E-01 | 4.34E-01 | 2.73E-01 | 4.02E-01 | 2.74E-01
Std | 1.27E-04 | 2.59E-08 | 7.58E-04 | 6.32E-03 | 1.75E-03 | 2.60E-03 | 2.20E-02 | 2.70E-03
RWMOP6
Best | 3.20E-01 | 6.87E-02 | 3.07E-01 | 3.07E-01 | 2.77E-01 | 3.00E-01 | 2.72E-01 | 2.45E-01
Worst | 3.19E-01 | 6.87E-02 | 1.11E-01 | 0.00E+00 | 2.77E-01 | 0.00E+00 | 1.84E-01 | 1.99E-02
Average | 3.19E-01 | 6.87E-02 | 2.17E-01 | 1.15E-01 | 2.77E-01 | 1.69E-01 | 2.16E-01 | 1.47E-01
Median | 3.19E-01 | 6.87E-02 | 2.26E-01 | 9.27E-02 | 2.77E-01 | 1.53E-01 | 2.02E-01 | 1.53E-01
Std | 5.63E-05 | 1.34E-05 | 5.89E-02 | 1.04E-01 | 3.27E-05 | 9.98E-02 | 2.90E-02 | 5.54E-02
RWMOP7
Best | 4.83E-01 | 4.81E-01 | 4.84E-01 | 4.82E-01 | 4.84E-01 | 4.83E-01 | 4.84E-01 | 4.82E-01
Worst | 4.70E-01 | 4.80E-01 | 4.83E-01 | 4.28E-01 | 4.84E-01 | 4.82E-01 | 4.80E-01 | 4.81E-01
Average | 4.81E-01 | 4.81E-01 | 4.83E-01 | 4.69E-01 | 4.84E-01 | 4.83E-01 | 4.82E-01 | 4.82E-01
Median | 4.82E-01 | 4.81E-01 | 4.84E-01 | 4.73E-01 | 4.84E-01 | 4.83E-01 | 4.82E-01 | 4.82E-01
Std | 2.38E-03 | 4.38E-04 | 2.39E-04 | 1.39E-02 | 7.48E-05 | 2.38E-04 | 1.01E-03 | 3.09E-04
RWMOP8
Best | 2.64E-02 | 0.00E+00 | 2.28E-02 | 2.34E-02 | 2.60E-02 | 2.43E-02 | 2.57E-02 | 2.36E-02
Worst | 2.49E-02 | 0.00E+00 | 1.07E-02 | 1.79E-02 | 2.57E-02 | 2.24E-02 | 2.39E-02 | 2.22E-02
Average | 2.58E-02 | 0.00E+00 | 2.09E-02 | 2.18E-02 | 2.59E-02 | 2.35E-02 | 2.52E-02 | 2.27E-02
Median | 2.59E-02 | 0.00E+00 | 2.23E-02 | 2.20E-02 | 2.59E-02 | 2.35E-02 | 2.53E-02 | 2.26E-02
Std | 4.12E-04 | 0.00E+00 | 3.32E-03 | 1.46E-03 | 8.49E-05 | 4.53E-04 | 4.38E-04 | 2.94E-04
RWMOP9
Best | 4.08E-01 | 5.32E-02 | 3.39E-01 | 4.08E-01 | 4.09E-01 | 4.10E-01 | 3.84E-01 | 4.05E-01
Worst | 4.02E-01 | 5.30E-02 | 1.45E-01 | 4.04E-01 | 4.09E-01 | 4.09E-01 | 3.44E-01 | 3.95E-01
Average | 4.07E-01 | 5.31E-02 | 2.56E-01 | 4.07E-01 | 4.09E-01 | 4.09E-01 | 3.67E-01 | 4.01E-01
Median | 4.07E-01 | 5.31E-02 | 2.59E-01 | 4.07E-01 | 4.09E-01 | 4.09E-01 | 3.67E-01 | 4.02E-01
Std | 1.44E-03 | 5.82E-05 | 5.06E-02 | 9.20E-04 | 1.73E-04 | 1.38E-04 | 7.50E-03 | 2.61E-03
RWMOP10
Best | 8.46E-01 | 8.01E-02 | 8.19E-01 | 8.47E-01 | 8.48E-01 | 8.44E-01 | 8.47E-01 | 8.42E-01
Worst | 8.42E-01 | 7.84E-02 | 1.86E-01 | 8.45E-01 | 8.47E-01 | 8.35E-01 | 8.06E-01 | 8.14E-01
Average | 8.45E-01 | 7.97E-02 | 6.25E-01 | 8.47E-01 | 8.47E-01 | 8.41E-01 | 8.27E-01 | 8.36E-01
Median | 8.45E-01 | 8.00E-02 | 7.54E-01 | 8.47E-01 | 8.47E-01 | 8.41E-01 | 8.23E-01 | 8.38E-01
Std | 1.14E-03 | 5.76E-04 | 2.25E-01 | 3.80E-04 | 1.54E-04 | 2.41E-03 | 1.30E-02 | 7.01E-03
RWMOP11
Best | 9.58E-02 | 5.80E-02 | 1.08E-01 | 6.45E-02 | 9.69E-02 | 7.53E-02 | 9.96E-02 | 9.08E-02
Worst | 9.03E-02 | 5.62E-02 | 1.07E-01 | 0.00E+00 | 9.16E-02 | 3.52E-02 | 9.47E-02 | 7.37E-02
Average | 9.36E-02 | 5.74E-02 | 1.08E-01 | 1.23E-02 | 9.47E-02 | 6.03E-02 | 9.77E-02 | 8.32E-02
Median | 9.41E-02 | 5.76E-02 | 1.08E-01 | 0.00E+00 | 9.49E-02 | 6.42E-02 | 9.81E-02 | 8.28E-02
Std | 1.60E-03 | 5.69E-04 | 2.54E-04 | 1.90E-02 | 1.26E-03 | 1.16E-02 | 1.35E-03 | 4.24E-03
RWMOP12
Best | 5.70E-01 | 0.00E+00 | 0.00E+00 | 5.47E-01 | 5.61E-01 | 5.55E-01 | 5.45E-01 | 0.00E+00
Worst | 5.62E-01 | 0.00E+00 | 0.00E+00 | 4.67E-01 | 5.59E-01 | 5.22E-01 | 5.07E-01 | 0.00E+00
Average | 5.69E-01 | 0.00E+00 | 0.00E+00 | 5.19E-01 | 5.60E-01 | 5.35E-01 | 5.32E-01 | 0.00E+00
Median | 5.69E-01 | 0.00E+00 | 0.00E+00 | 5.31E-01 | 5.60E-01 | 5.34E-01 | 5.30E-01 | 0.00E+00
Std | 1.64E-03 | 0.00E+00 | 0.00E+00 | 2.71E-02 | 3.38E-04 | 8.61E-03 | 7.53E-03 | 0.00E+00
RWMOP13
Best | 9.86E-02 | 2.47E-02 | 8.09E-02 | 7.69E-02 | 9.00E-02 | 8.55E-02 | 9.00E-02 | 9.11E-02
Worst | 9.82E-02 | 2.46E-02 | 4.38E-03 | 0.00E+00 | 8.91E-02 | 0.00E+00 | 7.20E-02 | 5.45E-02
Average | 9.84E-02 | 2.46E-02 | 5.52E-02 | 2.23E-02 | 8.96E-02 | 3.57E-02 | 8.92E-02 | 7.94E-02
Median | 9.83E-02 | 2.46E-02 | 5.78E-02 | 1.75E-02 | 8.96E-02 | 3.49E-02 | 8.98E-02 | 8.18E-02
Std | 1.29E-04 | 8.62E-06 | 1.80E-02 | 2.16E-02 | 2.18E-04 | 1.95E-02 | 3.25E-03 | 1.05E-02
RWMOP14
Best | 7.16E-01 | 1.29E-01 | 3.29E-01 | 5.93E-01 | 6.19E-01 | 3.52E-01 | 6.06E-01 | 5.88E-01
Worst | 7.15E-01 | 1.29E-01 | 7.00E-02 | 0.00E+00 | 6.14E-01 | 3.42E-01 | 5.80E-01 | 3.37E-01
Average | 7.16E-01 | 1.29E-01 | 1.44E-01 | 3.26E-01 | 6.18E-01 | 3.49E-01 | 5.98E-01 | 4.86E-01
Median | 7.16E-01 | 1.29E-01 | 1.16E-01 | 3.36E-01 | 6.18E-01 | 3.49E-01 | 6.00E-01 | 4.90E-01
Std | 2.69E-04 | 1.79E-07 | 8.06E-02 | 1.12E-01 | 9.97E-04 | 2.38E-03 | 6.11E-03 | 6.02E-02
RWMOP15
Best | 7.85E-01 | 7.53E-01 | 7.55E-01 | 7.55E-01 | 5.43E-01 | 7.50E-01 | 5.28E-01 | 0.00E+00
Worst | 7.84E-01 | 7.53E-01 | 0.00E+00 | 0.00E+00 | 5.42E-01 | 6.02E-01 | 3.57E-01 | 0.00E+00
Average | 7.84E-01 | 7.53E-01 | 2.66E-01 | 3.32E-01 | 5.43E-01 | 7.01E-01 | 4.54E-01 | 0.00E+00
Median | 7.84E-01 | 7.53E-01 | 3.57E-01 | 3.01E-01 | 5.43E-01 | 7.03E-01 | 4.53E-01 | 0.00E+00
Std | 2.05E-04 | 5.69E-09 | 2.10E-01 | 3.21E-01 | 2.47E-04 | 3.31E-02 | 4.47E-02 | 0.00E+00
RWMOP16
Best | 7.54E-01 | 0.00E+00 | 6.95E-01 | 7.56E-01 | 7.64E-01 | 7.63E-01 | 7.64E-01 | 5.75E-01
Worst | 7.43E-01 | 0.00E+00 | 8.94E-02 | 7.46E-01 | 7.63E-01 | 7.61E-01 | 7.54E-01 | 0.00E+00
Average | 7.49E-01 | 0.00E+00 | 4.46E-01 | 7.52E-01 | 7.64E-01 | 7.62E-01 | 7.61E-01 | 3.70E-01
Median | 7.48E-01 | 0.00E+00 | 4.75E-01 | 7.52E-01 | 7.64E-01 | 7.62E-01 | 7.62E-01 | 4.15E-01
Std | 2.78E-03 | 0.00E+00 | 1.68E-01 | 2.63E-03 | 1.60E-04 | 3.57E-04 | 2.00E-03 | 1.50E-01
RWMOP17
Best | 5.48E-01 | 4.37E-01 | 3.37E+00 | 7.97E+07 | 2.73E-01 | 1.50E+12 | 2.88E-01 | 3.24E+00
Worst | 1.20E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.26E-01 | 5.48E-01 | 1.27E-01 | 0.00E+00
Average | 3.92E-01 | 7.24E-02 | 5.38E-01 | 4.11E+06 | 2.64E-01 | 1.87E+11 | 2.07E-01 | 6.23E-01
Median | 4.61E-01 | 3.19E-04 | 4.21E-01 | 5.45E-01 | 2.68E-01 | 5.48E-01 | 2.09E-01 | 5.48E-01
Std | 1.38E-01 | 1.20E-01 | 5.77E-01 | 1.62E+07 | 1.04E-02 | 3.50E+11 | 3.28E-02 | 5.20E-01
RWMOP18
Best | 4.14E-02 | 4.03E-02 | 3.30E-02 | 4.04E-02 | 4.05E-02 | 4.05E-02 | 3.99E-02 | 4.04E-02
Worst | 4.12E-02 | 4.02E-02 | 2.36E-02 | 4.00E-02 | 4.05E-02 | 4.03E-02 | 3.68E-02 | 3.98E-02
Average | 4.13E-02 | 4.03E-02 | 2.94E-02 | 4.03E-02 | 4.05E-02 | 4.04E-02 | 3.85E-02 | 4.02E-02
Median | 4.13E-02 | 4.03E-02 | 3.07E-02 | 4.03E-02 | 4.05E-02 | 4.04E-02 | 3.87E-02 | 4.02E-02
Std | 5.47E-05 | 1.44E-05 | 3.00E-03 | 8.72E-05 | 5.06E-06 | 4.83E-05 | 8.83E-04 | 1.47E-04
RWMOP19
Best | 6.63E-01 | 6.63E-01 | 6.63E-01 | 6.63E-01 | 3.71E-01 | 6.63E-01 | 3.13E-01 | 6.63E-01
Worst | 6.63E-01 | 6.63E-01 | 6.63E-01 | 5.89E-01 | 3.22E-01 | 6.63E-01 | 2.22E-01 | 6.63E-01
Average | 6.63E-01 | 6.63E-01 | 6.63E-01 | 6.57E-01 | 3.43E-01 | 6.63E-01 | 2.60E-01 | 6.63E-01
Median | 6.63E-01 | 6.63E-01 | 6.63E-01 | 6.63E-01 | 3.45E-01 | 6.63E-01 | 2.59E-01 | 6.63E-01
Std | 3.40E-15 | 0.00E+00 | 1.19E-06 | 1.59E-02 | 9.52E-03 | 0.00E+00 | 2.26E-02 | 0.00E+00
RWMOP20
Best | 1.74E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Worst | 1.71E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Average | 1.73E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Median | 1.74E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Std | 4.41E-04 | 4.47E-15 | 3.28E-05 | 2.60E-05 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.50E-07
RWMOP21
Best | 3.02E-02 | 2.93E-02 | 3.16E-02 | 3.17E-02 | 3.18E-02 | 3.18E-02 | 2.84E-02 | 3.17E-02
Worst | 2.85E-02 | 2.93E-02 | 2.99E-02 | 2.04E-02 | 3.17E-02 | 3.18E-02 | 2.41E-02 | 3.15E-02
Average | 2.95E-02 | 2.93E-02 | 3.10E-02 | 2.84E-02 | 3.18E-02 | 3.18E-02 | 2.48E-02 | 3.16E-02
Median | 2.95E-02 | 2.93E-02 | 3.12E-02 | 2.91E-02 | 3.18E-02 | 3.18E-02 | 2.47E-02 | 3.16E-02
Std | 4.13E-04 | 2.94E-06 | 3.98E-04 | 2.35E-03 | 1.38E-06 | 8.07E-07 | 7.17E-04 | 5.90E-05
Table 9. HV results using the chemical engineering problems.
Problem / Algorithm | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA
RWMOP22
Best | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.30E+00 | 1.00E+00 | 2.17E+00 | 1.00E+00
Worst | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 3.24E-01 | 1.00E+00 | 3.41E-01 | 1.00E+00
Average | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 7.15E-01 | 1.00E+00 | 7.85E-01 | 1.00E+00
Median | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 7.07E-01 | 1.00E+00 | 7.34E-01 | 1.00E+00
Std | 4.12E-17 | 0.00E+00 | 3.67E-06 | 0.00E+00 | 1.80E-01 | 0.00E+00 | 3.05E-01 | 0.00E+00
RWMOP23
Best | 9.99E-01 | 9.99E-01 | 9.99E-01 | 9.99E-01 | 7.19E-01 | 9.99E-01 | 8.25E-01 | 9.99E-01
Worst | 9.99E-01 | 9.99E-01 | 9.86E-01 | 9.99E-01 | 9.73E-02 | 9.99E-01 | 9.09E-02 | 9.99E-01
Average | 9.99E-01 | 9.99E-01 | 9.97E-01 | 9.99E-01 | 3.44E-01 | 9.99E-01 | 3.11E-01 | 9.99E-01
Median | 9.99E-01 | 9.99E-01 | 9.99E-01 | 9.99E-01 | 3.65E-01 | 9.99E-01 | 2.90E-01 | 9.99E-01
Std | 5.46E-15 | 8.59E-15 | 3.42E-03 | 4.52E-16 | 1.58E-01 | 4.52E-16 | 1.78E-01 | 4.52E-16
RWMOP24
Best | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 7.47E+05 | 1.00E+00 | 1.00E+00 | 1.00E+00
Worst | 9.94E-01 | 1.00E+00 | 9.96E-01 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Average | 9.99E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 2.49E+04 | 1.00E+00 | 3.00E-01 | 1.00E+00
Median | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00
Std | 1.73E-03 | 2.93E-09 | 7.36E-04 | 0.00E+00 | 1.36E+05 | 0.00E+00 | 4.66E-01 | 0.00E+00
Table 10. HV results using the process design and synthesis problems.

| Problem | Metric | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA |
|---|---|---|---|---|---|---|---|---|---|
| RWMOP25 | Best | 1.00E+00 | 4.02E-01 | 2.33E-01 | 2.34E-01 | 2.41E-01 | 2.36E-01 | 2.41E-01 | 2.31E-01 |
| RWMOP25 | Worst | 9.99E-01 | 2.28E-01 | 2.26E-01 | 2.24E-01 | 2.41E-01 | 2.32E-01 | 2.40E-01 | 2.31E-01 |
| RWMOP25 | Average | 9.99E-01 | 2.56E-01 | 2.29E-01 | 2.29E-01 | 2.41E-01 | 2.35E-01 | 2.41E-01 | 2.31E-01 |
| RWMOP25 | Median | 1.00E+00 | 2.33E-01 | 2.30E-01 | 2.29E-01 | 2.41E-01 | 2.35E-01 | 2.41E-01 | 2.31E-01 |
| RWMOP25 | Std | 4.04E-04 | 5.83E-02 | 1.68E-03 | 2.39E-03 | 8.76E-05 | 1.02E-03 | 2.96E-04 | 1.67E-06 |
| RWMOP26 | Best | 8.49E-01 | 8.45E-01 | 6.98E-01 | 6.59E-01 | 2.00E-01 | 7.28E-01 | 2.05E-01 | 5.65E-01 |
| RWMOP26 | Worst | 8.21E-01 | 8.45E-01 | 0.00E+00 | 0.00E+00 | 1.22E-01 | 0.00E+00 | 9.10E-02 | 0.00E+00 |
| RWMOP26 | Average | 8.39E-01 | 8.45E-01 | 1.53E-01 | 2.29E-01 | 1.54E-01 | 2.78E-01 | 1.51E-01 | 1.51E-01 |
| RWMOP26 | Median | 8.40E-01 | 8.45E-01 | 7.54E-02 | 2.41E-01 | 1.49E-01 | 2.90E-01 | 1.50E-01 | 0.00E+00 |
| RWMOP26 | Std | 6.59E-03 | 9.76E-08 | 2.03E-01 | 1.85E-01 | 2.29E-02 | 2.57E-01 | 4.21E-02 | 2.54E-01 |
| RWMOP27 | Best | 1.90E+00 | 1.93E+02 | 9.33E+13 | 1.19E+04 | 7.88E+11 | 7.62E+10 | 8.97E+12 | 1.22E+02 |
| RWMOP27 | Worst | 1.43E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.99E+08 | 1.00E+00 | 6.02E+07 | 1.00E+00 |
| RWMOP27 | Average | 1.50E+00 | 2.36E+01 | 4.48E+12 | 4.59E+02 | 1.00E+11 | 3.55E+09 | 3.17E+11 | 3.27E+01 |
| RWMOP27 | Median | 1.46E+00 | 1.86E+00 | 1.00E+00 | 1.45E+00 | 5.07E+09 | 8.96E+02 | 1.44E+09 | 1.12E+01 |
| RWMOP27 | Std | 1.05E-01 | 4.38E+01 | 1.83E+13 | 2.17E+03 | 2.05E+11 | 1.41E+10 | 1.64E+12 | 3.92E+01 |
| RWMOP28 | Best | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 5.01E-02 | 1.00E+00 | 4.99E-02 | 1.00E+00 |
| RWMOP28 | Worst | 1.00E+00 | 1.00E+00 | 9.96E-01 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP28 | Average | 1.00E+00 | 1.00E+00 | 9.99E-01 | 9.56E-01 | 4.70E-03 | 1.00E+00 | 2.59E-03 | 1.00E+00 |
| RWMOP28 | Median | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP28 | Std | 1.75E-15 | 2.06E-17 | 9.68E-04 | 1.82E-01 | 1.20E-02 | 0.00E+00 | 9.57E-03 | 0.00E+00 |
| RWMOP29 | Best | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 7.87E-01 | 1.00E+00 | 7.56E-01 | 1.00E+00 |
| RWMOP29 | Worst | 9.93E-01 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 5.59E-01 | 1.00E+00 | 7.89E-02 | 1.00E+00 |
| RWMOP29 | Average | 9.97E-01 | 1.00E+00 | 1.00E+00 | 9.37E-01 | 7.68E-01 | 1.00E+00 | 6.29E-01 | 1.00E+00 |
| RWMOP29 | Median | 9.98E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 7.84E-01 | 1.00E+00 | 7.06E-01 | 1.00E+00 |
| RWMOP29 | Std | 1.75E-03 | 6.25E-07 | 1.82E-10 | 1.93E-01 | 4.26E-02 | 1.15E-07 | 1.78E-01 | 2.51E-07 |
Table 11. HV results using the power electronics problems.

| Problem | Metric | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA |
|---|---|---|---|---|---|---|---|---|---|
| RWMOP30 | Best | 5.41E-01 | 7.81E-01 | 7.36E-01 | 4.78E-01 | 6.82E-01 | 7.43E-01 | 6.54E-01 | 8.00E-01 |
| RWMOP30 | Worst | 2.12E-01 | 2.83E-01 | 3.31E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.08E-01 |
| RWMOP30 | Average | 4.55E-01 | 6.99E-01 | 6.15E-01 | 2.21E-01 | 2.48E-01 | 4.53E-01 | 2.62E-01 | 6.43E-01 |
| RWMOP30 | Median | 4.90E-01 | 7.19E-01 | 6.43E-01 | 2.49E-01 | 0.00E+00 | 5.51E-01 | 2.59E-01 | 6.65E-01 |
| RWMOP30 | Std | 9.89E-02 | 9.94E-02 | 9.80E-02 | 1.59E-01 | 2.77E-01 | 2.46E-01 | 2.66E-01 | 1.30E-01 |
| RWMOP31 | Best | 7.96E-01 | 9.18E-01 | 9.04E-01 | 8.69E-01 | 7.40E-01 | 8.94E-01 | 7.80E-01 | 9.08E-01 |
| RWMOP31 | Worst | 7.16E-01 | 3.13E-01 | 4.14E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.63E-01 |
| RWMOP31 | Average | 7.58E-01 | 8.32E-01 | 8.32E-01 | 6.81E-01 | 1.33E-01 | 7.83E-01 | 2.22E-01 | 8.65E-01 |
| RWMOP31 | Median | 7.59E-01 | 8.78E-01 | 8.58E-01 | 7.08E-01 | 0.00E+00 | 8.42E-01 | 5.27E-02 | 8.70E-01 |
| RWMOP31 | Std | 2.12E-02 | 1.27E-01 | 1.12E-01 | 1.73E-01 | 2.39E-01 | 1.84E-01 | 2.92E-01 | 3.29E-02 |
| RWMOP32 | Best | 7.26E-01 | 9.14E-01 | 8.95E-01 | 7.89E-01 | 8.45E-01 | 8.54E-01 | 7.89E-01 | 8.86E-01 |
| RWMOP32 | Worst | 6.40E-01 | 3.96E-01 | 5.12E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.65E-01 |
| RWMOP32 | Average | 7.06E-01 | 8.50E-01 | 8.27E-01 | 5.28E-01 | 3.89E-01 | 7.05E-01 | 1.81E-01 | 8.12E-01 |
| RWMOP32 | Median | 7.15E-01 | 8.73E-01 | 8.40E-01 | 5.79E-01 | 5.14E-01 | 7.65E-01 | 0.00E+00 | 8.47E-01 |
| RWMOP32 | Std | 2.26E-02 | 9.03E-02 | 6.58E-02 | 2.24E-01 | 3.56E-01 | 2.06E-01 | 3.12E-01 | 9.07E-02 |
| RWMOP33 | Best | 6.46E-01 | 9.12E-01 | 8.60E-01 | 7.39E-01 | 0.00E+00 | 8.62E-01 | 0.00E+00 | 8.74E-01 |
| RWMOP33 | Worst | 4.94E-01 | 6.33E-01 | 4.30E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.82E-01 |
| RWMOP33 | Average | 6.03E-01 | 8.48E-01 | 8.02E-01 | 4.48E-01 | 0.00E+00 | 6.02E-01 | 0.00E+00 | 8.12E-01 |
| RWMOP33 | Median | 6.21E-01 | 8.72E-01 | 8.23E-01 | 5.27E-01 | 0.00E+00 | 7.47E-01 | 0.00E+00 | 8.36E-01 |
| RWMOP33 | Std | 4.01E-02 | 5.80E-02 | 7.64E-02 | 2.43E-01 | 0.00E+00 | 3.03E-01 | 0.00E+00 | 5.26E-02 |
| RWMOP34 | Best | 7.52E-01 | 9.13E-01 | 8.82E-01 | 8.07E-01 | 0.00E+00 | 8.69E-01 | 0.00E+00 | 8.92E-01 |
| RWMOP34 | Worst | 6.08E-01 | 4.62E-01 | 4.36E-01 | 2.23E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.49E-01 |
| RWMOP34 | Average | 7.13E-01 | 8.47E-01 | 8.25E-01 | 5.65E-01 | 0.00E+00 | 7.08E-01 | 0.00E+00 | 8.36E-01 |
| RWMOP34 | Median | 7.47E-01 | 8.77E-01 | 8.42E-01 | 5.48E-01 | 0.00E+00 | 8.07E-01 | 0.00E+00 | 8.44E-01 |
| RWMOP34 | Std | 4.69E-02 | 8.92E-02 | 7.68E-02 | 1.42E-01 | 0.00E+00 | 2.35E-01 | 0.00E+00 | 4.23E-02 |
| RWMOP35 | Best | 9.18E-01 | 9.76E-01 | 9.67E-01 | 9.33E-01 | 7.19E-01 | 9.61E-01 | 6.52E-01 | 9.71E-01 |
| RWMOP35 | Worst | 8.99E-01 | 9.51E-01 | 9.32E-01 | 5.14E-01 | 0.00E+00 | 2.06E-01 | 0.00E+00 | 8.73E-01 |
| RWMOP35 | Average | 9.17E-01 | 9.66E-01 | 9.51E-01 | 8.64E-01 | 1.62E-01 | 8.87E-01 | 1.91E-01 | 9.49E-01 |
| RWMOP35 | Median | 9.18E-01 | 9.66E-01 | 9.52E-01 | 8.89E-01 | 5.75E-02 | 9.39E-01 | 1.13E-01 | 9.52E-01 |
| RWMOP35 | Std | 3.95E-03 | 5.45E-03 | 9.34E-03 | 7.97E-02 | 2.26E-01 | 1.60E-01 | 2.24E-01 | 1.75E-02 |
Table 12. HV results using the power system optimization problems.

| Problem | Metric | MOGSK | MOEAD | eMOEA | MOPSO | NSGAII | SPEA2 | KnEA | GrEA |
|---|---|---|---|---|---|---|---|---|---|
| RWMOP36 | Best | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP36 | Worst | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP36 | Average | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP36 | Median | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP36 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP37 | Best | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP37 | Worst | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP37 | Average | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP37 | Median | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP37 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP38 | Best | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP38 | Worst | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP38 | Average | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP38 | Median | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP38 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP39 | Best | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP39 | Worst | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP39 | Average | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP39 | Median | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP39 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP40 | Best | 6.17E+01 | 8.35E-01 | 9.18E-01 | 8.63E+00 | 5.12E+00 | 2.02E+00 | 4.87E+00 | 8.26E-01 |
| RWMOP40 | Worst | 3.14E-01 | 7.55E-01 | 8.08E-02 | 0.00E+00 | 0.00E+00 | 2.34E-01 | 0.00E+00 | 5.19E-01 |
| RWMOP40 | Average | 1.54E+01 | 7.91E-01 | 6.27E-01 | 1.09E+00 | 1.25E+00 | 8.10E-01 | 1.33E+00 | 7.10E-01 |
| RWMOP40 | Median | 9.74E+00 | 7.93E-01 | 6.53E-01 | 3.96E-01 | 6.41E-02 | 8.10E-01 | 8.24E-02 | 7.08E-01 |
| RWMOP40 | Std | 1.69E+01 | 1.98E-02 | 2.44E-01 | 2.29E+00 | 1.82E+00 | 3.01E-01 | 1.66E+00 | 6.98E-02 |
| RWMOP41 | Best | 9.89E-01 | 2.99E-01 | 4.89E+01 | 9.93E+00 | 0.00E+00 | 2.36E+01 | 0.00E+00 | 3.20E+01 |
| RWMOP41 | Worst | 5.65E-02 | 0.00E+00 | 1.97E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.80E-01 |
| RWMOP41 | Average | 8.42E-01 | 1.07E-01 | 6.51E+00 | 3.41E-01 | 0.00E+00 | 1.66E+00 | 0.00E+00 | 3.23E+00 |
| RWMOP41 | Median | 9.55E-01 | 1.18E-01 | 8.62E-01 | 0.00E+00 | 0.00E+00 | 4.83E-01 | 0.00E+00 | 8.27E-01 |
| RWMOP41 | Std | 2.26E-01 | 9.03E-02 | 1.35E+01 | 1.81E+00 | 0.00E+00 | 5.01E+00 | 0.00E+00 | 7.71E+00 |
| RWMOP42 | Best | 0.00E+00 | 9.83E-01 | 6.98E-01 | 0.00E+00 | 0.00E+00 | 6.93E-01 | 0.00E+00 | 9.21E-01 |
| RWMOP42 | Worst | 0.00E+00 | 8.56E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP42 | Average | 0.00E+00 | 9.39E-01 | 3.80E-02 | 0.00E+00 | 0.00E+00 | 2.31E-02 | 0.00E+00 | 1.98E-01 |
| RWMOP42 | Median | 0.00E+00 | 9.49E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP42 | Std | 0.00E+00 | 3.39E-02 | 1.48E-01 | 0.00E+00 | 0.00E+00 | 1.27E-01 | 0.00E+00 | 3.18E-01 |
| RWMOP43 | Best | 9.93E-01 | 1.00E+00 | 9.99E-01 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP43 | Worst | 4.63E-01 | 1.00E+00 | 9.85E-01 | 0.00E+00 | 0.00E+00 | 9.97E-01 | 0.00E+00 | 9.99E-01 |
| RWMOP43 | Average | 8.67E-01 | 1.00E+00 | 9.95E-01 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP43 | Median | 8.95E-01 | 1.00E+00 | 9.96E-01 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP43 | Std | 1.22E-01 | 9.19E-05 | 3.26E-03 | 0.00E+00 | 0.00E+00 | 5.10E-04 | 0.00E+00 | 1.68E-04 |
| RWMOP44 | Best | 1.00E+00 | 9.98E-01 | 9.67E-01 | 0.00E+00 | 0.00E+00 | 9.06E-01 | 0.00E+00 | 9.40E-01 |
| RWMOP44 | Worst | 0.00E+00 | 9.29E-01 | 2.90E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP44 | Average | 9.17E-01 | 9.79E-01 | 6.66E-01 | 0.00E+00 | 0.00E+00 | 4.73E-01 | 0.00E+00 | 5.05E-01 |
| RWMOP44 | Median | 1.00E+00 | 9.81E-01 | 6.55E-01 | 0.00E+00 | 0.00E+00 | 5.14E-01 | 0.00E+00 | 5.66E-01 |
| RWMOP44 | Std | 2.36E-01 | 1.52E-02 | 1.42E-01 | 0.00E+00 | 0.00E+00 | 2.69E-01 | 0.00E+00 | 2.68E-01 |
| RWMOP45 | Best | 1.00E+00 | 1.00E+00 | 9.99E-01 | 0.00E+00 | 0.00E+00 | 9.99E-01 | 0.00E+00 | 1.00E+00 |
| RWMOP45 | Worst | 1.15E-01 | 1.00E+00 | 9.30E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 8.92E-01 |
| RWMOP45 | Average | 9.03E-01 | 1.00E+00 | 9.81E-01 | 0.00E+00 | 0.00E+00 | 8.24E-01 | 0.00E+00 | 9.84E-01 |
| RWMOP45 | Median | 9.80E-01 | 1.00E+00 | 9.87E-01 | 0.00E+00 | 0.00E+00 | 9.44E-01 | 0.00E+00 | 9.96E-01 |
| RWMOP45 | Std | 1.98E-01 | 1.77E-05 | 1.73E-02 | 0.00E+00 | 0.00E+00 | 2.94E-01 | 0.00E+00 | 2.71E-02 |
| RWMOP46 | Best | 9.99E-01 | 1.00E+00 | 9.88E-01 | 0.00E+00 | 0.00E+00 | 6.79E-01 | 0.00E+00 | 1.00E+00 |
| RWMOP46 | Worst | 1.13E-01 | 7.71E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP46 | Average | 8.75E-01 | 9.76E-01 | 5.72E-01 | 0.00E+00 | 0.00E+00 | 1.91E-01 | 0.00E+00 | 6.21E-01 |
| RWMOP46 | Median | 9.97E-01 | 9.93E-01 | 6.69E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.51E-01 |
| RWMOP46 | Std | 2.54E-01 | 4.80E-02 | 3.16E-01 | 0.00E+00 | 0.00E+00 | 2.45E-01 | 0.00E+00 | 3.37E-01 |
| RWMOP47 | Best | 9.01E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP47 | Worst | 1.72E-01 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP47 | Average | 2.05E-01 | 1.00E+00 | 1.00E+00 | 9.67E-01 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP47 | Median | 1.74E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP47 | Std | 1.33E-01 | 0.00E+00 | 0.00E+00 | 1.83E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP48 | Best | 9.87E-01 | 1.00E+00 | 1.00E+00 | 9.87E-01 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP48 | Worst | 7.31E-01 | 1.00E+00 | 7.75E-01 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP48 | Average | 9.27E-01 | 1.00E+00 | 9.92E-01 | 2.63E-01 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP48 | Median | 9.43E-01 | 1.00E+00 | 1.00E+00 | 0.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 1.00E+00 |
| RWMOP48 | Std | 6.19E-02 | 3.46E-05 | 4.12E-02 | 3.80E-01 | 0.00E+00 | 7.51E-06 | 0.00E+00 | 2.05E-05 |
| RWMOP49 | Best | 1.52E-01 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.15E-01 | 0.00E+00 | 0.00E+00 |
| RWMOP49 | Worst | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP49 | Average | 9.55E-02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.38E-02 | 0.00E+00 | 0.00E+00 |
| RWMOP49 | Median | 9.99E-02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| RWMOP49 | Std | 3.34E-02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.31E-01 | 0.00E+00 | 0.00E+00 |
| RWMOP50 | Best | 6.11E-01 | 6.11E-01 | 6.11E-01 | 6.11E-01 | 1.38E-02 | 6.11E-01 | 1.57E-02 | 6.11E-01 |
| RWMOP50 | Worst | 6.11E-01 | 6.11E-01 | 6.11E-01 | 6.07E-01 | 9.59E-03 | 6.07E-01 | 5.75E-03 | 6.08E-01 |
| RWMOP50 | Average | 6.11E-01 | 6.11E-01 | 6.11E-01 | 6.11E-01 | 1.18E-02 | 6.09E-01 | 1.18E-02 | 6.09E-01 |
| RWMOP50 | Median | 6.11E-01 | 6.11E-01 | 6.11E-01 | 6.11E-01 | 1.18E-02 | 6.09E-01 | 1.22E-02 | 6.09E-01 |
| RWMOP50 | Std | 5.47E-07 | 1.70E-10 | 1.13E-16 | 7.53E-04 | 9.39E-04 | 9.61E-04 | 1.64E-03 | 6.27E-04 |

Share and Cite

MDPI and ACS Style

Chalabi, N.E.; Attia, A.; Alnowibet, K.A.; Zawbaa, H.M.; Masri, H.; Mohamed, A.W. A Multi–Objective Gaining–Sharing Knowledge-Based Optimization Algorithm for Solving Engineering Problems. Mathematics 2023, 11, 3092. https://doi.org/10.3390/math11143092

