Article

A Distributed Bi-Behaviors Crow Search Algorithm for Dynamic Multi-Objective Optimization and Many-Objective Optimization Problems

1 University of Sousse, ISITCom, Sousse 4011, Tunisia
2 REGIM Lab: REsearch Groups in Intelligent Machines, University of Sfax, National Engineering School of Sfax (ENIS), BP 1173, Sfax 3038, Tunisia
3 High Institute of Applied Science and Technology of Sousse, University of Sousse, Sousse 4003, Tunisia
4 College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
5 Yonsei Frontier Lab, Yonsei University, Seoul 03722, Korea
6 Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Brisbane 2607, Australia
7 Department of Electrical and Electronic Engineering Science, Faculty of Engineering and the Built Environment, University of Johannesburg, Johannesburg 2006, South Africa
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9627; https://doi.org/10.3390/app12199627
Submission received: 4 August 2022 / Revised: 15 September 2022 / Accepted: 17 September 2022 / Published: 25 September 2022
(This article belongs to the Special Issue Recent Advances in Machine Learning and Computational Intelligence)

Abstract

Dynamic Multi-Objective Optimization Problems (DMOPs) and Many-Objective Optimization Problems (MaOPs) are two classes of optimization problems with potential applications in engineering. Hybrid approaches based on modified Multi-Objective Evolutionary Algorithms seem suitable for dealing effectively with such problems. However, the standard Crow Search Algorithm has not been considered for either DMOPs or MaOPs to date. This paper proposes a Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) with two different mechanisms, one corresponding to the search behavior and another to the exploitative behavior, coupled with a dynamic switch mechanism. The bi-behaviors CSA chasing profile is defined based on a large Gaussian-like Beta-1 function, which ensures diversity enhancement, while the narrow Gaussian Beta-2 function is used to improve solution tuning and convergence behavior. Two variants of the proposed DB-CSA approach are developed: the first variant is used to solve a set of MaOPs with 2, 3, 5, 7, 8, 10 and 15 objectives, and the second aims to solve several types of DMOPs with different time-varying Pareto optimal sets and Pareto optimal fronts. The second variant of the DB-CSA algorithm (DB-CSA-II) includes a dynamic optimization process to effectively detect and react to dynamic changes. The Inverted Generational Distance, the Mean Inverted Generational Distance and the Hypervolume Difference are the main metrics used to compare the DB-CSA approach to the state-of-the-art MOEAs. The Taguchi method has been used to manage the meta-parameters of the DB-CSA algorithm. All quantitative results are analyzed using the non-parametric Wilcoxon signed rank test at the 0.05 significance level, which validated the efficiency of the proposed method for solving 44 test beds (21 DMOPs and 23 MaOPs).

1. Introduction

During the last decade, a wide range of meta-heuristics have been designed to solve complex problems, based on Evolutionary Algorithms (EAs), such as the Genetic Algorithm (GA) [1], and Swarm Intelligence (SI) approaches, such as the Particle Swarm Optimization (PSO) algorithm [2,3,4,5]. Different Multi-Objective Evolutionary Algorithms (MOEAs) have been employed to solve static Single-Objective Optimization Problems (SOPs) and static Multi-Objective Optimization Problems (MOPs), where the main challenge is to find the best solution for an SOP and a set of optimal solutions for an MOP that balances convergence and diversity in the search space. However, this process becomes more challenging when solving Dynamic Multi-Objective Optimization Problems (DMOPs), characterized by several types of time-varying Pareto Optimal Sets (POSs) and Pareto Optimal Fronts (POFs) [6]. Equations (1) and (2) illustrate the main differences between the static Multi-Objective Optimization Problem (MOP), the static Many-Objective Optimization Problem (MaOP) and the dynamic Multi-Objective Optimization Problem (DMOP). First, the common characteristics are as follows:
  • Each problem (MOP, MaOP, DMOP) has a fitness function F(X), which is minimized or maximized over M objectives simultaneously.
  • Each problem has a set of solutions, represented by a set of bounded decision variables X in a d-dimensional search space generated between the minimum (X_min) and maximum (X_max) boundaries.
  • The search space of each problem can be limited by a set of inequality and equality constraints.
$$
\begin{cases}
F(X) = \big(f_1(X), \ldots, f_M(X)\big) \\
\text{subject to: } g(X_i) \leq 0 \ \text{or} \ g(X_i) \geq 0, \quad h(X_i) = 0, \quad i = 1, \ldots, d \\
X \in [X_{min}, X_{max}], \quad 1 < M \leq 3
\end{cases}
\tag{1}
$$
where F(X) is the fitness function, M is the number of objectives, and g(X) and h(X) are the inequality and the equality constraints, respectively.
The main difference between the static MOP, the MaOP and the DMOP lies in the number of objectives, the nature of the decision space, the objective space, and the constraints.
  • For the MOP and the DMOP, the number of objectives M is fixed to 2 or 3 (1 < M ≤ 3), whereas for the MaOP, M is greater than 3 (M > 3).
  • In contrast to the static MOP, the DMOP has time-varying decision variables, objectives and/or constraints, as presented in Equation (2).
  • The DMOP has M dynamic objective functions F(X, t) and a set of time-varying variables X(t) = (X_1(t), …, X_n(t)), which are subject to different bounded constraints limiting the search space.
$$
\begin{cases}
F(X, t) = \big(f_1(X, t), f_2(X, t), \ldots, f_M(X, t)\big) \\
\text{subject to: } g_i(X, t) \leq 0, \quad h_j(X, t) = 0, \quad i = 1, \ldots, d_g(t), \quad j = 1, \ldots, d_h(t) \\
X \in [X_{min}, X_{max}], \quad t \in [t_{begin}, t_{end}], \quad X \in \Omega_X, \quad t \in \Omega_t
\end{cases}
\tag{2}
$$
where M is the number of conflicting objective functions, d_g(t) and d_h(t) are the numbers of inequality and equality constraints at time t, respectively, and X is a set of bounded decision variables in a d-dimensional search space generated between the minimum boundary (X_min) and the maximum boundary (X_max). F(X, t) is the objective vector that evaluates solution X at time t. Ω_X ⊆ R^n is the decision space, and Ω_t ⊆ R is the time space, bounded between the starting time t_begin and the ending time t_end. The objective vector is denoted by F(X, t): Ω_X × Ω_t → R^M, giving the resulting objective values for each solution X at time t.
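To make the DMOP formulation concrete, the following minimal Python sketch evaluates an FDA1-like benchmark of type I (the POS moves with time while the POF shape is unchanged), following the usual definition in [6]; the function and variable names are illustrative only.

```python
import math

def fda1(x, t):
    """Evaluate an FDA1-like bi-objective DMOP at time t.
    x[0] lies in [0, 1]; the remaining variables lie in [-1, 1]."""
    G = math.sin(0.5 * math.pi * t)                # time-dependent optimum of x[1:]
    f1 = x[0]                                      # first objective
    g = 1.0 + sum((xi - G) ** 2 for xi in x[1:])   # distance to the moving POS
    f2 = g * (1.0 - math.sqrt(f1 / g))             # second objective
    return f1, f2

# The same decision vector yields different objective values as t changes.
x = [0.5, 0.2, 0.2]
print(fda1(x, t=0.0), fda1(x, t=0.5))
```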
Generally speaking, Dynamic Multi-Objective Evolutionary Algorithms (DMOEAs) are designed to effectively detect and react to the changes that may affect the POS and POF, while preserving both convergence and diversity [7,8]. Evolutionary Dynamic Optimization (EDO) approaches include explicit or implicit mechanisms to detect and correctly react to dynamic changes. A change-detection mechanism can be maintained through detectors drawn from the feasible search population, such as the current best solutions, a memory of optimal solutions or a predefined sub-population. Alternatively, detection can be performed separately from the search population using a set of randomly selected solutions, a fixed point, a regular grid of solutions or a set of predetermined points. Detection strategies can also monitor algorithm behavior, for example the average of the best-found solutions, time-varying observations of different sub-swarms, the diversity of solutions relative to the success rate, time-varying distributions and statistical methods. Increasing the mutation rate (hyper-mutation), adding new random members and relocating useful solutions are the main mechanisms used to manage the loss of diversity in a dynamic search space, where potential solutions may fall within undetected regions.
The efficiency of standard MOEAs decreases significantly when dealing with Many-Objective Optimization Problems (MaOPs), where the number of objectives to be satisfied is higher than 3. Three main issues arise when solving MaOPs: (i) the ineffectiveness of the dominance operator when dealing with a large number of objectives, (ii) the loss of diversity and premature convergence, and (iii) the exponential increase in the required population size. The Crow Search Algorithm (CSA) [9] is a meta-heuristic that simulates the social food-searching behavior of crows. Crows are characterized by their ability to memorize food sources, as well as sources that other members of the flock may hold or hide. The CSA algorithm was first proposed as a mono-objective optimization technique and then extended to solve static Multi-Objective Problems (MOPs) and constrained engineering optimization problems, in which the algorithm showed relative effectiveness in comparison with techniques such as harmony search (HS) [10], the GA [1] and the PSO approach. The main contributions are as follows:
  • This contribution presents a novel Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) to solve both Dynamic Multi-Objective Optimization Problems (DMOPs) and static Many-Objective Optimization Problems (MaOPs), which have not yet been addressed using the standard CSA algorithm.
  • The main difference between the original CSA algorithm and the new DB-CSA algorithm lies in the position-update rule: the proposed DB-CSA approach introduces two new chasing profiles, denoted Beta distribution profiles, based on the large Gaussian Beta-1 function for diversity enhancement and the narrow Gaussian Beta-2 function for convergence improvement.
  • The proposed approach aims to achieve a dynamic balance between exploitation and exploration at each iteration of the optimization process, which makes it suitable for both dynamic multi-objective optimization and many-objective optimization.
  • A dynamic optimization mechanism is considered within the proposed DB-CSA algorithm when solving DMOPs to detect the time-varying POS and POF and to react effectively to dynamic changes; this variant is denoted DB-CSA-II. The process aims to manage and control the dynamic change in a time-varying search space.
The remainder of this manuscript is organized as follows: Section 2 presents an overview of the best-known Dynamic Multi-Objective Optimization methods, Many-Objective Optimization approaches and some existing Crow Search Algorithm-based methods. Section 3 presents the proposed Distributed Bi-behaviors Crow Search Algorithm (DB-CSA). Section 4 details the experimental evaluation, which is based on two comparative studies: one for DMOPs and the second for MaOPs. The results are reported in terms of their mean and standard deviation. Then, a statistical comparison between the proposed DB-CSA algorithm and the state-of-the-art methods is carried out using the non-parametric Wilcoxon signed rank test. Finally, Section 5 concludes this paper and presents some future work.

2. State-of-the-Art on Evolutionary Multi-Objective Optimization

This section reviews a set of comparable DMOEAs and MaOEAs, designed for Dynamic Multi-Objective Optimization and Many-Objective Optimization, in Section 2.1 and Section 2.2, respectively. In addition, crow search-based methods are presented in Section 2.3.

2.1. Dynamic Multi-Objective Optimization Methods

Several Dynamic Multi-Objective Evolutionary Algorithms (DMOEAs) have been designed in the literature to solve DMOPs with time-varying objectives, variables or constraints. A selection of these is summarized in Table 1. Five groups of DMOEAs are available in the literature for solving DMOPs: diversity-based techniques, memory-based approaches, prediction methods, parallel systems and transfer-learning-based algorithms. Diversity-based approaches [1] have shown the ability to solve dynamic problems with continuous and small time-varying parameters, but show their limits when faced with severe environmental changes. Furthermore, some DMOPs present periodical or recurrent changes, making the storage of historical solutions useful for preserving diversity. The dynamic non-dominated sorting genetic algorithm II (DNSGA-II) [1] was proposed to enhance the diversity of solutions when solving DMOPs. In DNSGA-II, a set of solutions is randomly selected for use as detectors and re-evaluated after each change. Then, if a change is detected, the selected solutions are re-initialized or hyper-mutated.
Memory-based approaches use redundant representations within an evolutionary algorithm, using extra memory components to detect future changes [11]. These approaches are very effective for solving DMOPs with periodically time-varying properties. Such mechanisms strengthen diversity in EDO approaches but slow down convergence. The main disadvantage of memory-based algorithms is the ineffectiveness of the redundant solutions stored in the archive. Prediction-based methods, in contrast, tend to predict changes based on limited patterns. Such a system can quickly detect the best global solution but fails when the changes are stochastic, which increases the relative training error rates. The Steady-State and Generational Evolutionary Algorithm (SGEA) [11] was designed to effectively detect and react to changes in a steady-state manner. If a change is detected, a number of good solutions are re-used in the next processing step; then, a combination of previous and new solutions is used to approximate the new Pareto optimal front.
Parallel approaches distribute the optimization process over multiple sub-swarms, which can handle the problem over separate search spaces and are recommended for multi-modal problems, although they are computationally expensive. A key challenge for these methods is finding an appropriate number of sub-swarms and their sizes. The Competitive-Cooperative Co-evolutionary Algorithm (dCOEA) in [12] aims to track the time-varying POF based on the decomposition of the optimization process. However, only the winners of each sub-population are used to manage the optimal solutions. MOEA/D [13] is a decomposition-based approach, which subdivides the population into several sub-populations and solves many sub-problems separately and simultaneously, making the MOEA/D system slower and more time-consuming.
Prediction-based methods were developed based on machine-learning algorithms and can efficiently determine and optimize the initial population based on previous experience. However, their main limits are the insufficiency of useful knowledge at the beginning of the optimization process and their time consumption. The population prediction strategy (PPS) [14] is a prediction-based method, which divides the non-dominated solutions into a center point and a manifold; both are then used to predict the future center point and manifold, respectively. When a change is detected, the population is re-initialized based on these predictions. Transfer-learning-based techniques are reliable alternatives for DMOPs, typically using MOEA/D [13] as a baseline system. In 2020, the memory-driven manifold transfer-learning-based evolutionary algorithm (MMTL-MOEA/D) [15] was proposed. This approach combines a memory mechanism to preserve the previous best solutions with manifold transfer learning to estimate the best solutions, so that the best solutions are conserved and set as the initial population of the next generation.
Table 1. Classification of the MOEAs for DMOPs.

Diversity-based approaches: DNSGA-II [1]
Memory-based approaches: SGEA [11]
Prediction-based methods: PPS [14]
Parallel approaches: MOEA/D [13], dCOEA [12]
Transfer-learning-based methods: MMTL-MOEA/D [15], RI-MOEA/D [15], SVR-MOEA/D [16], Tr-MOEA/D [17], KF-MOEA/D [18]
The random re-initialization mechanism in RI-MOEA/D [15] re-initializes 10% of the population after each change to maintain diversity. A combination of PPS [14] and MOEA/D is considered in the PPS-MOEA/D algorithm to solve DMOPs. The support vector regression-based evolutionary algorithm (SVR-MOEA/D) proposed in [16] is designed to model the nonlinear correlation between two historical optimization processes; the SVR is used to predict a new population after each change in the search space. A transfer-learning-based dynamic multi-objective evolutionary algorithm (Tr-MOEA/D) is proposed in [17], aiming to handle non-independent and non-identically distributed data in a dynamic environment. The Tr-MOEA/D system implements a transfer-learning mechanism to reuse the previous historical population after each change, which speeds up the optimization process. In the KF-MOEA/D [18] system, a Kalman filter (KF) is used to predict a new population after each change. Transfer-learning-based methods adapt a set of machine-learning techniques to improve the performance of heuristics when solving DMOPs, re-using previous computational experience to improve the efficiency of the newly generated populations after each detected change. However, this category of methods presents a major limit in the parameter-tuning procedure, which is time-consuming and requires trial and error [17,18].

2.2. Many-Objective Optimization Methods

Many-objective optimization algorithms are designed to manage the issues related to exploitation and exploration in order to preserve convergence and diversity in the search space. Table 2 presents several Many-Objective Evolutionary Algorithms (MaOEAs), classified into five classes, namely, decomposition-based methods, indicator-based approaches, diversity-based selection criteria, modified dominance relation-based approaches and preference-based approaches. Many Pareto-based approaches show their limits in ranking and determining the set of non-dominated solutions using the dominance operator, since with a large number of objectives most solutions become non-dominated, leading to the poor convergence implied by the Active Diversity Promotion (ADP) phenomenon [19]. As a solution, a variety of enhancements have been adopted in the original MOEAs when solving MaOPs, including decomposition-based and indicator-based approaches. Decomposition mechanisms combine multiple objectives into a single problem or a series of sub-problems. Some popular techniques of this type are Pareto sampling [20], improved Pareto sampling (MSOPS-II) [21] and the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) [13]. The decomposition-based approach becomes more effective with a set of sub-MOPs, such as those presented in the Reference Vector-Guided Evolutionary Algorithm (RVEA) [22], MOEA/D-M2M [23], NSGA-III [24], MOEA/DD [25] and MOEA/D-ROD [26]. In addition, a set of performance metrics is used to guide the optimization process in different indicator-based approaches, such as the fast hypervolume-based evolutionary algorithm (HypE) [27], the S-metric-selection-based evolutionary multi-objective algorithm (SMS-EMOA) [28], the indicator-based evolutionary algorithm (IBEA) [29], the evolutionary many-objective optimization algorithm based on an IGD indicator with region decomposition [30] and MaOEA/IGD [31].
A set of modified dominance operators has been proposed to deal with the ineffectiveness of sorting non-dominated solutions with the standard dominance operator of Pareto-based methods. The best-known novel dominance relations are the following: L-optimality [32], ε-dominance [33], fuzzy dominance [34], the Grid-based Evolutionary Algorithm (GrEA) [35], the θ-Dominance-based Evolutionary Algorithm (θ-DEA) [36] and the preference order ranking procedure [37]. Diversity management techniques are proposed to maintain a good balance between convergence and diversity when solving MaOPs. In [35], three grid-based criteria were proposed to maintain diversity, including the grid crowding distance, the grid coordinate point distance and the grid ranking. A diversity promotion mechanism, DM, is introduced in [38] to activate or deactivate the diversity of the population based on the spread and the crowding distance of solutions. In the NSGA-III algorithm [24], a reference point-based strategy is used to solve MaOPs. The shift-based density estimation (SDE) strategy [39] was utilized to replace the dominance operators of MOEAs. The knee point-driven evolutionary algorithm (KnEA) [40] was developed using both knee point-based selection and dominance-based selection. Three groups of preference-based approaches, including prior algorithms, interactive algorithms and posterior algorithms, were employed to deal with the issue of population size limitations in a large-dimensional objective space. The best-known posterior approaches are the Preference-Inspired Co-evolutionary Algorithm (PICEA-g) [41], the two-archive algorithm (TAA) [42], and its improved version (Two_Arch2) [43].
In addition, the Particle Swarm Optimization (PSO) algorithm has received great attention for MaOPs. The Control of the Dominance Area of Solutions (CDAS) [44] has been used with SMPSO and SigmaMOPSO for MaOPs. Indicator-based PSO systems were proposed to guide the leader selection using the R2 indicator, as in H-MOPSO [45], or the hypervolume metric, as in S-MOPSO [46]. A two-stage strategy and a parallel cell coordinate system were adopted in MaOPSO/2s-pccs [47]. A preference-based PSO method focusing on solutions around the knee point, called knee-driven particle swarm optimization (KnPSO), was proposed in [48]. In [49], the MaPSO method selects leaders from a certain number of historical solutions using scalar projection. In addition, the HGLSS-MOPSO algorithm [50] adopted the Hybrid Global Leader Selection (HGLSS) with two global leader selection mechanisms: the first for exploration and the second for exploitation. A recently published paper [51] presented an adaptive localized decision variable analysis approach under the decomposition-based framework to solve Large-Scale Multi-Objective Optimization Problems and Multi-Tasking Optimization Problems in the many-objective setting. In conclusion, all the abovementioned Many-Objective Evolutionary Algorithms (MaOEAs) are highly complex and time-consuming systems, especially when using decomposition-based mechanisms and/or quality indicators to deal with convergence and diversity separately.
A high number of objectives can be managed through a decomposition-based mechanism to obtain a set of single- or multi-objective problems. The multiple single-objective Pareto sampling (MSOPS) [20] algorithm generates a set of target vectors and then performs multiple single-objective optimization processes to solve the MaOP. Such a strategy may return moderate results, since it does not fully capture the multi-objective nature of MaOPs. The enhanced MSOPS-II [19] uses a set of target vectors based on the current population to guide the optimization process at each iteration; the aggregation of fitness functions is then used to evaluate the performance of the proposed solutions. The MOEA/D [13] algorithm decomposes the many objectives into a set of single objectives using uniformly distributed weight vectors. Similar to the weight vectors of the MOEA/D algorithm, NSGA-III [24] uses a number of well-spread reference points to approximate non-dominated solutions and enhance the diversity of the population. Building on the main idea of NSGA-II, the reference vector-guided evolutionary algorithm (RVEA) [22] adopts two reference vector mechanisms: one for selection and the second for adaptation. In the RVEA system, convergence and diversity are dynamically balanced using the Angle-Penalized Distance (APD).
A vector angle-based evolutionary algorithm (VaEA) [52] was proposed for unconstrained MaOPs. The VaEA algorithm uses the maximum vector angle as a selection mechanism to guarantee a good distribution of the approximation of the true Pareto front, while the worst solutions are replaced with newly generated random solutions. The θ-DEA [36] system is based on NSGA-III with a new θ-non-dominance concept, which differs from the original dominance operator used in Pareto-based methods; it employs a set of reference points to cluster the solution set and enhance the exploration phase. NSGA-II/SDR is a modified version of NSGA-II with a Strengthened Dominance Relation (SDR), presented in [53] to solve MaOPs. NSGA-II/SDR adopts an angle- and niching-based mechanism to select the best converged solutions. MOEA/DD, an MOEA based on dominance and decomposition [25], is a hybridization of MOEA/D [13] and NSGA-III [24], where the multiple objectives are decomposed into sub-problems and a dominance criterion is used to aggregate the global solution. Different grid-based criteria, such as the grid crowding distance (GCD), the grid ranking (GR) and the grid coordinate point distance (GCPD), are integrated into MOEAs to evaluate the fitness of MaOP solutions. In addition, the GrEA system [35] is designed to maintain a good balance between convergence and diversity, using both the grid dominance and the grid difference to evaluate the fitness function and push the system toward the most optimal solutions. Two variants of the Pareto-based evolutionary algorithm using a penalty mechanism (PMEA) are presented in [54]: PMEA-MA and PMEA*-MA. PMEA-MA is developed using the Manhattan distance and the cosine distance as convergence and distribution metrics, respectively, and includes a population-preprocessing step to enhance diversity. The second variant, PMEA*-MA, is a simplified one, which does not adopt the preprocessing step.
The angle-based selection algorithm AnD [55] is a non-Pareto-based method that maintains the diversity of the population using an angle-based selection technique, comparing members that search in similar directions as a sorting criterion. A hybridization of the Strength Pareto Evolutionary Algorithm (SPEA) and the shift-based density estimation (SDE) strategy is presented in [39] and denoted SPEA/SDE. It estimates the density of the population; individuals that do not converge are then eliminated to enhance diversity among the divergent solutions. In [56], SPEAR leverages a reference direction-based density estimator within the standard SPEA algorithm for multi/many-objective optimization problems. The knee point-driven evolutionary algorithm (KnEA), proposed in [40], evolves a population and selects non-dominated solutions based on a knee point criterion, which can be regarded as a Pareto-based strategy. Furthermore, a two-stage evolutionary algorithm (TSEA) is developed in [57]. In the first stage, several sub-populations are optimized to converge to different regions of the Pareto front; the non-dominated solutions of each sub-population are then considered as individuals to be optimized in the second stage.
In indicator-based methods, several quality metrics are used to drive the optimization process; for example, Monte Carlo simulation is used in the HypE algorithm [27] to reduce the computational cost of approximating the hypervolume. Preference-based approaches use different adaptation mechanisms, such as an external memory, to preserve convergence and diversity and to support decision-making with respect to the true Pareto front. In [41], the PICEA-g algorithm integrates co-evolution as a posterior adaptation mechanism with a set of candidate solutions to help with decision-making and approximate the entire POF. Two archives are used in the Two_Arch2 [43] system: the first is dedicated to convergence (CA) and the second is used to maintain diversity (DA). A crossover operator is used as a selection mechanism between the CA and DA, and a mutation operator is applied to the CA memory.
Table 2. Classification of the MOEAs for MaOPs.

Decomposition-based approaches: MSOPS [20] and MSOPS-II [19], MOEA/D [13], MOEA/DD [25], TSEA [57], PMEA-MA and PMEA*-MA [54]
Indicator-based approaches: HypE [27]
Diversity-based selection criteria: NSGA-III [24], SPEA/SDE [39], KnEA [40], SPEAR [56]
Modified dominance relation-based approaches: GrEA [35], VaEA [52], θ-DEA [36], NSGA-II/SDR [53], AnD [55]
Preference-based approaches: RVEA [22], PICEA-g [41], Two_Arch2 [43]

2.3. Existing Crow Search-Based Methods

The Crow Search Algorithm (CSA) [9] was first proposed in 2016 to solve constrained engineering optimization problems. In [58], Meraihi et al. published an overview paper presenting the modified versions of the CSA algorithm. Several CSA variants have been extended to solve MOPs. A Multi-Objective Crow Search Algorithm (MOCSA) was proposed in [59], in which chaos and orthogonal opposition-based operators are used to hybridize the CSA (M2O-CSA), with a focus on solving MOPs. In addition, a hybridization of the CSA algorithm with a clustering model, denoted the Multi-Objective Taylor Crow Optimization algorithm (MOTCO), was proposed for clustering-aware wireless sensor networks [60]. Furthermore, two binary versions of the CSA algorithm were proposed in [61,62]. The first, BCSA [61], uses a V-shaped transfer function to obtain a binary representation of continuous data, with applications in feature selection. The second binary CSA algorithm [62] applies a sigmoid transformation for solving the 2D bin packing problem. Several modified versions of the CSA algorithm address the loss of diversity in the search space, for example using the Gaussian distribution for diversity enhancement as in [63]. A priority-based technique is used to determine a sufficient flight length for each crow and to update its position based on a followed crow; this technique has been applied to the electromagnetic optimization of the usability factors' hierarchical model in a prediction model [64], as well as to the economic load dispatch problem [65]. A modified crow search algorithm has also been proposed for power system problems, replacing the standard CSA parameters, namely the awareness probability and the random perturbation, with a dynamic awareness probability (DAP) and Lévy flights to simulate the evasion behavior of each crow [66]. Furthermore, Huang et al. [67] proposed a hybrid CSA algorithm (HCSA) based on the Variable Neighborhood Search (VNS) and the standard CSA method. The HCSA algorithm addresses NP-hard combinatorial optimization problems, such as the permutation flow shop scheduling problem (PFSP), aiming to retrieve an actionable permutation order and handle a large number of jobs. An Improved Crow Search Algorithm was proposed by Primitivo et al. [68] to solve complex energy problems; the authors modify the awareness probability (AP) and the random perturbation of the standard CSA algorithm with a new dynamic awareness probability (DAP) and a Lévy flight distribution to balance exploration and exploitation in the search space. The robustness of the CSA algorithm has been demonstrated on different complex problems. Meddeb et al. [69] proposed a novel meta-heuristic approach based on the crow search algorithm (CSA) to solve the optimal reactive power dispatch (ORPD) problem.
A set of mechanisms has been used to improve the CSA algorithm, including a search-bound limits management strategy [70], the addition of an archive component [71], and the restructuring of awareness probabilities [72] to enhance the random perturbation and the dynamic probability of the CSA system. Several operators have been added to achieve a good balance between convergence and diversity, such as Roulette wheel selection, the inertia weight, Lévy flight and adaptive adjustment factors. In addition, crossover and mutation operators were proposed to intrinsically hybridize CSA in [73], with applications in hybrid renewable energy PV/wind/battery systems. Many hybridization methods have been developed to combine the CSA algorithm with the Grey Wolf Optimizer (GWO) [74], Cat Swarm Optimization (CSO), the Crow PSO [75] and the Crow Search Mating-based Lion Algorithm [76].

3. The Proposed Distributed Bi-Behaviors Crow Search Algorithm

Different MOEAs have been designed to solve DMOPs and should be able to detect and respond to changes in the problem. Likewise, many modified evolutionary approaches have been designed for MaOPs to deal with a high number of objective functions. The state-of-the-art optimization approaches designed for DMOPs and MaOPs are characterized by their complexity in terms of time and resources. This work proposes a new Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) to dynamically manage both convergence and diversity when solving both DMOPs and MaOPs. The new DB-CSA is classified as a diversity-based approach, combining the simplicity of the CSA algorithm with the flexibility of the Beta function [77], which can produce several forms and configurations of distributions, including the normal Gaussian one. More details about the proposed DB-CSA are presented in the next subsections.

3.1. The Standard Crow Search Algorithm

The Crow Search Algorithm (CSA) was proposed by Askarzadeh in 2016 [9] as a meta-heuristic to solve constrained engineering optimization problems. Crows are known to be social birds with the ability to memorize and reuse food source positions when needed; these sources may be the result of a personal search or of the crows' social activities. The CSA algorithm mimics the crow flock's search mechanisms and uses them for optimization purposes. The search process is detailed in Algorithm 1, and begins with a random initialization of N crows' positions in a d-dimensional search space.
Algorithm 1 The Standard Crow Search Algorithm (CSA)
  • Begin
  • Randomly initialize the position (X) of N crows
  • Evaluate the position of the crows
  • Initialize the memory of each crow
  • While iteration < Max-Iteration
  •         For i = 1 to i ≤ N  do
  •                 Randomly choose one (i) crow to follow
  •                 Define the awareness probability A P i ( t )
  •                 Define the flight length F l i ( t )
  •                 Update the crow position using Equation (3)
  •         End For
  • Check the feasibility of new positions
  • Evaluate the new position of the crows
  • Update the memory (Mem) of crows using Equation (4)
  • End While
  • End
Each crow i is characterized by a position vector X_i = (X_1^i, X_2^i, …, X_d^i) and a memory position Mem_i, used to store the best food positions found so far. All crows fly in the search space, and at each iteration the aim is to optimize the fitness function Fit(X_i) of each crow based on its updated position and its memory. While exploring the search space for new food positions, a crow needs to remember the best location in which it hid its own food, and should remain aware that other crows may discover this location. Assume that the j-th crow intends to revisit its memorized position Mem_{j,t} at iteration t and that crow i decides to follow crow j; two contrasting behaviors may then occur, each represented by a particular state:
  • The first state occurs when crow j is unaware of being followed; crow i then moves toward the memorized position Mem_{j,t}.
  • The second state occurs when crow j is aware of being followed; in this case, it protects its food source by fooling the follower, and the position of crow i is updated completely at random.
  • These two position updates are detailed in Equation (3).
$$
X_i(t+1) =
\begin{cases}
X_i(t) + R_i(t) \times Fl_i(t) \times \big(Mem_j(t) - X_i(t)\big) & \text{if } R_j(t) \geq AP_j(t) \quad \text{(State 1)} \\
\text{a random position in the search space} & \text{otherwise (State 2)}
\end{cases}
\tag{3}
$$
where R_i(t) and R_j(t) are random numbers uniformly distributed in [0, 1] at iteration t, Fl_i(t) is the flight length of crow i, and AP_j(t) is the awareness probability of the followed crow j. In the CSA algorithm, the balance between exploration and exploitation during the optimization process is controlled by the flight length (Fl) of the i-th crow. The memory Mem_i(t + 1) of each crow i is updated using Equation (4). The whole optimization process is executed until a predefined maximum number of iterations is reached.
$$
Mem_i(t+1) =
\begin{cases}
X_i(t+1) & \text{if } Fit\big(X_i(t+1)\big) \text{ is better than } Fit\big(Mem_i(t)\big) \\
Mem_i(t) & \text{otherwise}
\end{cases}
\tag{4}
$$
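For illustration, the following Python sketch implements one iteration of the standard CSA update described by Equations (3) and (4); it assumes minimization of a single aggregated fitness value and box bounds, and all names are illustrative rather than the authors' reference implementation.

```python
import numpy as np

def csa_step(X, Mem, fit, AP, Fl, lower, upper, rng):
    """One CSA iteration: position update (Equation (3)) and memory update (Equation (4))."""
    N, d = X.shape
    for i in range(N):
        j = rng.integers(N)                       # crow i randomly chooses a crow j to follow
        if rng.random() >= AP[j]:                 # State 1: crow j is unaware of being followed
            X[i] = X[i] + rng.random() * Fl[i] * (Mem[j] - X[i])
        else:                                     # State 2: crow i is relocated at random
            X[i] = rng.uniform(lower, upper, size=d)
    X = np.clip(X, lower, upper)                  # feasibility check against the bounds
    for i in range(N):
        if fit(X[i]) <= fit(Mem[i]):              # keep the better position in memory
            Mem[i] = X[i].copy()
    return X, Mem
```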

3.2. The Distributed Bi-Behaviours Crow Search Algorithm (DB-CSA)

The Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) is based on Beta-distribution profiles inspired by the Beta-distributed PSO [78], which was developed for solving the inverse kinematics problem; here, these profiles are used to enhance the exploitation and exploration of solutions in the search space. The DB-CSA algorithm is developed to solve both static Many-Objective Optimization Problems (MaOPs) and Dynamic Multi-Objective Optimization Problems (DMOPs). Considering the state-of-the-art methods, the main motivations for the proposed DB-CSA are as follows:
  • For MaOPs, original meta-heuristics such as the CSA algorithm [9] lack a robust mechanism for optimizing problems with a high number of objectives (more than three) that must be optimized simultaneously.
  • For DMOPs, most of the existing approaches cannot manage the dynamic change in decision variables and objective values in the POS and the POF, respectively.
To overcome both limits, two optimization processes are proposed separately for static MaOPs and for DMOPs. The detailed description of the proposed DB-CSA algorithms is as follows:
  • The first variant (DB-CSA) follows the same optimization process as the standard CSA algorithm [9]; the main difference lies in the convergence and diversity enhancement during the optimization process, where modified rules are used to update the position of each crow. This first version of the proposed DB-CSA algorithm is developed to solve static MaOPs. The general flowchart is shown in Figure 1 and the pseudo-code is given in Algorithm 2. More details are presented in Section 3.2.1.
  • The second variant (DB-CSA-II) is built on the first DB-CSA algorithm. The main difference is that it incorporates a dynamic optimization mechanism to efficiently detect and react to changes when solving DMOPs with a time-varying Pareto Optimal Set (POS) and a dynamic Pareto Optimal Front (POF). The general flowchart of the second version is shown in Figure 2 and the pseudo-code is presented in Algorithm 3. More details are presented in Section 3.2.2.
Algorithm 2 The pseudo-code of the proposed Distributed Bi-behaviours Crow Search Algorithm (DB-CSA)
  • Begin
  • Randomly initialize the position (X) of the flock of N crows with d-dimensional search space;
  • Initialize the memory (Mem) of each crow;
  • Initialize the archive (A) to store the non-dominated solutions;
  • Evaluate the position of the crows;
  • while iteration < Max_Iterations do
  •     for i = 1 to i ≤ N do
  •         Evaluate the fitness function of the crow i: Fit(X_i(t))
  •         Choose the followed crow (i) randomly;
  •         Determine the average crow i: Mean(Fit(X_i(t)))
  •             if Fit(X_i(t)) ⩾ Mean(Fit(X_i(t))) then
  •                 Update the crow position using Equation (8) on the Beta-1 exploitation profile;
  •             else
  •                 Update the crow position using Equation (8) on the Beta-2 exploration profile;
  •             end if
  •         Update the memory using Equation (4);
  •     end For
  • Apply the mutation operators using Equations (9) and (10);
  • Update the archive (A) of non-dominated solutions;
  • end while
  • Return the archive (A) of the non-dominated solutions;
  • End

3.2.1. First Variant: DB-CSA for Static MaOPs

The key processing steps in the first variant of the proposed DB-CSA approach are shown in Figure 1 and detailed as follows:
1. Initialization of population positions and their memories: in the DB-CSA algorithm, each crow i represents a potential solution in the search space. The DB-CSA starts with a random initialization of the positions (X) and memories (Mem) of the flock of N crows.
2. Initialization of the archive of non-dominated solutions: the archive (A) is initially created to store all the non-dominated solutions found during the optimization process. After that, the following steps are executed until a predefined number of iterations is reached.
3. Fitness function evaluation: for each crow i, the fitness function Fit(X_i) is evaluated.
4. Determine the followed crow: at each iteration, one of the main behaviors of crow i is to determine one crow to follow, by selecting a random index between zero and the size of the flock of best crows.
5. Determine the average crow: the aggregated value of the objectives is computed as the fitness function Fit(X_i(t)) of each crow i; then, the average value of all fitness functions is used to determine the mean solution.
6. Update the crow position using the bi-behaviors Beta-distribution profiles (Equation (8)).
7. Update the memory (Mem): the memory of each crow i is updated using Equation (4).
8. Apply the mutation operators (Equations (9) and (10)).
9. Update the archive of non-dominated solutions: at each time t of the optimization procedure, all the non-dominated solutions are stored in the archive (A) based on the ε-dominance operator.
10. Output the best Pareto solutions from the archive (A).
Algorithm 3 The DB-CSA-II Algorithm with Dynamic Optimization Process
  • Begin
  • Randomly initialize the position (X) of N crows with a d-dimensional search space;
  • Initialize the memory (Mem) of each crow;
  • Initialize the archive (A) of non-dominated solutions;
  • Evaluate the position of the crows;
  • Initialize the flock of crows at iteration (t): POS(t);
  • Initialize the previous flock of crows at iteration (t − 1): POS(t − 1);
  • Initialize the non-dominated solutions counter n_p = 0;
  • Initialize the dominated solutions counter n_q = 0;
  • while iteration < Max_Iterations do
  • for i = 1 to i ≤ N  do
  •     Evaluate the fitness function of crow i: Fit(X_i(t));
  •      Set POS(t) ← POS(t) ∪ {solution_i(t)};
  •      Set POS(t − 1) ← POS(t − 1) ∪ {solution_i(t − 1)}
  •      if Dynamic Change = True then
  •         for p = 1 to |POS(t)| do
  •             for q = 1 to |POS(t − 1)| do
  •              Compare the objective values of POS(t) and POS(t − 1) using the dominance operator;
  •                     if solution_p dominates solution_q then
  •                                 n_p = n_p + 1;
  •                                 New population POS(t + 1) ← POS(t + 1) ∪ {solution_p};
  •                     else if solution_q dominates solution_p then
  •                                 n_q = n_q + 1;
  •                                Re-initialize solution_p;
  •                     end if
  •             end for
  •         end for
  •         Update the archive (A) ← non-dominated solutions (POF(t));
  •     end if
  • Choose the followed crow (i) randomly;
  • Determine the average crow i: Mean(Fit(X_i(t)))
  • if Fit(X_i(t)) ≥ Mean(Fit(X_i(t))) then
  •     Update the crow’s position using Equation (8) on Beta-1 exploitation profile
  • else
  •     Update the crow’s position using Equation (8) on Beta-2 exploration profile
  • end if
  • Update the memory using Equation (4)
  • end For
  • Apply the mutation operators using Equations (9) and (10)
  • Update the archive (A) of non-dominated solutions
  • end while
  • Return the archive (A) of the non-dominated solutions
  • End
In the standard CSA algorithm, the crow position is updated according to Equation (3), while the convergence and diversity stages are treated separately, which can cause premature convergence. This issue is addressed in the new DB-CSA algorithm using bi-behavior Beta-distribution profiles to maintain a dynamic balance between both stages. The two Beta-distribution profiles are presented in Equation (8) and Figure 3, denoted Beta1_rand and Beta2_rand, respectively, for exploitation and exploration enhancement. The two Beta profiles are used to modify the original Equation (3) of the standard CSA algorithm, which defines the update process for each crow i. The two profiles are based on the Beta function of Alimi [77], as presented in Equations (5)–(7). The main advantage of the Beta function used here is its capacity to produce several forms and configurations of distributions, including the normal Gaussian one. The one-dimensional Beta function is defined in Equation (5).
$$
\beta(x, p, q, x_0, x_1) =
\begin{cases}
\left(\dfrac{x - x_0}{x_c - x_0}\right)^{p} \left(\dfrac{x_1 - x}{x_1 - x_c}\right)^{q} & \text{if } x \in [x_0, x_1] \\
0 & \text{otherwise}
\end{cases}
\tag{5}
$$
where p, q, x_0 and x_1 are real values with x_0 < x_1, and x_c is defined in Equation (6).
$$
x_c = \frac{(p \times x_1) + (q \times x_0)}{p + q}
\tag{6}
$$
The multi-dimensional Beta function, given in Equation (7), is the product of m one-dimensional Beta functions of the form (5).
$$
\beta(X) = \prod_{k=1}^{m} \beta\big(x_k, p_k, q_k, x_{0,k}, x_{1,k}\big)
\tag{7}
$$
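As an illustration, a minimal Python sketch of the one-dimensional Beta function of Equations (5) and (6) is given below (variable names are illustrative); larger values of p = q produce a narrower, more Gaussian-like bell around x_c.

```python
def beta_1d(x, p, q, x0, x1):
    """One-dimensional Beta function of Equation (5): zero outside [x0, x1],
    with its peak located at xc = (p*x1 + q*x0) / (p + q) (Equation (6))."""
    if not (x0 <= x <= x1):
        return 0.0
    xc = (p * x1 + q * x0) / (p + q)
    return ((x - x0) / (xc - x0)) ** p * ((x1 - x) / (x1 - xc)) ** q

# Larger p = q values give a narrower bell around xc = 0.5 on [0, 1].
print(beta_1d(0.4, 5, 5, 0.0, 1.0))    # wide profile (Beta-1-like), ~0.81
print(beta_1d(0.4, 50, 50, 0.0, 1.0))  # narrow profile (Beta-2-like), ~0.13
```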
The dynamic switch between the bi-behaviors Beta-1 and Beta-2 profiles is based on a comparison between the fitness function Fit(X_i(t)) of each crow i and the average solution (crow). If the fitness function Fit(X_i(t)) = Σ_{k=1}^{M} f_k is greater than or equal to the mean value, the Beta-1 behavior in Equation (8) is used to update the crow position, corresponding to an exploration stage of the crow optimization process. Otherwise, the second Beta-2 behavior in Equation (8) is used, pushing the solution to the exploitation stage. As illustrated in Figure 3, the two Beta-distribution profiles are detailed as follows:
  • The first, large Gaussian Beta-1 exploitation profile, used to update the crows' positions, is characterized by a large standard deviation, pushing the population toward good diversity in the search space; the p and q parameters of the Beta function in Equation (8) are both equal to 5.
  • The second, narrow Gaussian Beta-2 exploration profile adopts a limited standard deviation to update the crows' positions, with p and q in Equation (8) both equal to 50, allowing for good convergence to the optimal solution over time.
$$
X_i(t+1) =
\begin{cases}
X_i(t) + Beta1\_rand(i) \times \big(Mem_i(t) - X_i(t)\big) & \text{if } Fit\big(X_i(t)\big) \geq Mean\big(Fit(X_i(t))\big) \quad \text{// Beta-1 behavior (exploitation profile)} \\
Beta2\_rand() & \text{otherwise} \quad \text{// Beta-2 behavior (exploration profile)}
\end{cases}
\tag{8}
$$
where Beta1_rand is a Beta-distributed random value over [0, 1], which corresponds to a fine search step around the optimal solution, while Beta2_rand acts more as a random exploration mechanism performed away from the previous optimal solution Mem_i(t). Both Beta-1 and Beta-2 values are determined using Equation (5) with different configurations of the two parameters p and q.
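One plausible reading of Equation (8) is sketched below in Python, in which Beta1_rand and Beta2_rand are drawn from symmetric Beta distributions with p = q = 5 and p = q = 50, respectively, and the Beta-2 draw is rescaled into the search bounds; these choices are assumptions made for illustration, not the authors' reference implementation.

```python
import numpy as np

def db_csa_update(X, Mem, agg_fit, lower, upper, rng):
    """Bi-behavior position update (one reading of Equation (8)): crows whose
    aggregated fitness is >= the flock average move toward their memory with a
    wide Beta-1 step; the others are relocated with a narrow Beta-2 draw."""
    N, d = X.shape
    fits = np.array([agg_fit(x) for x in X])          # aggregated objective values
    mean_fit = fits.mean()                            # the "average crow"
    for i in range(N):
        if fits[i] >= mean_fit:                       # Beta-1 behavior
            step = rng.beta(5, 5, size=d)             # wide draw around 0.5
            X[i] = X[i] + step * (Mem[i] - X[i])
        else:                                         # Beta-2 behavior
            step = rng.beta(50, 50, size=d)           # narrow draw around 0.5
            X[i] = lower + step * (upper - lower)     # assumed rescaling into the bounds
    return np.clip(X, lower, upper)
```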
The mutation operators in [79] are added to maintain more diversity in the flock of N crows. The nonuniform and boundary mutation operators in Equations (9) and (10) are applied to modify the variables X_i = (X_1^i, X_2^i, …, X_d^i) of each crow, according to the mutation probability P_m = 1/d, where d is the dimension of the search space and X_i ∈ [a_i, b_i], with a_i and b_i the lower and upper bounds, respectively. The nonuniform mutation in Equation (9) is applied when the variable index i modulo three is equal to zero. If the remainder is equal to one, the boundary mutation in Equation (10) is used. Otherwise, the variable is kept without mutation.
$$
X_i' =
\begin{cases}
X_i + (b_i - X_i) \times r_1 \times \left(1 - \dfrac{iteration}{Max_{iterations}}\right)^{b} & \text{if } r_1 \leq 0.5 \text{ and } i \bmod 3 = 0 \\
X_i - (X_i - a_i) \times r_2 \times \left(1 - \dfrac{iteration}{Max_{iterations}}\right)^{b} & \text{if } r_2 > 0.5 \text{ and } i \bmod 3 = 0 \\
X_i & \text{otherwise}
\end{cases}
\tag{9}
$$
where r_1 and r_2 are random values between 0 and 1.
$$
X_i' =
\begin{cases}
a_i & \text{if } X_i + (r - 0.5) \times P_m < a_i \text{ and } i \bmod 3 = 1 \\
b_i & \text{if } X_i + (r - 0.5) \times P_m \geq b_i \text{ and } i \bmod 3 = 1 \\
X_i + (r - 0.5) \times P_m & \text{otherwise, where } r = U(0, 1)
\end{cases}
\tag{10}
$$
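The sketch below illustrates one common reading of this mutation step (Equations (9) and (10)) in Python; the single random draw per variable and the exponent parameter shape_b are assumptions made for illustration.

```python
import numpy as np

def mutate(X_i, a, b, iteration, max_iter, P_m, shape_b, rng):
    """Mutate one crow: nonuniform mutation (Eq. (9)) for indices with i % 3 == 0,
    boundary mutation (Eq. (10)) for indices with i % 3 == 1, others unchanged."""
    Y = X_i.copy()
    decay = (1.0 - iteration / max_iter) ** shape_b     # shrinks as iterations progress
    for i in range(len(Y)):
        r = rng.random()
        if i % 3 == 0:                                  # nonuniform mutation, Eq. (9)
            if r <= 0.5:
                Y[i] = Y[i] + (b[i] - Y[i]) * r * decay
            else:
                Y[i] = Y[i] - (Y[i] - a[i]) * r * decay
        elif i % 3 == 1:                                # boundary mutation, Eq. (10)
            cand = Y[i] + (r - 0.5) * P_m
            Y[i] = min(max(cand, a[i]), b[i])           # clamp to [a_i, b_i]
    return Y
```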

3.2.2. The Second Variant: DB-CSA for DMOPs, DB-CSA-II

The second variant (DB-CSA-II) of the proposed DB-CSA algorithm has the same optimization steps as the first variant (DB-CSA); the main difference is that it incorporates a dynamic optimization process to manage the time-varying changes that occur when solving DMOPs. The dynamic handling mechanism starts with the extraction of the populations of crows of both the current iteration (t) and the previous iteration (t − 1), denoted POS(t) and POS(t − 1), respectively. Then, the objective values are compared using the Pareto dominance operator [80]. Pareto dominance is a useful mechanism in multi-objective optimization for comparing the evolution or deterioration of two solutions, Solution_p(t) in POS(t) and Solution_q(t − 1) in POS(t − 1), based on their objective vectors F(Solution_p(t)) = (f_1(t), …, f_M(t)) and F(Solution_q(t − 1)) = (f_1(t − 1), …, f_M(t − 1)). During the optimization process, Solution_p(t) dominates Solution_q(t − 1) if both dominance conditions are verified:
  • Check dominance condition (1): Solution_p(t) is at least as good as Solution_q(t − 1) for all objectives.
  • Check dominance condition (2): Solution_p(t) is strictly better than Solution_q(t − 1), i.e., F(Solution_p(t)) is strictly better than F(Solution_q(t − 1)), for at least one objective.
  • Check the dynamic change: if the current solution Solution_p(t) dominates the previous Solution_q(t − 1) based on both conditions (1) and (2), then the dynamic change is successfully detected. The next step aims to react effectively to the dynamic change and to correct any deterioration in the time-varying search space.
  • React to the dynamic change: all non-dominated solutions Solution_p(t) are kept to create the next population at iteration (t + 1), and all deteriorated solutions Solution_p(t) are randomly re-initialized. Then, the archive (A) of non-dominated solutions is updated. This process aims to enhance the convergence and diversity of the crows in a dynamic search space. The general flowchart of the second version of the DB-CSA algorithm is shown in Figure 2.
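A minimal Python sketch of this detection-and-reaction step is given below; it assumes minimization and box bounds, and keeps or re-initializes each current solution according to the Pareto dominance comparison against the previous population (names and details are illustrative).

```python
import numpy as np

def detect_and_react(POS_t, POS_prev, eval_fit, lower, upper, rng):
    """Compare the current population POS(t) to POS(t-1) with the Pareto dominance
    operator; deteriorated (dominated) solutions are randomly re-initialized and
    non-dominated ones are kept for the next population."""
    def dominates(fp, fq):
        # fp dominates fq: no worse in every objective and strictly better in one
        return np.all(fp <= fq) and np.any(fp < fq)

    F_t = np.array([eval_fit(x) for x in POS_t])
    F_prev = np.array([eval_fit(x) for x in POS_prev])
    next_pop = []
    for p, fp in zip(POS_t, F_t):
        if any(dominates(fq, fp) for fq in F_prev):      # deteriorated solution
            next_pop.append(rng.uniform(lower, upper, size=len(p)))
        else:                                            # kept for iteration t + 1
            next_pop.append(p)
    return np.array(next_pop)
```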

3.3. The Complexity Analysis of the Proposed DB-CSA Approach

The dynamic Beta-distribution profiles are the main property of the DB-CSA algorithm, exploiting the Beta function, which provides high flexibility to produce several forms of data distributions. Using both the large Beta-1 and the narrow Beta-2 functions provides the standard CSA with a new mechanism to achieve a good population distribution toward the best approximated results. The advantage of the proposed DB-CSA algorithm lies in its simplicity and its robustness in balancing convergence and diversity in static and dynamic search spaces. The time complexity is independent of the number of objective functions and the nature of the search space. The proposed DB-CSA algorithm is developed to optimize both static Many-Objective Optimization Problems (MaOPs) and Dynamic Multi-Objective Optimization Problems (DMOPs). In the worst-case scenario, the time complexity is O(M × N log(T)) + O(M × N²) for M = 2, or O(M × N log(T)) + O(N log^(M−2) N) for M > 2, and is obtained as follows. The initialization of the positions of N crows in a d-dimensional search space takes O(N × d). The initialization of the memory of N crows takes O(N). The evaluation of the fitness function with M objectives for N crows takes O(M × N). The process of ranking and comparing all solutions to determine the set of non-dominated solutions takes O(M × N²) when M = 2 and O(N log^(M−2) N) when M > 2. The optimization process of the proposed DB-CSA algorithm is executed until the maximum number of iterations (Max_Iterations) is reached; at each iteration (t), the position updates, fitness evaluations and archive (A) updates are repeated. In sum, the overall cost of the DB-CSA-I algorithm is O(N × d) + O(M × N) + O(M × N²), i.e., O(M × N log(T)) + O(M × N²). The second version of the proposed algorithm, DB-CSA-II, includes a dynamic optimization process for solving DMOPs. The main difference between DB-CSA-I and DB-CSA-II is the additional process used to manage the dynamic changes that occur when solving DMOPs, which takes O(M × N²). In sum, the overall complexity of the second version designed for DMOPs is O(N × d) + O(M × N) + O(M × N) + O(M × N²), i.e., O(M × N log(T)) + O(M × N²).

4. Experimental Study

This section presents the experimental study; all presented results correspond to two comparative studies, as detailed in Table 3 and Table 4:
  • The first compares the newly proposed DB-CSA-II approach to a set of DMOEAs designed for Dynamic Multi-Objective Optimization Problems (DMOPs).
  • The second addresses Many-Objective Optimization Problems (MaOPs).
  • The algorithm configurations and parameters are listed in Table 3 and Table 4 for DMOPs and MaOPs, respectively.

4.1. Quality Indicators

The performance of all tested systems was measured using the minimum values of three quality indicators (QIs): the Inverted Generational Distance (IGD), the Mean Inverted Generational Distance (MIGD) and the Hypervolume Difference (HVD), presented in Equations (11)–(13), respectively. All these metrics measure both the convergence and the diversity of the tested DMOEAs.
  • The Inverted Generational Distance (IGD) [11] in Equation (11) measures the average Euclidean distance d(i, POF_t) between each point i of the true Pareto front POF_t^* and its nearest point in the approximated Pareto front POF_t.
$$
IGD(POF_t^*, POF_t) = \frac{\sum_{i \in POF_t^*} d(i, POF_t)}{|POF_t^*|}
\tag{11}
$$
  • The Mean Inverted Generational Distance (MIGD) [11], presented in Equation (12), is the average of the IGD values over all time steps t ∈ T.
$$
MIGD(POF^*, POF) = \frac{1}{|T|} \sum_{t \in T} IGD(POF_t^*, POF_t)
\tag{12}
$$
  • The Hypervolume Difference (HVD) [11], detailed in Equation (13), computes the difference between the Hypervolume (HV) of the true front POF_t^* and that of the approximated front POF_t.
$$
HVD = HV(POF_t^*) - HV(POF_t)
\tag{13}
$$
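For reference, the IGD and MIGD metrics of Equations (11) and (12) can be computed as in the short Python sketch below, where the fronts are given as NumPy arrays of shape (number of points, M); the HVD additionally requires a hypervolume routine, which is omitted here.

```python
import numpy as np

def igd(pof_true, pof_approx):
    """IGD (Equation (11)): mean Euclidean distance from each reference point of
    the true front to its nearest point in the approximated front."""
    dists = [np.min(np.linalg.norm(pof_approx - ref, axis=1)) for ref in pof_true]
    return float(np.mean(dists))

def migd(true_fronts, approx_fronts):
    """MIGD (Equation (12)): the IGD averaged over all time steps t."""
    return float(np.mean([igd(pt, pa) for pt, pa in zip(true_fronts, approx_fronts)]))
```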

4.2. Tested Benchmarks

Forty-four benchmarks were used to evaluate the relative performance of the proposed method in the two scenarios. The twenty-one DMOP test beds are as follows: five FDA [6], three dMOP [12], seven UDF [81] and six F (ZJZ) [14] functions. The twenty-three MaOP problems comprise the seven MaF test-suite functions MaF1–7, the seven DTLZ1–7 functions and the nine WFG1–9 problems. The test configurations are detailed in Table 5 according to the number of variables (D) and objectives (M). For dynamic multi-objective optimization, Farina et al. [6] classified DMOPs into three types according to the time-varying POF and POS. DMOPs of type I have a dynamic change in the POS, while the POF remains stable. Both the POS and the POF change for DMOPs of type II. DMOPs of type III present a time-varying POF, while the POS remains unchanged. The main properties of all tested problems are reported in Table 5, indicating the variation in the POS and/or POF.

4.3. Experimental Settings

This section presents the experimental settings for all the compared state-of-the-art methods. To ensure a fair configuration of the experimental studies, all parameter settings for experimental studies (1) and (2) were fixed according to the original papers [11,15,54,55]. The experimental study was conducted using a personal computer with 8 GB of RAM and an Intel i7 processor. A Java implementation of the proposed method was developed using the jMetal framework [82]. Furthermore, the Taguchi method was used to select the meta-parameters of the Beta-behavior profiles.

4.3.1. Comparative Study (1) for DMOPs

The first comparative test was performed for DMOPs using the FDA, dMOP, UDF and F(ZJZ) benchmarks with two and three objectives. Five standard MOEAs [11] and six transfer-learning-based methods [15] were compared to the proposed DB-CSA-II system. All the compared algorithms use the same parameter settings as in the original publications [11,15]. All DMOPs are characterized by a dynamic POS and/or POF governed by the time-varying parameter t, which changes at each instant according to Equation (14).
$t = \frac{1}{n_t} \left\lfloor \frac{\tau}{\tau_t} \right\rfloor$
where n_t, τ and τ_t are the severity of the change, the iteration counter and the frequency of the change, respectively. Three categories of environmental change were considered in this study; they are differentiated by the frequency τ_t, while the severity n_t is fixed to 10. The frequency τ_t was set to 5, 10 and 20 for severe, moderate and slight environmental changes, respectively, as in the original papers [11,15].
As summarized in Table 3, the swarm and archive sizes were both set to 100, as in [11,15]. All DMOEAs were independently executed 30 times, and each run was stopped when the maximum number of iterations, computed as Max_iter = 3 × n_t × τ_t + 50, was reached. For each DMOP, the number of variables (D) and objectives (M) is given in Table 5. A small illustrative sketch of these settings is given below.
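The following small sketch (illustrative helper names, not part of the original implementation) reproduces the dynamic-change bookkeeping of Equation (14) and the stopping criterion Max_iter = 3 × n_t × τ_t + 50 for the three change regimes used in comparative study (1).

```python
import math

def time_instant(tau, n_t, tau_t):
    """Equation (14): t = (1/n_t) * floor(tau / tau_t)."""
    return math.floor(tau / tau_t) / n_t

def max_iterations(n_t, tau_t):
    """Stopping criterion used in comparative study (1)."""
    return 3 * n_t * tau_t + 50

# Severity n_t = 10; frequency tau_t = 5 (severe), 10 (moderate), 20 (slight).
for label, tau_t in (("severe", 5), ("moderate", 10), ("slight", 20)):
    print(label, max_iterations(10, tau_t), time_instant(60, 10, tau_t))
```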

4.3.2. Comparative Study (2) for MaOPs

The second experimental test was carried out for many-objective optimization, following the contributions in [54,55], to compare the proposed DB-CSA approach to seven and thirteen Many-Objective Evolutionary Algorithms (MaOEAs), respectively. As mentioned in Table 4, the population size was fixed according to the number of objectives (M). The seven and thirteen MaOEAs were executed for 30 and 31 independent runs, respectively, and each run was stopped when the maximum number of iterations (Max_iter) was reached. Following the recommendations in [55], the number of objectives (M) for both the MaF and WFG test suites was set to 2, 3 and 7, and the number of variables for the MaF problems was computed as D = M + K − 1, where K is set to 10 for MaF1-MaF6 and to 20 for MaF7. The WFG test suite uses the following configuration: the number of decision variables is D = M + 9, the number of position-related variables is K = M − 1, and the number of distance-related variables is L = D − K. Furthermore, as in [54], the WFG and DTLZ functions were also tested with 3, 5, 8, 10 and 15 objectives. The sketch below illustrates how these dimensions are derived.
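A short sketch of the benchmark dimensioning described above follows; the helper name is hypothetical and the values simply apply the formulas D = M + K − 1 (MaF), D = M + 9, K = M − 1 and L = D − K (WFG) stated in this subsection.

```python
def benchmark_dimensions(M, k_maf=10):
    """Decision-space sizes used in comparative study (2); k_maf is 10 for
    MaF1-MaF6 and 20 for MaF7 (illustrative helper, not library code)."""
    maf_D = M + k_maf - 1           # MaF: D = M + K - 1
    wfg_D = M + 9                   # WFG: D = M + 9
    wfg_K = M - 1                   # position-related variables
    wfg_L = wfg_D - wfg_K           # distance-related variables
    return maf_D, wfg_D, wfg_K, wfg_L

for M in (2, 3, 7):
    print(M, benchmark_dimensions(M))   # e.g. M = 7 gives MaF D = 16, WFG D = 16
```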

4.3.3. Taguchi Method for Orthogonal Experimental Design

Comparative studies were conducted using the Taguchi method [83] to analyse the sensitivity of the parameter design of the proposed DB-CSA and DB-CSA-II algorithms. The Taguchi method, developed by Genichi Taguchi [83], analyses the sensitivities of user-defined parameters and proposes a reasonable combination of parameter designs using the orthogonal arrays (OAs) mechanism, thereby minimizing the number of runs needed for the experimental study. OAs are denoted by L_a(b^c), where a is the number of experimental runs, b is the number of levels of each factor, c is the number of columns in the array, and L denotes the Latin square design. In this study, the DB-CSA algorithm has six key parameters, each treated as a two-level factor:
  • The swarm size (N) has two levels: N ∈ {100, 200} for DMOPs and N ∈ {100, 300} for MaOPs.
  • The maximum number of iterations has two levels: T_max ∈ {250, 350} for DMOPs with severe and moderate changes, and T_max ∈ {100, 25,000} for MaOPs.
  • Both parameters of the Beta-1 function, p1 and q1, are fixed to 5 or 50.
  • Both parameters of the Beta-2 function, p2 and q2, are fixed to 5 or 50.
Figure 4 presents the different data distributions obtained for different configurations of the Beta function parameters p and q, each fixed to 5 or 50. As mentioned in Table 6, applying Taguchi orthogonal arrays yields an L_8(2^6) array with only eight runs covering the combinations of the two-level design for the six control factors, which confirmed the sensitivity of the parameter design of the proposed DB-CSA algorithm. As shown in Table 7, setting the Beta-1 parameters p1 and q1 to 5 and the Beta-2 parameters p2 and q2 to 50 achieves the smallest mean values of the MIGD, IGD and HVD metrics for DMOPs with severe and moderate changes and for MaOPs with seven objectives. The experimental study was therefore carried out with this best configuration obtained by the Taguchi method; the sketch below contrasts the two resulting Beta profiles.
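The sketch below contrasts the two Beta profiles retained by the Taguchi design. It only samples the two distributions with NumPy to show the wide (Beta(5,5)) versus narrow (Beta(50,50)) shapes; the mapping of these samples to the crows' movements is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Beta(5, 5): large Gaussian-like profile, used for exploration / diversity.
beta1 = rng.beta(5, 5, size=10_000)
# Beta(50, 50): narrow Gaussian-like profile, used for exploitation / tuning.
beta2 = rng.beta(50, 50, size=10_000)

print("Beta(5, 5)   mean %.3f  std %.3f" % (beta1.mean(), beta1.std()))
print("Beta(50, 50) mean %.3f  std %.3f" % (beta2.mean(), beta2.std()))
# Both profiles are centred at 0.5, but the Beta-2 samples are far more
# concentrated, which switches the search from wide exploration to local
# refinement around the best approximated solutions.
```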

4.4. Results Analysis and Discussion

This subsection presents the analysis of the comparative studies, which was conducted through the non-parametric Wilcoxon signed rank test [84] and the box-plots of the one-way ANOVA test [85]. These statistical methods estimate the p-value used to determine whether there is a statistically significant difference between the compared methods. In this study, the Wilcoxon signed rank test compares two paired approaches and is executed according to the following steps:
  • Step 1: define the null hypothesis (H0): the means of the paired approaches are the same.
  • Step 2: define the alternative hypothesis (H1): the means of the paired approaches are different.
  • Step 3: compute the difference between each pair of mean values.
  • Step 4: assign a rank to each difference obtained in step 3.
  • Step 5: compute the sum of the ranks with negative values (R−) and with positive values (R+).
  • Step 6: compute the probability (p-value) that the null hypothesis is true, based on the test statistic (z-score) produced by the Wilcoxon signed rank test.
If the p-value is less than or equal to 0.05, the difference between the mean values of the paired approaches is considered statistically significant; otherwise, it is not. All quantitative results are presented in the Supplementary File (Tables S1–S9). A short sketch of this test using a standard statistics library is given below.
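For reproducibility, the sketch below shows how such a paired comparison can be run with SciPy; the IGD vectors are hypothetical placeholder values, not results taken from the tables of this paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired IGD values (placeholders, not the paper's data):
# one value per benchmark for the proposed method and for one peer algorithm.
igd_proposed = np.array([0.0006, 0.0030, 0.2400, 0.0014, 0.0028, 0.0007, 0.0011, 0.0470])
igd_peer     = np.array([0.0011, 0.0042, 0.3100, 0.0021, 0.0035, 0.0012, 0.0019, 0.0650])

# Two-sided Wilcoxon signed rank test on the paired differences.
stat, p_value = wilcoxon(igd_proposed, igd_peer)
print("W statistic:", stat, "p-value:", p_value)

if p_value <= 0.05:
    print("Reject H0: the mean IGD values differ significantly.")
else:
    print("Fail to reject H0: no statistically significant difference.")
```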

4.4.1. Analysis of the Comparative Study (1) for FDA and dMOP Problems

Comparative study (1) first compares the proposed DB-CSA-II algorithm with the six transfer-learning-based methods, the standard CSA algorithm and the first variant DB-CSA on the FDA and dMOP test beds with severe (τ_t = 5, n_t = 10), moderate (τ_t = n_t = 10) and slight (τ_t = 20, n_t = 10) environmental changes. The MIGD metric, the mean of the IGD values over time, measures the convergence and diversity of the obtained POF with respect to the true one. Table S1 (see the Supplementary File) reports the mean and standard deviation values of the MIGD metric: the new DB-CSA-II system achieves the best mean and standard deviation values for all test suites and all environmental changes compared to the six transfer-learning-based approaches and the standard CSA algorithm. The Wilcoxon signed rank results in Table 8 confirm the advantage of the new DB-CSA-II, with p-values below 0.05 indicating a significant difference in the mean values with respect to the MMTL-MOEA/D, KF-MOEA/D, PPS-MOEA/D, SVR-MOEA/D, Tr-MOEA/D and RI-MOEA/D approaches. The one-way ANOVA box-plots in Figure 5 likewise confirm the advantage of the DB-CSA-II algorithm over the six transfer-learning-based methods. In contrast to the DB-CSA-II algorithm, which controls the evolution and deterioration of the optimal solutions, most TL-based approaches are designed to predict the new population after each dynamic change and tend to be inefficient when managing dynamic changes in both the POS and the POF and when balancing convergence and diversity within dynamic search spaces.
Second, the five standard MOEAs (DNSGA-II, dCOEA, PPS, MOEA/D and SGEA) were compared to the new DB-CSA-II algorithm. The mean and standard deviation values for the FDA and dMOP test suites over the IGD and HVD metrics are given in Tables S2 and S3 of the Supplementary File, respectively. Based on the IGD results in Table S2, the DB-CSA-II method is superior to the five standard MOEAs designed for dynamic multi-objective optimization. The Wilcoxon signed rank results in Table 9 indicate that DB-CSA-II is the best method over the IGD metric at the 0.05 significance level. The same conclusion is confirmed by the one-way ANOVA box-plots in Figure 6.
Table S3 reports the quantitative results for the HVD quality indicator. The proposed DB-CSA-II is the winner for different types of DMOPs: FDA1 in type I, with a dynamic POS and a static POF; FDA3, FDA5 and dMOP2 in type II, with a time-varying POS and POF; and dMOP1 in type III, with an unchanged POS and a dynamic POF, for all categories of environmental change. The DB-CSA-II obtained results similar to the SGEA system when solving the FDA2 function in type II, characterized by a dynamic density of the solutions and a cyclic change of the POF from convex to concave, as well as the FDA4 problem in type I, characterized by a time-varying spread of solutions, under severe dynamic change. In addition, the dCOEA algorithm achieves a mean value close to that of the proposed DB-CSA-II when solving the dMOP3 function, characterized by a static curvature of the estimated POF and a dynamic spread of the solution set.
Table 9 also reports the negative and positive Wilcoxon rank sums for the HVD quality indicator, from which we can conclude that the DB-CSA-II algorithm is the best of the compared approaches. However, this advantage is not statistically significant in all pairwise comparisons, since some p-values are greater than 0.05. The one-way ANOVA results in Figure 7 confirm that DNSGA-II, dCOEA, PPS, MOEA/D and SGEA remain competitive on the FDA and dMOP test functions with 2 and 3 objectives under the different environmental changes when measured by the HVD metric.

4.4.2. Analysis of the Comparative Study (1) for UDF and F Problems

Considering the quantitative results for the Unconstrained Dynamic Functions (UDF1-UDF7) in Table S4 of the Supplementary File, DB-CSA-II obtains the best values for all UDF functions. Furthermore, the new DB-CSA-II algorithm is stable when solving the tri-objective problem F8 and the bi-objective function F10 over the IGD metric, whereas the Population Prediction Strategy (PPS) approach only performs best on the F5, F6, F7 and F9 test functions. The F(ZJZ) problems constitute a complex benchmark, with a time-varying POF and POS and a nonlinear correlation between the decision variables. Based on the Wilcoxon signed rank results in Table 9, DB-CSA-II is the best method over the IGD metric, with p-values below the 0.05 significance level against the five MOEAs.
Based on the HVD results reported in Table S5 of the Supplementary File, DB-CSA-II obtains good results for the majority of the UDF benchmarks and only fails on the disconnected UDF6 compared to the DNSGA-II system. The PPS system performs well on F5, F7 and F10, and the SGEA on F6 and F9. For the HVD metric, however, the Wilcoxon signed rank test in Table 9 shows no statistically significant difference between the compared algorithms, with p-values exceeding the 0.05 significance level. Figure 8 reports the one-way ANOVA box-plots of the six MOEAs over the IGD and HVD metrics. Figure 9, Figure 10 and Figure 11 plot the MIGD, IGD and HVD values of the proposed DB-CSA-II algorithm when solving the FDA and dMOP problems with severe, moderate and slight changes.

4.4.3. Analysis of the Comparative Study (2) for MaF and WFG Problems with 2, 3 and 7 Objectives

In comparative study (2), thirteen many-objective evolutionary approaches (MSOPS-II, MOEA/D, HypE, PICEA-g, SPEA/SDE, GrEA, NSGA-III, KnEA, RVEA, Two_Arch2, θ-DEA, MOEA/DD and AnD) are first compared to the proposed DB-CSA system on the MaF and WFG test suites with 2, 3 and 7 objectives and different numbers of decision variables, as detailed in Table 5. Table S6 of the Supplementary File reports the IGD results of the 14 compared many-objective evolutionary algorithms on the nine WFG problems (WFG1-WFG9), whose POF shapes vary from convex to concave.
The DB-CSA algorithm ranks first on seven of the nine WFG problems (WFG1, WFG3, WFG4, WFG5, WFG6, WFG8 and WFG9); it is only outperformed on WFG2 by HypE and θ-DEA, which also obtain almost the same mean IGD values as DB-CSA on WFG7 when the number of objectives equals 2. As the number of objectives increases to 3 and 7, the WFG problems become more complex and maintaining both convergence and diversity becomes a challenging task. The IGD values of the tri-objective WFG functions reported in Table S6 of the Supplementary File confirm the efficiency of the proposed DB-CSA approach in dealing with an increasing number of objectives, and Table S6 also shows that it obtains the best values for the MaOPs with seven objectives.
In addition, Table S7 of the Supplementary File shows the mean and standard deviation values of the IGD metric for the MaF test suite (MaF1-MaF7) with 2, 3 and 7 objective functions, and Figure 12 presents the approximated POF for the MaF test suite. The new DB-CSA proves to be a good method for solving the MaF test suite compared to the thirteen state-of-the-art MaOEAs. Table 10 confirms this advantage through the Wilcoxon signed rank test: all computed p-values are less than 0.05, indicating a statistically significant difference between DB-CSA and the thirteen MaOEAs (MSOPS-II, MOEA/D, HypE, PICEA-g, SPEA/SDE, GrEA, NSGA-III, KnEA, RVEA, Two_Arch2, θ-DEA, MOEA/DD and AnD) on the MaF test suite with 2, 3 and 7 objectives. The dynamic treatment of both the convergence and the diversity concepts is very useful when solving a set of complex MaOPs with a high number of objectives.

4.4.4. Analysis of the Comparative Study (2) for DTLZ and WFG Problems with 3, 5, 8, 10 and 15 Objectives

In the second part of comparative study (2), seven MaOEAs (PMEA-MA, PMEA*-MA, SPEA2/SDE, NSGA-II/SDR, MaOEA/IGD, VaEA and SPEA) were compared to the new DB-CSA approach on the complex DTLZ and WFG test suites with 3, 5, 8, 10 and 15 objectives. Figure 13 presents the one-way ANOVA box-plots of all compared approaches and shows the advantage of the DB-CSA algorithm on the WFG tests with 3, 5 and 15 objectives. Some qualitative results are presented in Figure 14 and Figure 15, showing the estimated POF against the true optimal solutions for WFG and DTLZ with 10 and 15 objectives, respectively. All quantitative results are given in Tables S8 and S9 of the Supplementary File and demonstrate the efficiency of the new DB-CSA approach over the IGD metric on the nine WFG1-9 problems and the seven DTLZ1-7 functions, respectively. This difference is statistically significant according to the Wilcoxon signed rank test at the 0.05 significance level, as detailed in Table 10, where all computed p-values are less than 0.05.
As a global conclusion based on comparative studies (1) and (2), all quantitative results show the efficiency of the DB-CSA and DB-CSA-II variants and their flexibility in solving the eight DMOPs (FDA and dMOP) with 2 and 3 objectives, covering several types of time-varying POF and POS, compared to the six transfer-learning-based methods (MMTL-MOEA/D, KF-MOEA/D, PPS-MOEA/D, SVR-MOEA/D, Tr-MOEA/D and RI-MOEA/D) using the MIGD metric. The plots of the MIGD, IGD and HVD values in Figure 9, Figure 10 and Figure 11 over 30 independent runs confirm the strength of DB-CSA-II for DMOPs of types I (FDA1, FDA4, dMOP3), II (dMOP2) and III (dMOP1). By analyzing the perturbations of the MIGD, IGD and HVD plots, we can see the challenging results obtained when solving FDA1, FDA4, dMOP1, dMOP2 and dMOP3 compared to FDA5 and FDA3 (type II, with time-varying POF and POS) under severe and moderate changes, and compared to FDA5, FDA3 and FDA2 under slight change.
The efficiency of DB-CSA-II is also demonstrated when solving the dynamic tri-objective FDA4 with a dynamic POS. The proposed DB-CSA-II algorithm shows competitive performance against the five standard MOEAs (DNSGA-II, dCOEA, PPS, MOEA/D and SGEA) when solving the five FDA functions and the three dMOP problems over the IGD metric under the different types of environmental change, whereas the differences over the HVD metric are not statistically significant at the 0.05 level. Furthermore, for the seven UDF and six F problems of type II, with a time-varying POF and POS under moderate environmental change, the advantage of DB-CSA-II over the five standard MOEAs is statistically significant only for the IGD metric, not for the HVD metric.
Finally, the DB-CSA algorithm outperforms the 13 MaOEAs (MSOPS-II, MOEA/D, HypE, PICEA-g, SPEA/SDE, GrEA, NSGA-III, KnEA, RVEA, Two_Arch2, θ-DEA, MOEA/DD and AnD) on the set of many-objective optimization problems (9 WFG and 7 MaF) with 2, 3 and 7 objectives. The proposal also achieves the best results on the more complex DTLZ and WFG test suites with 3, 5, 8, 10 and 15 objectives compared to the seven MaOEAs (PMEA-MA, PMEA*-MA, SPEA2/SDE, NSGA-II/SDR, MaOEA/IGD, VaEA and SPEA). The main weakness of the proposed DB-CSA-II algorithm appears when solving DMOPs of types I and II (FDA1, FDA3 and FDA4), characterized by a time-varying POS and POF, a dynamic spread or density of the approximated solution set, and a nonlinear correlation between the decision variables. A high number of objectives also requires additional computational resources and increases the execution time.

4.4.5. Time Processing Cost

The run-time of the proposed DB-CSA-II variant was measured when solving the five FDA and three dMOP test suites, as shown in Table 11. The DB-CSA-II algorithm was compared to five state-of-the-art transfer-learning-based approaches (MMTL-MOEA/D, KF-MOEA/D, PPS-MOEA/D, SVR-MOEA/D and Tr-MOEA/D). The run-times of these transfer-learning approaches were taken from the original paper [15]; the best values are shown in bold. The PPS-MOEA/D algorithm is the fastest for the FDA1, FDA2 and FDA3 test beds, the SVR-MOEA/D approach for FDA4, the MMTL-MOEA/D algorithm for FDA5 and dMOP1, and the KF-MOEA/D method for dMOP2 and dMOP3. We can conclude that the novel DB-CSA-II algorithm is not the fastest in terms of computation time compared to the state-of-the-art methods; however, its robust performance is demonstrated by the mean and standard deviation values of the MIGD metric. The time comparison should also be interpreted with caution, since it depends on the processor capacities and the hardware configurations used to conduct the tests.

5. Conclusions and Perspectives

In this paper, a new Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) is proposed for the dynamic treatment of both the convergence and the diversity concepts, based on two new mechanisms: distributed bi-behaviour profiles characterized by a large Gaussian-like Beta-1 function and a narrow Gaussian Beta-2 function for exploration and exploitation enhancement, respectively. All quantitative results were analyzed using the non-parametric Wilcoxon signed rank test at the 0.05 significance level, and the experimental studies showed that the proposed DB-CSA is significantly better than the state-of-the-art methods. The novel DB-CSA-II algorithm achieved good results in solving dynamic multi-objective problems characterized by different types of dynamic change in the POS and the POF and involving 2 or 3 conflicting objective functions. Comparative study (1) included six transfer-learning-based methods (MMTL-MOEA/D, KF-MOEA/D, PPS-MOEA/D, SVR-MOEA/D, Tr-MOEA/D and RI-MOEA/D) evaluated with the MIGD metric and five popular DMOEAs (DNSGA-II, dCOEA, PPS, MOEA/D and SGEA) evaluated with the IGD and HVD quality indicators on twenty-one DMOPs with different types of change in the POF and POS, and the proposal obtained better relative results on all test beds. Comparative study (2) demonstrated the efficiency of the DB-CSA system compared to thirteen MaOEAs (MSOPS-II, MOEA/D, HypE, PICEA-g, SPEA/SDE, GrEA, NSGA-III, KnEA, RVEA, Two_Arch2, θ-DEA, MOEA/DD and AnD) in solving sixteen many-objective optimization problems (9 WFG and 7 MaF) with 2, 3 and 7 objectives, as well as the more complex DTLZ and WFG test suites with 3, 5, 8, 10 and 15 objectives compared to seven MaOEAs (PMEA-MA, PMEA*-MA, SPEA2/SDE, NSGA-II/SDR, MaOEA/IGD, VaEA and SPEA). All results confirmed the relevance of the proposed DB-CSA approach and its capacity to correctly manage the convergence and diversity concepts when solving DMOPs and MaOPs. For future work, we recommend investigating the impact of the Beta profiles on performance when solving a DMOP characterized by a time-varying POS and POF, a dynamic spread or density of the approximated solution set and a nonlinear correlation between the decision variables. Both variants of the DB-CSA method are also worth considering for Evolutionary Transfer Multi/Many-objective Optimization Problems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app1010000/s1, Table S1: MIGD results (Mean and Standard Deviation) of the 6 DMOEAs [15] compared to the standard CSA and the proposed DB-CSA algorithm for solving FDA and dMOP functions; Table S2: IGD results (Mean and Standard Deviation) of the 5 DMOEAs [11] compared to the standard CSA and the proposed DB-CSA algorithm for solving FDA and dMOP functions; Table S3: HVD results (Mean and Standard Deviation) of the 5 DMOEAs [11] compared to the standard CSA and the proposed DB-CSA algorithm for solving FDA and dMOP functions; Table S4: IGD results (Mean and Standard Deviation) of the 5 DMOEAs [11] compared to the standard CSA and the proposed DB-CSA algorithm for solving UDF and F(ZJZ) functions with ( τ t = n t = 10); Table S5: HVD results (Mean and Standard Deviation) of the 5 DMOEAs [11] compared to the standard CSA and the proposed DB-CSA algorithm for solving UDF and F(ZJZ) functions with ( τ t = n t = 10); Table S6: IGD results (Mean and Standard Deviation) of the 13 MOEAs [55] compared to the standard CSA and the proposed DB-CSA algorithm on the 2, 3 and 7 objective WFG problems; Table S7: IGD results (Mean and Standard Deviation) of the 13 MOEAs [55] compared to the standard CSA and the proposed DB-CSA algorithm on the 2, 3 and 7 objective MaF problems; Table S8: IGD results (Mean and Standard Deviation) of the 7 MOEAs [54] compared to DB-CSA on the WFG test suite; Table S9: IGD results (Mean and Standard Deviation) of the 7 MOEAs [54] compared to the standard CSA and the proposed DB-CSA algorithm on the DTLZ test suite.

Author Contributions

Conceptualization, A.A.; Formal analysis, A.A.; Funding acquisition, B.N. and Z.A.B.; Investigation, A.A. and Z.A.B.; Methodology, A.A., N.R. and S.M.; Project administration, N.R., B.N. and A.M.A.; Resources, Z.A.B.; Supervision, N.R., B.N. and A.M.A.; Validation, N.R., B.N. and Z.A.B.; Visualization, A.A. and S.M.; Writing—original draft, A.A.; Writing—review & editing, A.A., N.R., B.N., Z.A.B., S.M. and A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the Ministry of Higher Education and Scientific Research of Tunisia under the grant agreement number LR11ES48.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

As an alternative, the data has been shared on the Mendeley Data Repository and will be public to the community at the following DOI Link: 10.17632/hydzpsv4tp.2.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deb, K.; Rao N, U.B.; Karthik, S. Dynamic multi-objective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2007; pp. 803–817. [Google Scholar] [CrossRef]
  2. Aboud, A.; Fdhila, R.; Alimi, A. Dynamic Multi Objective Particle Swarm Optimization Based on a New Environment Change Detection Strategy. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2017; Volume 10637, pp. 258–268. [Google Scholar] [CrossRef]
  3. Aboud, A.; Fdhila, R.; Alimi, A. MOPSO for dynamic feature selection problem based big data fusion. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016—Conference Proceedings, Budapest, Hungary, 9–12 October 2016; pp. 3918–3923. [Google Scholar] [CrossRef]
  4. Aboud, A.; Rokbani, N.; Fdhila, R.; Qahtani, A.M.; Almutiry, O.; Dhahri, H.; Hussain, A.; Alimi, A.M. DPb-MOPSO: A dynamic Pareto bi-level Multi-objective Particle Swarm Optimization Algorithm. App. Soft Comput. 2022, 109622. [Google Scholar] [CrossRef]
  5. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  6. Farina, M.; Deb, K.; Amato, P. Dynamic multiobjective optimization problems: Test cases, approximations, and applications. IEEE Trans. Evol. Comput. 2004, 8, 425–442. [Google Scholar] [CrossRef]
  7. Ou, J. A pareto-based evolutionary algorithm using decomposition and truncation for dynamic multi-objective optimization. Appl. Soft Comput. 2019, 85, 105673. [Google Scholar] [CrossRef]
  8. Zou, J.; Li, Q.; Yang, S.; Bai, H.; Zheng, J. A prediction strategy based on center points and knee points for evolutionary dynamic multi-objective optimization. Appl. Soft Comput. 2017, 61, 806–818. [Google Scholar] [CrossRef]
  9. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  10. Geem, Z.; Kim, J.; Loganathan, G. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  11. Jiang, S.; Yang, S. A Steady-State and Generational Evolutionary Algorithm for Dynamic Multiobjective Optimization. IEEE Trans. Evol. Comput. 2017, 21, 65–82. [Google Scholar] [CrossRef]
  12. Goh, C.; Tan, K. A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2009, 13, 103–127. [Google Scholar] [CrossRef]
  13. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  14. Zhou, A.; Jin, Y.; Zhang, Q. A Population prediction strategy for evolutionary dynamic multiobjective optimization. IEEE Trans. Cybern. 2014, 44, 40–53. [Google Scholar] [CrossRef]
  15. Jiang, M.; Wang, Z.; Qiu, L.; Guo, S.; Gao, X.; Tan, K. A Fast Dynamic Evolutionary Multiobjective Algorithm via Manifold Transfer Learning. IEEE Trans. Cybern. 2020, 51, 3417–3428. [Google Scholar] [CrossRef] [PubMed]
  16. Cao, L.; Xu, L.; Goodman, E.; Bao, C.; Zhu, S. Evolutionary Dynamic Multiobjective Optimization Assisted by a Support Vector Regression Predictor. IEEE Trans. Evol. Comput. 2020, 24, 305–319. [Google Scholar] [CrossRef]
  17. Jiang, M.; Huang, Z.; Qiu, L.; Huang, W.; Yen, G. Transfer Learning-Based Dynamic Multiobjective Optimization Algorithms. IEEE Trans. Evol. Comput. 2018, 22, 501–514. [Google Scholar] [CrossRef]
  18. Muruganantham, A.; Tan, K.; Vadakkepat, P. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction. IEEE Trans. Cybern. 2016, 46, 2862–2873. [Google Scholar] [CrossRef]
  19. Purshouse, R.; Fleming, P. On the evolutionary optimization of many conflicting objectives. IEEE Trans. Evol. Comput. 2007, 11, 770–784. [Google Scholar] [CrossRef]
  20. Hughes, E.J. Multiple single objective Pareto sampling. In Proceedings of the 2003 Congress on Evolutionary Computation, Canberra, Australia, 8–12 December 2003; Volume 4, pp. 2678–2684. [Google Scholar] [CrossRef]
  21. Hughes, E.J. MSOPS-II: A general-purpose many-objective optimiser. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, CEC 2007, Singapore, 25–28 September 2007; pp. 3944–3951. [Google Scholar] [CrossRef]
  22. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef]
  23. Liu, H.; Gu, F.; Zhang, Q. Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef]
  24. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  25. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  26. Lu, X.; Tan, Y.; Zheng, W.; Meng, L. A Decomposition Method Based on Random Objective Division for MOEA/D in Many-Objective Optimization. IEEE Access 2020, 8, 103550–103564. [Google Scholar] [CrossRef]
  27. Bader, J.; Zitzler, E. HypE: An algorithm for fast hypervolume-based many-objective optimization. Evol. Comput. 2011, 19, 45–76. [Google Scholar] [CrossRef] [PubMed]
  28. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669. [Google Scholar] [CrossRef]
  29. Zitzler, E.; Künzli, S. Indicator-based selection in multiobjective search. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2004; Volume 3242, pp. 832–842. [Google Scholar] [CrossRef]
  30. Feng, S.; Wen, J. An Evolutionary Many-Objective Optimization Algorithm Based on IGD Indicator and Region Decomposition. In Proceedings of the 2019 15th International Conference on Computational Intelligence and Security, CIS 2019, Macau, China, 13–16 December 2019; pp. 206–210. [Google Scholar] [CrossRef]
  31. Sun, Y.; Yen, G.; Yi, Z. IGD Indicator-Based Evolutionary Algorithm for Many-Objective Optimization Problems. IEEE Trans. Evol. Comput. 2019, 23, 173–187. [Google Scholar] [CrossRef]
  32. Zou, X.; Chen, Y.; Liu, M.; Kang, L. A new evolutionary algorithm for solving many-objective optimization problems. IEEE Trans. Syst. Man, Cybern. Part B Cybern. 2008, 38, 1402–1412. [Google Scholar] [CrossRef]
  33. Hadka, D.; Reed, P. Borg: An auto-adaptive many-objective evolutionary computing framework. Evol. Comput. 2013, 21, 231–259. [Google Scholar] [CrossRef]
  34. Gaoping, W.; Huawei, J. Fuzzy-dominance and its application in evolutionary many objective optimization. In Proceedings of the CIS Workshops 2007, 2007 International Conference on Computational Intelligence and Security Workshops, Harbin, China, 15–19 December 2007; pp. 195–198. [Google Scholar] [CrossRef]
  35. Yang, S.; Li, M.; Liu, X.; Zheng, J. A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  36. Yuan, Y.; Xu, H.; Wang, B.; Yao, X. A New Dominance Relation-Based Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 16–37. [Google Scholar] [CrossRef]
  37. Pierro, F.; Khu, S.; Savić, D. An investigation on preference order ranking scheme for multiobjective evolutionary optimization. IEEE Trans. Evol. Comput. 2007, 11, 17–45. [Google Scholar] [CrossRef]
  38. Adra, S.; Fleming, P. Diversity management in evolutionary many-objective optimization. IEEE Trans. Evol. Comput. 2011, 15, 183–195. [Google Scholar] [CrossRef]
  39. Li, M.; Yang, S.; Liu, X. Shift-based density estimation for pareto-based algorithms in many-objective optimization. IEEE Trans. Evol. Comput. 2014, 18, 348–365. [Google Scholar] [CrossRef]
  40. Zhang, X.; Tian, Y.; Jin, Y. A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2015, 19, 761–776. [Google Scholar] [CrossRef]
  41. Wang, R.; Purshouse, R.; Fleming, P. Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 474–494. [Google Scholar] [CrossRef]
  42. Praditwong, K.; Yao, X. A new multi-objective evolutionary optimisation algorithm: The two-archive algorithm. In Proceedings of the 2006 International Conference on Computational Intelligence and Security, ICCIAS 2006, Guangzhou, China, 3–6 November 2006; Volume 1, pp. 286–291. [Google Scholar] [CrossRef]
  43. Wang, H.; Jiao, L.; Yao, X. Two Arch2: An Improved Two-Archive Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2015, 19, 524–541. [Google Scholar] [CrossRef]
  44. Carvalho, A.; Pozo, A. Measuring the convergence and diversity of CDAS Multi-Objective Particle Swarm Optimization Algorithms: A study of many-objective problems. Neurocomputing 2012, 75, 43–51. [Google Scholar] [CrossRef]
  45. Castro, O.; Pozo, A. A MOPSO based on hyper-heuristic to optimize many-objective problems. In Proceedings of the 2014 IEEE Symposium on Swarm Intelligence, Proceedings, Orlando, FL, USA, 9–12 December 2014; pp. 251–258. [Google Scholar] [CrossRef]
  46. Sun, X.; Chen, Y.; Liu, Y.; Gong, D. Indicator-based set evolution particle swarm optimization for many-objective problems. Soft Comput. 2016, 20, 2219–2232. [Google Scholar] [CrossRef]
  47. Hu, W.; Yen, G.; Luo, G. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System. IEEE Trans. Cybern. 2017, 47, 1446–1459. [Google Scholar] [CrossRef] [PubMed]
  48. Maltese, J.; Ombuki-Berman, B.; Engelbrecht, A. Pareto-based many-objective optimization using knee points. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation, CEC 2016, Vancouver, BC, Canada, 24–29 July 2016; pp. 3678–3686. [Google Scholar] [CrossRef]
  49. Xiang, Y.; Zhou, Y.; Chen, Z.; Zhang, J. A Many-Objective Particle Swarm Optimizer with Leaders Selected from Historical Solutions by Using Scalar Projections. IEEE Trans. Cybern. 2020, 50, 2209–2222. [Google Scholar] [CrossRef]
  50. Leung, M.; Coello, C.; Cheung, C.; Ng, S.; Lui, A. A hybrid leader selection strategy for many-objective particle swarm optimization. IEEE Access 2020, 8, 189527–189545. [Google Scholar] [CrossRef]
  51. Ma, L.; Huang, M.; Yang, S.; Wang, R.; Wang, X. An Adaptive Localized Decision Variable Analysis Approach to Large-Scale Multiobjective and Many-Objective Optimization. IEEE Trans. Cybern. 2021. [Google Scholar] [CrossRef] [PubMed]
  52. Xiang, Y.; Zhou, Y.; Li, M.; Chen, Z. A Vector Angle-Based Evolutionary Algorithm for Unconstrained Many-Objective Optimization. IEEE Trans. Evol. Comput. 2017, 21, 131–152. [Google Scholar] [CrossRef]
  53. Tian, Y.; Cheng, R.; Zhang, X.; Su, Y.; Jin, Y. A Strengthened Dominance Relation Considering Convergence and Diversity for Evolutionary Many-Objective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 331–345. [Google Scholar] [CrossRef]
  54. Liu, Y.; Zhu, N.; Li, M. Solving Many-Objective Optimization Problems by a Pareto-Based Evolutionary Algorithm with Preprocessing and a Penalty Mechanism. IEEE Trans. Cybern. 2020, 51, 5585–5594. [Google Scholar] [CrossRef]
  55. Li, K.; Wang, R.; Zhang, T.; Ishibuchi, H. Evolutionary Many-Objective Optimization: A Comparative Study of the State-of-The-Art. IEEE Access 2018, 6, 26194–26214. [Google Scholar] [CrossRef]
  56. Jiang, S.; Yang, S. A strength pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization. IEEE Trans. Evol. Comput. 2017, 21, 329–346. [Google Scholar] [CrossRef]
  57. Chen, H.; Cheng, R.; Pedrycz, W.; Jin, Y. Solving Many-Objective Optimization Problems via Multistage Evolutionary Search. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3552–3564. [Google Scholar] [CrossRef]
  58. Meraihi, Y.; Gabis, A.; Ramdane-Cherif, A.; Acheli, D. A comprehensive survey of Crow Search Algorithm and its applications. Artif. Intell. Rev. 2021, 54, 2669–2716. [Google Scholar] [CrossRef]
  59. Nobahari, H.; Bighashdel, A. MOCSA: A Multi-Objective Crow Search Algorithm for Multi-Objective optimization. In Proceedings of the 2nd Conference on Swarm Intelligence and Evolutionary Computation, CSIEC 2017–Proceedings, Kerman, Iran, 7–9 March 2017; pp. 60–65. [Google Scholar] [CrossRef]
  60. John, J.; Rodrigues, P. MOTCO: Multi-objective Taylor Crow Optimization Algorithm for Cluster Head Selection in Energy Aware Wireless Sensor Network. Mob. Netw. Appl. 2019, 24, 1509–1525. [Google Scholar] [CrossRef]
  61. Souza, R.; Coelho, L.; MacEdo, C.; Pierezan, J. A V-Shaped Binary Crow Search Algorithm for Feature Selection. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar] [CrossRef]
  62. Laabadi, S.; Naimi, M.; Amri, H.; Achchab, B. A Binary Crow Search Algorithm for Solving Two-dimensional Bin Packing Problem with Fixed Orientation. Procedia Comput. Sci. 2020, 167, 809–818. [Google Scholar] [CrossRef]
  63. Coelho, L.S.; Richter, C.; Mariani, V.; Askarzadeh, A. Modified crow search approach applied to electromagnetic optimization. In Proceedings of the 2016 IEEE Conference on Electromagnetic Field Computation (CEFC), Miami, FL, USA, 13–16 November 2016. [Google Scholar] [CrossRef]
  64. Gupta, D.; Rodrigues, J.; Sundaram, S.; Khanna, A.; Korotaev, V.; Albuquerque, V. Usability feature extraction using modified crow search algorithm: A novel approach. Neural Comput. Appl. 2020, 32, 10915–10925. [Google Scholar] [CrossRef]
  65. Mohammadi, F.; Abdi, H. A modified crow search algorithm (MCSA) for solving economic load dispatch problem. Appl. Soft Comput. J. 2018, 71, 51–65. [Google Scholar] [CrossRef]
  66. Cuevas, E.; Espejo, E.B.; Enríquez, A.C. A modified crow search algorithm with applications to power system problems. In Studies in Computational Intelligence; Springer: Cham, Switzerland, 2019; Volume 822, pp. 137–166. [Google Scholar] [CrossRef]
  67. Huang, K.W.; Girsang, A.S.; Wu, Z.X.; Chuang, Y.W. A Hybrid Crow Search Algorithm for Solving Permutation Flow Shop Scheduling Problems. Appl. Sci. 2019, 9, 1353. [Google Scholar] [CrossRef]
  68. Díaz, P.; Pérez-Cisneros, M.; Cuevas, E.; Avalos, O.; Gálvez, J.; Hinojosa, S.; Zaldivar, D. An Improved Crow Search Algorithm Applied to Energy Problems. Energies 2018, 11, 571. [Google Scholar] [CrossRef]
  69. Meddeb, A.; Amor, N.; Abbes, M.; Chebbi, S. A Novel Approach Based on Crow Search Algorithm for Solving Reactive Power Dispatch Problem. Energies 2018, 11, 3321. [Google Scholar] [CrossRef]
  70. Javidi, A.; Salajegheh, E.; Salajegheh, J. Enhanced crow search algorithm for optimum design of structures. Appl. Soft Comput. J. 2019, 77, 274–289. [Google Scholar] [CrossRef]
  71. Bhullar, A.; Kaur, R.; Sondhi, S. Enhanced crow search algorithm for AVR optimization. Soft Comput. 2020, 24, 11957–11987. [Google Scholar] [CrossRef]
  72. Cuevas, E.; Gálvez, J.; Avalos, O. An Enhanced Crow Search Algorithm Applied to Energy Approaches. In Studies in Computational Intelligence; Springer: Cham, Switzerland, 2020; Volume 854, pp. 27–49. [Google Scholar] [CrossRef]
  73. Moghaddam, S.; Bigdeli, M.; Moradlou, M.; Siano, P. Designing of stand-alone hybrid PV/wind/battery system using improved crow search algorithm considering reliability index. Int. J. Energy Environ. Eng. 2019, 10, 429–449. [Google Scholar] [CrossRef]
  74. Arora, S.; Singh, H.; Sharma, M.; Sharma, S.; Anand, P. A New Hybrid Algorithm Based on Grey Wolf Optimization and Crow Search Algorithm for Unconstrained Function Optimization and Feature Selection. IEEE Access 2019, 7, 26343–26361. [Google Scholar] [CrossRef]
  75. Huang, K.W.; Wu, Z.X. CPO: A Crow Particle Optimization Algorithm. Int. J. Comput. Intell. Syst. 2019, 12. [Google Scholar] [CrossRef]
  76. Gaddala, K.; Raju, P. Merging Lion with Crow Search Algorithm for Optimal Location and Sizing of UPQC in Distribution Network. J. Control. Autom. Electr. Syst. 2020, 31, 377–392. [Google Scholar] [CrossRef]
  77. Alimi, A. Beta Neuro-Fuzzy Systems. Task Q. 2003, 7, 23–41. [Google Scholar]
  78. Rokbani, N.; Slim, M.; Alimi, A.M. The Beta distributed PSO, β-PSO, with application to Inverse Kinematics. In Proceedings of the National Computing Colleges Conference (NCCC), Taif, Saudi Arabia, 27–28 March 2021; pp. 1–6. [Google Scholar] [CrossRef]
  79. Garzelli, A.; Capobianco, L.; Nencini, F. Fusion of multispectral and panchromatic images as an optimisation problem. In Image Fusion; Elsevier: Amsterdam, The Netherlands, 2008; pp. 223–250. [Google Scholar] [CrossRef]
  80. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  81. Biswas, S.; Das, S.; Suganthan, P.; Coello, C. Evolutionary multiobjective optimization in dynamic environments: A set of novel benchmark functions. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, CEC 2014, Beijing, China, 6–11 July 2014; pp. 3192–3199. [Google Scholar] [CrossRef]
  82. Durillo, J.; Nebro, A. JMetal: A Java framework for multi-objective optimization. Adv. Eng. Softw. 2011, 42, 760–771. [Google Scholar] [CrossRef]
  83. Genichi, T.; Rajesh, J.; Shin, T. Computer-based Robust Engineering: Essentials for DFSS; ASQ Quality Press: Milwaukee, WI, USA, 2004; p. 217. [Google Scholar]
  84. Rokbani, N. Bi-heuristic ant colony optimization-based approaches for traveling salesman problem. Soft Comput. 2021, 25, 3775–3794. [Google Scholar] [CrossRef]
  85. Dordevic, M. Statistical analysis of various hybridization of evolutionary algorithm for traveling salesman problem. In Proceedings of the IEEE International Conference on Industrial Technology, Melbourne, Australia, 13–15 February 2019; pp. 899–904. [Google Scholar] [CrossRef]
Figure 1. A flowchart of the proposed Distributed Bi-behaviours Crow Search Algorithm (DB-CSA).
Figure 2. A modified flowchart of the proposed Distributed Bi-behaviours Crow Search Algorithm (DB-CSA-II) with Dynamic Optimization Process.
Figure 3. Beta profiles, respectively (a) Beta-1 and (b) Beta-2 distributions.
Figure 4. Different Data Distributions of Beta Function with Different Configurations of p and q: (a) large Gaussian data distribution, (b) narrow Gaussian data distribution, (c) exponential decrease data distribution, (d) exponential increase data distribution.
Figure 5. One-way ANOVA Box-plots of 7 MOEAs over MIGD of FDA, dMOP for (a) severe (τ_t = 5, n_t = 10), (b) moderate (τ_t = 10, n_t = 10), and (c) slight (τ_t = 20, n_t = 10) environmental changes, respectively.
Figure 6. One-way ANOVA Box-plots of 6 MOEAs over IGD of FDA, dMOP for (a) severe (τ_t = 5, n_t = 10), (b) moderate (τ_t = 10, n_t = 10), and (c) slight (τ_t = 20, n_t = 10) environmental changes, respectively.
Figure 7. One-way ANOVA Box-plots of 6 MOEAs over HVD of FDA, dMOP for (a) severe (τ_t = 5, n_t = 10), (b) moderate (τ_t = 10, n_t = 10), and (c) slight (τ_t = 20, n_t = 10) environmental changes, respectively.
Figure 8. One-way ANOVA Box-plots of 6 MOEAs over (a) IGD and (b) HVD of UDF, F functions for moderate (τ_t = 10, n_t = 10) environmental changes.
Figure 9. The plots of MIGD values for FDA, dMOP functions with (a) severe, (b) moderate and (c) slight environmental changes using the DB-CSA algorithm.
Figure 10. The plots of IGD values for FDA, dMOP functions with (a) severe, (b) moderate and (c) slight environmental changes using the DB-CSA algorithm.
Figure 11. The plots of HVD values for FDA, dMOP functions with (a) severe, (b) moderate and (c) slight environmental changes using the DB-CSA algorithm.
Figure 12. The plots of POF for MaF1-7 functions with 7 objectives using the DB-CSA algorithm.
Figure 13. One-way ANOVA box-plots of 8 MOEAs over IGD for WFG functions with (a) 3, (b) 5 and (c) 15 objectives.
Figure 14. The plots of POF for WFG1-9 functions with 10 objectives using the DB-CSA algorithm.
Figure 15. The plots of POF for DTLZ1-7 functions with 15 objectives using the DB-CSA algorithm.
Table 3. Parameters Settings and DMOEAs Solvers used for the comparative study (1).
ParametersComparative Study (1) for DMOPs
Comparable DMOEAsFive MOEAs [11]Six transfer learning-based methods [15]
DNSGA-II [1]
SGEA [11]
dCOEA [12]
PPS [14]
MOEA/D [13]
MMTL-MOEA/D [15]
RI-MOEA/D [15]
PPS-MOEA/D [15]
SVR-MOEA/D [16]
Tr-MOEA/D [17]
KF-MOEA/D [18]
Quality indicatorsIGD and HVDMIGD
Testbeds (DMOPs)FDA, dMOP, UDF, FFDA, dMOP
Number of Objectives (M)2 and 32 and 3
Population size100
Max-Iteration ( Max i t e r )3 × n t × τ t + 50
Independent runs30
Table 4. Parameters Settings and MaOEAs Solvers used for the comparative study (2).
ParametersComparative Study (2) for MaOPs
Comparable MaOEAsThirteen MOEAs [55]Seven MaOEAs [54]
MSOPS-II [21]
MOEA/D [13]
HypE [27]
picea-G [41]
SPEA/SDE [39]
GrEA [35]
NSGA-III [24]
KnEA [40]
RVEA [22]
Two_Arch2 [41]
θ -dea [36]
MOEA/DD [25]
AnD [55]
PMEA-MA [54]
PMEA*-MA [54]
SPEA2/SDE [39] NSGA-II/SDR [53]
MaOEA/IDG [31]
VaEA [52]
spea [56]
Quality indicatorsIGDIGD
BenchmarksWFG, MaFWFG, DTLZ
Number of Objectives (M)2, 3, and 73, 5, 8, 10, and 15
Population size10092, 224, 164, 280, and 152
Max-Iteration (Max i t e r )25.000300: WFG1-9 and DTLZ 2,4,5,6,
1000: DTLZ 1,3,6
Independent runs3130
Table 5. Properties of the tested benchmarks: DMOPs and MaOPs.
DMOPs DMProperties
Dynamic
Multi
Objective
Optimization
Problems
(DMOPs)
FDA1202Type I, convex, POS: sinusoidal and vertical shift
FDA2152Type II, POF: convex to concave, dynamic density,
POS: sinusoidal and vertical shift
FDA3302Type II, POF: convex, dynamic spread, POS:
sinusoidal and vertical shift
FDA4123Type I, POF: concave, dynamic spread, POS:
sinusoidal and vertical shift
FDA5123Type II, POF: concave, dynamic spread, POS:
sinusoidal and vertical shift
dMOP1102Type III, POF: convex to concave, POS:
no change
dMOP2102Type II, POF: convex to concave, POS:
sinusoidal and vertical shift
dMOP3102Type I, POF: convex, dynamic spread, POS:
sinusoidal and vertical shift
F5, F6, F7202Type II, POF: convex to concave, POS:
F9, F10202Type II, POF: convex to concave, POS:
F8203trigonometric and vertical
UDF1102Type I, POF: linear continuous, POS:
trigonometric and vertical shift
UDF2102Type I, POF: linear continuous, POS:
polynomial and vertical shift
UDF3102Type III, POF: discontinuous, POS:
trigonometric and no variation
UDF4102Type II, convex to concave, POS:
trigonometric and horizontal shift
UDF5102Type II, convex to concave, POS:
polynomial + vertical shift
UDF6102Type III, discontinuous, POS:
trigonometric and no variation
UDF7103Type III, POF: 3D radius concave, POS:
trigonometric and no variation
Many
Objective
Optimization
Problems
(MaOPs)
MaF1Linear
MaF2112Concave
MaF3123Convex, multimodal
MaF4167Concave, multimodal
MaF5Convex, biased
MaF6Concave, degenerate
MaF7212Mixed
223Disconnected
267Multimodal
Many
Objective
Optimization
Problems
(MaOPs)
WFG1112Convex, unimodal
WFG2123Convex, disconnected
WFG3167Linear, unimodal
WFG4Concave, multimodal
WFG5123Concave, deceptive
WFG6145Concave, unimodal
WFG7178Concave, unimodal
WFG81910Concave, unimodal
WFG92415Concave, multimodal
DTLZ173Linear
95
128
1410
1915
DTLZ2123Concave
DTLZ3145Concave
DTLZ4178Concave
DTLZ51910Degenerate
DTLZ7223Disconnected
Table 6. Taguchi Array Design L 8 ( 2 6 ) of DB-CSA-II for DMOPs and of DB-CSA for MaOPs.
ProblemsID RunPopulation SizeMax-IterationParameters of Beta Function
p1p2q1q2
DMOPs11002505555
2100250550550
3200250505505
42002505050550
5100350505550
61003505050505
7200350550550
820035055055
MaOPs11001005555
21001005505050
310025.000505550
410025.000550550
5300100505505
63001005050550
730025.000555050
830025.00055055
Table 7. Best Experimental Design using Taguchi Method for MIGD, IGD, HVD Metric for DMOPs (FDA, dMOP) with Severe and Moderate Dynamic Changes and MaOPs with 7 Objectives (WFG).
ProblemsPopulation SizeMax-IterationParameters of Beta FunctionBest Mean Values
p1p2q1q2MIGDIGDHVD
DMOPs with Severe ChangeFDA1100250550550 6.37 × 10 7 5.73 × 10 4 2.22 × 10 2
FDA2100250550550 3.33 × 10 6 2.99 × 10 3 7.96 × 10 1
FDA3100250550550 2.63 × 10 4 2.36 × 10 1 4.89 × 10 1
FDA4100250550550 1.60 × 10 6 1.44 × 10 3 7.92 × 10 2
FDA5200250505505 3.14 × 10 6 2.83 × 10 3 1.48 × 10 1
dMOP1100250550550 7.28 × 10 7 6.55 × 10 4 4.14 × 10 3
dMOP22002505050550 1.26 × 10 6 1.13 × 10 3 3.80 × 10 2
dMOP32002505050550 5.21 × 10 5 4.69 × 10 2 4.23 × 10 + 0
DMOPs with Moderate changeFDA1100350550550 6.35 × 10 7 5.71 × 10 4 1.96 × 10 2
FDA2100350550550 4.19 × 10 6 3.77 × 10 3 4.33 × 10 + 0
FDA3100350550550 4.69 × 10 4 4.22 × 10 1 6.24 × 10 1
FDA4100350550550 1.43 × 10 6 1.29 × 10 3 3.67 × 10 2
FDA5100350550550 3.90 × 10 6 3.51 × 10 3 1.61 × 10 1
dMOP1100350550550 5.95 × 10 7 5.35 × 10 4 1.99 × 10 2
dMOP2100350505550 1.02 × 10 6 9.18 × 10 4 3.13 × 10 2
dMOP3200350550550 3.36 × 10 5 3.02 × 10 2 1.68 × 10 1
MaOPs with 7 objectivesWFG110025.000550550- 2.42 × 10 4 -
WFG210025.000505550- 2.53 × 10 4 -
WFG310025.000550550- 5.87 × 10 5 -
WFG410025.000505550- 4.09 × 10 4 -
WFG510025.000505550- 1.54 × 10 4 -
WFG610025.000505550- 3.57 × 10 4 -
WFG710025.000505550- 3.87 × 10 4 -
WFG810025.000550550- 1.91 × 10 3 -
WFG910025.000550550- 7.93 × 10 4 -
Table 8. Non-parametric statistical analysis based on a Wilcoxon signed rank test of DB-CSA-II vs. six peer transfer-learning based approaches over MIGD metric for FDA and dMOP functions.
DB-CSA-II (with Dynamic Process) vs. | QI | Prob. | R− | R+ | p-Value | Best Method
MMTL-MOEA/D | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
KF-MOEA/D | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
PPS-MOEA/D | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
SVR-MOEA/D | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
Tr-MOEA/D | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
RI-MOEA/D | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
CSA | MIGD | FDA & dMOP | 300 | 0 | 0.000018 | DB-CSA-II
DB-CSA | MIGD | FDA & dMOP | 275 | 25 | 0.000355 | DB-CSA-II
Table 9. Non-parametric statistical analysis based on a Wilcoxon signed rank test of DB-CSA-II vs. five peer MOEAs over IGD, HVD metrics for FDA, dMOP, UDF and F functions.
DB-CSA-II (with Dynamic Process) vs. | Prob. | QI | R− | R+ | p-Value | Best Method
DNSGA-II | FDA & dMOP | IGD | 235 | 65 | 0.015158 | DB-CSA-II
dCOEA | FDA & dMOP | IGD | 213 | 87 | 0.071861 |
PPS | FDA & dMOP | IGD | 241 | 59 | 0.009322 | DB-CSA-II
MOEA/D | FDA & dMOP | IGD | 231 | 69 | 0.020652 | DB-CSA-II
SGEA | FDA & dMOP | IGD | 204 | 96 | 0.122865 |
CSA | FDA & dMOP | IGD | 300 | 0 | 0.000018 | DB-CSA-II
DB-CSA | FDA & dMOP | IGD | 275 | 25 | 0.000355 | DB-CSA-II
DNSGA-II | FDA & dMOP | HVD | 222 | 78 | 0.039672 | DB-CSA-II
dCOEA | FDA & dMOP | HVD | 206 | 94 | 0.109599 |
PPS | FDA & dMOP | HVD | 219 | 81 | 0.048675 | DB-CSA-II
MOEA/D | FDA & dMOP | HVD | 223 | 77 | 0.037005 | DB-CSA-II
SGEA | FDA & dMOP | HVD | 205 | 95 | 0.116083 |
CSA | FDA & dMOP | HVD | 289 | 11 | 0.000071 | DB-CSA-II
DB-CSA | FDA & dMOP | HVD | 240 | 60 | 0.010128 | DB-CSA-II
DNSGA-II | UDF & F | IGD | 91 | 0 | 0.001474 | DB-CSA-II
dCOEA | UDF & F | IGD | 90 | 0 | 0.001871 | DB-CSA-II
PPS | UDF & F | IGD | 86 | 5 | 0.004649 | DB-CSA-II
MOEA/D | UDF & F | IGD | 91 | 0 | 0.001474 | DB-CSA-II
SGEA | UDF & F | IGD | 85 | 6 | 0.005772 | DB-CSA-II
CSA | UDF & F | IGD | 90 | 1 | 0.001871 | DB-CSA-II
DB-CSA | UDF & F | IGD | 91 | 0 | 0.001474 | DB-CSA-II
DNSGA-II | UDF & F | HVD | 36 | 55 | 0.506746 |
dCOEA | UDF & F | HVD | 34 | 57 | 0.421579 |
PPS | UDF & F | HVD | 35 | 56 | 0.463071 |
MOEA/D | UDF & F | HVD | 36 | 55 | 0.506746 |
SGEA | UDF & F | HVD | 35 | 56 | 0.463071 |
CSA | UDF & F | HVD | 69 | 22 | 0.100525 |
DB-CSA | UDF & F | HVD | 71 | 20 | 0.074735 |
Table 10. Non-parametric statistical analysis based on Wilcoxon signed rank test of DB-CSA vs. thirteen peer MAOEAs over the IGD metric for WFG, MaF and DTLZ functions.
DB-CSA vs.QIProb.R−R+p-ValueBest Method
MSOPS-IIIGDWFG37800.000006DB-CSA
MOEA/D37800.000006
HypE37440.000009
PICEA-g37800.000006
SPEA/SDE37620.000007
GrEA37800.000006
NSGA-III37350.000010
KnEA37800.000006
RVEA37800.000006
Two_Arch237440.000009
θ -DEA376.51.50.000007
MOEA/DD37710.000006
AnD37800.000006
CSA37800.000006
MSOPS-IIIGDMaF23100.000060DB-CSA
PMEA*-MA103500.000006
MOEA/D23100.000060
HypE23100.000060
PICEA-g23100.000060
SPEA/SDE23100.000060
GrEA23100.000060
NSGA-III23100.000060
KnEA23100.000060
RVEA23100.000060
Two_Arch223100.000060
θ -DEA23100.000060
MOEA/DD23100.000060
AnD23100.000060
CSA23100.000060
PMEA-MAIGDWFG10350 5.179 × 10 9 DB-CSA
PMEA*-MA10350 5.179 × 10 9
SPEA2/SDE103505. 179 × 10 9
NSGA-II/SDR10350 5.179 × 10 9
MaOEA/IGD10350 5.179 × 10 9
VaEA10350 5.179 × 10 9
SPEA10350 5.179 × 10 9
CSA10350 5.179 × 10 9
PMEA-MAIGDDTLZ6291 2.70 × 10 7 DB-CSA
PMEA*-MA10350 2.70 × 10 7
PMEA*-MA6291 2.70 × 10 7
SPEA2/SDE6300 2.48 × 10 7
NSGA-II/SDR6300 2.48 × 10 7
MaOEA/IGD6300 2.48 × 10 7
VaEA6291 2.70 × 10 7
SPEA6300 2.48 × 10 7
Table 11. Run-times of DMOEAs for Solving FDA and dMOP Test Suites (Unit: Seconds).
DMOPs | MMTL-MOEA/D | KF-MOEA/D | PPS-MOEA/D | SVR-MOEA/D | Tr-MOEA/D | DB-CSA-II
FDA1 | 5.83 | 3.55 | 3.11 | 4.85 | 42.54 | 10.32
FDA2 | 5.40 | 3.46 | 3.08 | 4.63 | 45.27 | 12.86
FDA3 | 5.09 | 3.92 | 3.64 | 4.78 | 57.71 | 14.96
FDA4 | 10.06 | 9.78 | 13.32 | 9.66 | 132.59 | 21.92
FDA5 | 9.94 | 10.24 | 14.83 | 11.52 | 115.52 | 19.85
dMOP1 | 7.05 | 7.25 | 8.96 | 8.34 | 80.23 | 9.05
dMOP2 | 7.74 | 4.02 | 4.93 | 4.47 | 73.53 | 9.29
dMOP3 | 5.63 | 3.52 | 4.15 | 4.61 | 75.55 | 9.73
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
