Article

A Hybrid of Fully Informed Particle Swarm and Self-Adaptive Differential Evolution for Global Optimization

1 Faculty of Art, Computing and Creative Industry, Universiti Pendidikan Sultan Idris (UPSI), Tanjong Malim 35900, Perak, Malaysia
2 Centre of Global Sustainability Studies (CGSS), Level 5, Hamzah Sendut Library, Universiti Sains Malaysia, Minden 11800, Pulau Pinang, Malaysia
3 School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia
4 School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia
5 Faculty of Engineering and Computing, First City University College, Bandar Utama, Petaling Jaya 47800, Selangor, Malaysia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(22), 11367; https://doi.org/10.3390/app122211367
Submission received: 12 September 2022 / Revised: 2 November 2022 / Accepted: 5 November 2022 / Published: 9 November 2022
(This article belongs to the Special Issue Evolutionary Computation: Theories, Techniques, and Applications)

Abstract: Evolutionary computation (EC) algorithms and swarm intelligence have been widely used to solve global optimization problems. A candidate solution is referred to by different terms in the two fields: it is called an individual in EC and a particle in swarm intelligence. Self-adaptive differential evolution (SaDE) is one of the promising EC variants for solving global optimization problems. Its self-adapting parameter values relieve the user of the burden of choosing suitable parameter values for creating the next generation's individuals and achieving optimal convergence. In this paper, a fully informed particle swarm (FIPS) is hybridized with SaDE to enhance SaDE's exploitation capability while maintaining its exploration power so that it does not become trapped in stagnation. The proposed hybrid is called FIPSaDE. FIPS, a variant of particle swarm optimization (PSO), helps solutions jump out of stagnation by gathering knowledge about their neighborhoods. Each solution in the FIPS swarm is influenced by a group of solutions in its neighborhood, rather than by the best position it has visited. Indirectly, FIPS increases the diversity of the swarm. The proposed algorithm is tested on benchmark test functions with various properties from the "CEC 2005 Special Session on Real-Parameter Optimization". Experimental results show that, in terms of solution quality, FIPSaDE is more effective than, and reasonably competitive with, its standalone constituents, FIPS and SaDE, in solving the test functions.

1. Introduction

Real-life optimization problems, such as routing, scheduling and prediction problems, are often unpredictable and dynamic, facing many time-varying and unforeseen constraints. The primary purpose of optimization is to find the minimum or maximum value in a search space where the problem often has more than one minimum or maximum point. This type of problem is considered NP-hard (nondeterministic polynomial time), requiring high computational effort to solve. Currently, evolutionary computation (EC) is widely used in numerous studies to solve this type of problem. The EC technique has proven to provide an effective search in complex search spaces to achieve optimal or near-optimal solutions and has been shown to have strong global search ability [1].
As a classic heuristic method, Particle Swarm Optimization (PSO) is one of the most used and reliable swarm intelligence techniques [2]. It has been successfully adopted in many practical applications due to its efficiency, fast optimization speed and simplicity of implementation. In traditional PSO, each particle of the population learns from its nearest neighbors, and this may cause the particle swarm to stagnate in a local region due to rapid convergence [1]. Unlike traditional PSO, one of its variants, the Fully Informed Particle Swarm (FIPS), introduced by Mendes, Kennedy and Neves in 2004, weights the contributions of all particles equally. The particles are influenced by their whole neighborhood in a specific way according to different neighborhood topologies. This technique enables each particle to access the most successful solutions from the whole swarm, not necessarily the best from its nearest neighbor. Accordingly, the performance of FIPS-type algorithms generally depends more on the neighborhood topology [3].
Besides PSO and its variants, Differential Evolution (DE) is also a widely used algorithm with a remarkable ability to find optimal solutions. DE draws strength from its handling of initial points, where multiple starting points are randomly chosen when sampling potential solutions [4,5]. Several adaptive DE variants focus on manipulating DE's parameters by introducing new ways of controlling the values of existing parameters. The adaptive differential evolution (jDE) [6] implemented several DE strategies to control the diversity of the population; additionally, it can self-adapt the scaling factor F and the crossover rate Cr. The parameter adaptive differential evolution (JADE) [7] relies on a greedy mutation strategy (DE/current-to-pbest) with an optional external archive, utilizing previously explored inferior solutions. In SaDE [8], suitable learning strategies and parameter settings are gradually self-adapted according to previous learning experiences. Moreover, the choice of learning strategy and the two essential control parameters of DE, F and Cr, are not required to be specified before the evolution phase.
In the past few years, the development of adaptive mechanisms has emerged as one of the critical issues in the EC branch. An adaptive mechanism refers to the ability of an algorithm to change its behavior according to information available during its run. The number of works in which adaptive mechanisms are successfully used has increased enormously over the past few decades. The applications cover a wide variety of areas, such as routing problems, signal processing, optimization problems and medical fields. Furthermore, the efficiency of an adaptive mechanism mainly depends on the algorithm design and the algorithm used for adaptation.
One of the common drawbacks EC algorithms face in solving optimization problems is a lack of diversity in solutions, leading to suboptimal solutions. A potential approach to overcoming this drawback is a hybrid of EC and swarm intelligence. FIPS and SaDE have emerged as promising swarm intelligence and EC algorithms, respectively, for solving optimization problems. Therefore, they are chosen to form a hybrid in our study. FIPS and SaDE are hybridized owing to the solutions' diversity in FIPS and the quality of the optimal solution in SaDE. FIPS improves the solutions' diversity by gathering knowledge about each solution's neighborhood: each solution in FIPS receives information from all its neighbors rather than just the best one. SaDE is a simple, powerful and self-adaptive type of DE that finds an optimal solution through exploration and exploitation. SaDE relies minimally on user-specified parameters because it can self-adapt its parameters to the optimization problem. FIPS improves the solutions through information gathered from their neighbors and maneuvers the group of solutions toward the optimal region, which is then explored and exploited by SaDE. Therefore, in the hybrid of FIPS and SaDE, the two complement each other by balancing the exploration of neighbors and the exploitation of the optimal solutions.

2. Related Works

2.1. Improved Differential Evolution

DE is one of the most popular evolutionary algorithms for solving optimization problems because it is simple, robust and computationally efficient. However, DE faces issues with its convergence rate and local exploitation [9]. Therefore, various research efforts for its improvement are continuously carried out even though the algorithm was introduced 25 years ago by Storn and Price [10]. Moreover, surveys of DE variants are updated every few years to accommodate the various modifications of DE, as shown in [5,9,11,12,13,14,15]. The surveys show that most research efforts focus on improving the performance of DE through parameter settings and modifications to the genetic operations, consisting of initialization, differential mutation, crossover and selection. Another potential direction for improving DE is hybridization, which has gained research attention [9,11,12].
Most optimization algorithms are not able to solve a wide variety of problems [9]. Hence, continuous research efforts are required to improve algorithms by combining their complementary advantages. Hybridization is one of the main research directions for improving the performance of DE because different optimization algorithms have different search behaviors and advantages [16].
Hybridization is implemented to enhance DE's performance and overcome its limitations, such as convergence speed [11], premature convergence [9] and local minima [9]. The algorithms used for hybridization with DE can belong to either the same or different categories of algorithms. The work in [12] categorizes the algorithms hybridized with DE into statistical techniques and algorithms. On the other hand, the work in [11] shows that most of the algorithms hybridized with DE come from swarm intelligence. The swarm intelligence algorithms commonly hybridized with DE include PSO, the ant colony algorithm (ACA) and the artificial bee colony (ABC).
DE variants produce robust solutions owing to their exploration ability. However, they have issues related to premature convergence and local exploitation. Likewise, one of the drawbacks of PSO is premature convergence [17]. Premature convergence is caused by improper velocity adjustment when PSO is configured with inappropriate acceleration coefficients and inertia weights [18]. Consequently, the particles move in undesired directions, stagnating around or becoming trapped in suboptimal regions [18]. A review of the modifications, extensions and hybridizations of the PSO algorithm and their applications is presented in [19].
Most hybrid variants of DE are formed with PSO [11], and an extensive survey of such hybrids can be found in [9]. A hybrid of advanced DE (ADE) and advanced PSO (APSO), namely AHDEPSO, was proposed to solve unconstrained optimization problems in [9]. Given that a hybrid of DE and PSO offers complementary properties and has the effect of balancing the exploration and exploitation phases, hybrids of their advanced variants have attracted more research attention. How best to combine PSO and DE is still an open problem [16].
The premature convergence of PSO in the optimization process can be alleviated by four different methods: parameter settings, neighborhood topology, learning strategy and hybridization [20]. Neighborhood topology enhances PSO's exploration capability, and different topologies suit different optimization problems [20]. On the other hand, hybridization is used to complement the weaknesses of intelligent algorithms by combining their helpful features. Therefore, FIPS, instead of the original PSO, is hybridized with DE to improve the performance of the optimization algorithms.
The self-adaptive mutation differential evolution algorithm based on particle swarm optimization (DEPSO) [21] improves optimization by balancing two mutation strategies. The hybrid uses a selection probability to decide between a modified DE/rand/1 and PSO's mutation strategy. The modified mutation strategy with an elite archive strategy, called DE/e-rand/1, is proposed in DEPSO. The framework of DEPSO is still based on DE. Unlike DEPSO, our proposed FIPSaDE is based on the framework of PSO. The hybridization in DEPSO focuses on selecting mutation strategies from DE and PSO to generate the mutant vector; therefore, its hybridization involves the mutation phase only. In contrast, the hybridization in FIPSaDE integrates DE's algorithmic operations, from the mutation to the selection phases, into the framework of PSO before the velocity and position updates.
Dash et al. proposed HDEPSO [22] to solve various benchmark functions and an optimization problem focusing on the effectiveness of the sharp edge FIR filter (SEFIRF). The proposed framework is similar to FIPSaDE, whereby DE's mutation, crossover and selection are integrated with the best particles of PSO to enhance global searching ability. However, the settings of F and Cr were not adaptive: F was fixed as a constant and Cr was an arbitrary number in [0, 1]. In contrast, the settings of F and Cr in FIPSaDE are adaptive.
A hybrid approach may introduce more uncertainty into parameter setting, given that a user needs to know how to set the parameters of at least two algorithms [23]. The task of parameter setting in solving optimization problems is problem-dependent, user-dependent and algorithm-dependent. It becomes more complex when the algorithms in the hybrid come from different branches of machine learning. Therefore, researchers have started to focus on adaptation methods for users who have minimal knowledge about the algorithms, parameter settings and problems. DE variants with varying levels of adaptation have shown promising results in solving optimization problems, and the popularity of adaptation for setting DE's parameters can be seen in the aforementioned reviews.
For example, the work in [24] showed the use of adaptation to tune the configurations of F, Cr and the mutation strategy at different stages of the evolution. Another work demonstrating the use of adaptation to control the parameters F and Cr is the Success-History-based Adaptive DE (SHADE) algorithm [25], which applies a nearest-spatial-neighborhood-based modification to the adaptation of these parameters. Do et al. [26] also proposed an adaptive mechanism to determine the parameters F and Cr, with the mutation and selection processes determined by best-individual-based mutation and elitist selection techniques. Adaptation of the population size is summarized in the review by Piotrowski [27]. These studies show the use of adaptation to control DE's various parameters.
The trend toward adaptively setting DE's parameters, from partial to full adaptiveness, is growing. Since adaptive DE variants have shown promising results, using one to form the hybrid can reduce the complexity of parameter setting, because the number of algorithms requiring user-specified parameters declines. Therefore, we use an adaptive DE, the SaDE in [8], to form the hybrid of DE and PSO in the current work. SaDE adapts the control parameters F and Cr under the DE/rand/1/bin mutation strategy.

2.2. Particle Swarm Optimization

PSO is another branch of EC that operates based on swarm intelligence. Its implementation is simple and easy, but it also suffers from premature convergence [18,28]. Therefore, various modifications have been applied to the standard PSO to overcome this drawback. In [18], the researchers categorize the modifications of PSO into four strategies: (1) modification of PSO's control parameters; (2) hybridization with other meta-heuristic algorithms such as GA and DE; (3) cooperation; and (4) multi-swarm techniques. The work in [9,18] reflects that a hybrid of PSO and DE has gained popularity in recent years as a strategy to improve EC's performance in solving optimization problems. The performance of PSO is affected by its neighborhood topology, whereby the particles within the neighborhood topology communicate with each other and share information. FIPS is the PSO variant that relies on the best positions of a particle's neighbors when updating its velocity, to prevent it from being stuck at a local optimum. Hybrid variants of DE and PSO have shown better solution quality and computational efficiency than the original PSO [18]. Therefore, a hybrid of the adaptive SaDE and FIPS is formed to complement their weaknesses in solving optimization problems in our current work. Brief descriptions of SaDE and FIPS are provided in Section 2.3 and Section 2.4.

2.3. Self-Adaptive Differential Evolution (SaDE)

SaDE is a variant of DE that can produce better results than traditional DE algorithms [2]. This algorithm has been applied to various optimization problems [2,3,4,6,7]. In SaDE, two out of three critical control parameters of DE (i.e., F and Cr) are manipulated to improve DE. In the earliest EA algorithms, these control parameters were treated as externally fixed values and were not deemed to be evolving entities. Later on, it was determined that certain parameters could be changed during the evolution process in order to reach the desired level of convergence [8].
The population size Np is not favored as an adapted parameter because its value is not sensitive to the efficiency and robustness of the DE algorithm [8]. Np is kept as a user-specified parameter, whereas F is sampled for different individuals from a normal distribution with mean 0.5 and standard deviation 0.3, kept within (0, 2] [29]. Instead of the commonly used (0, 1], the range (0, 2] was chosen for F to maintain both small F values for local search and large F values for global search. Cr is initially normally distributed with mean Crm and standard deviation 0.1. The Crm value is set to 0.5 as a starting point, and the sampled values are held for several generations before being replaced by new values drawn from a similar normal distribution for the next generations.
Throughout each generation, the Cr values associated with trial vectors that manage to enter the next generation are recorded. The mean of the normal distribution of Cr is then recalculated based on all recorded values corresponding to the successful trial vectors during that cycle. To prevent potentially inappropriate long-term accumulation, the recorded successful Cr values are discarded once the mean of the normal distribution has been recalculated.
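To make the scheme concrete, the following Java sketch mirrors the sampling and memory mechanism described above. It is a minimal illustration under the stated settings; the class and method names are ours, not taken from the original implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal sketch of SaDE's F and Cr sampling as described above:
// F ~ N(0.5, 0.3) kept within (0, 2], Cr ~ N(Crm, 0.1), and Crm
// recalculated from the Cr values of successful trial vectors.
public class SaDEParams {
    private final Random rng = new Random();
    private final List<Double> successfulCr = new ArrayList<>();
    private double crm = 0.5;   // initial mean of the Cr distribution

    /** Sample F from N(0.5, 0.3), kept inside (0, 2]. */
    public double sampleF() {
        double f;
        do {
            f = 0.5 + 0.3 * rng.nextGaussian();
        } while (f <= 0.0 || f > 2.0);
        return f;
    }

    /** Sample Cr from N(crm, 0.1), truncated to [0, 1]. */
    public double sampleCr() {
        double cr = crm + 0.1 * rng.nextGaussian();
        return Math.min(1.0, Math.max(0.0, cr));
    }

    /** Record a Cr value whose trial vector survived into the next generation. */
    public void recordSuccess(double cr) {
        successfulCr.add(cr);
    }

    /** Recalculate crm from the recorded values, then discard them. */
    public void updateCrm() {
        if (!successfulCr.isEmpty()) {
            double sum = 0.0;
            for (double c : successfulCr) sum += c;
            crm = sum / successfulCr.size();
            successfulCr.clear();   // avoid long-term accumulation
        }
    }
}
```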

2.4. Fully Informed Particle Swarm (FIPS)

The FIPS algorithm, developed by Mendes et al. in 2004 [1], is a variant of PSO in which each particle is influenced by all of its K neighbors. This contrasts with the standard PSO algorithm, in which a particle is drawn to the best location it has visited and to the best position found by the best-performing particle in its neighborhood. The algorithm is motivated by how individuals in human society are influenced by a statistical summary of their world [30].
Each particle in the FIPS swarm is affected by a group of particles from its surrounding neighborhood, not necessarily by the best particle's location [1]. The velocity equation of FIPS is a modification of the canonical PSO equation. Each particle inside the swarm is affected by the achievements of all its neighbors, rather than pointing to just one best-performing neighbor [30]. FIPS's velocity and position updates are defined by Equations (1) and (2), respectively:
$$v_i(t+1) = w\,v_i(t) + \frac{1}{|\mathcal{N}_i|} \sum_{m=1}^{|\mathcal{N}_i|} \gamma_m(t)\,\big(y_m(t) - x_i(t)\big) \qquad (1)$$

$$x_i(t+1) = x_i(t) + v_i(t+1) \qquad (2)$$

where N_i refers to the set of particles in particle i's neighborhood, γ_m(t) is drawn from U(0, c_1 + c_2)^D with D the dimension of the problem, and y_m(t) is the best position previously visited by particle m. The coefficients c_1, c_2 and w represent the cognitive coefficient, the social coefficient and the inertia weight, respectively. There are five variants of the algorithm, as stated by Mendes et al. [1], namely FIPS, wFIPS, wdFIPS, Self and wSelf:
  • FIPS refers to the fully informed particle swarm algorithm in which w is a constant and all neighborhood contributions are weighted equally;
  • wFIPS refers to a FIPS variant in which each neighbor's contribution is weighted by the goodness of its previous best;
  • wdFIPS refers to a FIPS variant in which each neighbor's contribution is weighted by its distance in the search space from the target particle;
  • Self refers to a FIPS variant in which the particle's own previous best receives half the weight;
  • wSelf refers to a FIPS variant in which the particle's own previous best receives half the weight and each neighbor's contribution is weighted by the goodness of its previous best.
Swarm optimization algorithms are greatly influenced by the neighborhood. As a result, the efficiency of the FIPS algorithm generally depends even more on its neighborhood topology. The topology types selected by Cleghorn and Engelbrecht [30] for FIPS include All, Ring, Four Clusters, Pyramid, Square, UAll, UFour Clusters, UPyramid and USquare. A topology with the "U" prefix refers to the same topology without the prefix, but with the particle's own index omitted from the neighborhood.
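The following Java sketch illustrates the fully informed update of Equations (1) and (2) for a single particle. It is a minimal reading of the equations; the class, method and parameter names are illustrative rather than taken from the original implementation, and neighborhood construction is assumed to happen elsewhere:

```java
import java.util.Random;

// Minimal sketch of the FIPS velocity and position updates,
// Equations (1) and (2), for one particle.
public class FipsUpdate {
    private static final Random RNG = new Random();

    /**
     * @param v     current velocity of particle i (updated in place)
     * @param x     current position of particle i (updated in place)
     * @param yBest previous best positions of all |N_i| neighbors
     * @param w     inertia weight (e.g., 0.7298)
     * @param c1c2  sum of the cognitive and social coefficients (c1 + c2)
     */
    public static void update(double[] v, double[] x, double[][] yBest,
                              double w, double c1c2) {
        int n = yBest.length;                        // |N_i|
        for (int d = 0; d < x.length; d++) {
            double social = 0.0;
            for (double[] ym : yBest) {
                double gamma = RNG.nextDouble() * c1c2;  // gamma_m ~ U(0, c1 + c2)
                social += gamma * (ym[d] - x[d]);
            }
            v[d] = w * v[d] + social / n;            // Equation (1)
            x[d] += v[d];                            // Equation (2)
        }
    }
}
```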

3. Methodology

FIPS and SaDE are hybridized owing to the solutions' diversity in FIPS and the quality of the optimal solution in SaDE. One common drawback of optimization algorithms is a lack of diversity, leading to a suboptimal solution [31]. The use of FIPS can mitigate this drawback: in FIPS, the neighbors around a solution become the source of influence that improves the solution's diversity. DE is remarkable at finding the optimal solution within a group of solutions. However, choosing suitable control parameter values is tricky, as the values need to vary with each problem. SaDE has the advantage that the user does not need to determine the optimal parameter settings, and the time complexity does not increase because the rules for applying SaDE are simple [29]. SaDE performs well in finding the optimal solution within a group of solutions by self-adapting its parameters. Each particle in FIPS receives information from all of its neighbors rather than just the best one. A particle may sometimes be trapped in local optima, and the FIPS topology network may help to monitor and maneuver the particle's position. Therefore, the operations in FIPS improve the solutions through the exploration of their neighbors and guide the group of solutions toward the optimal region.
The hybridization of SaDE and FIPS, called FIPSaDE, can improve performance in solving complex optimization problems. The main process of FIPSaDE is structured according to the usual structure of the PSO algorithm. A candidate solution in FIPSaDE is called an individual or a solution interchangeably. The FIPS parameters are updated each time before the mutation process of the SaDE algorithm and again during the selection process, controlling the quality of the individuals chosen for the next generation. The process is repeated iteratively until the algorithm reaches the optimum value. The FIPS process creates a disturbance in the population, as FIPS focuses on moving each solution to explore its surroundings. This helps maintain the diversity of the population and produce an excellent optimal solution. The pseudocode of FIPSaDE is given in Algorithm 1.
Algorithm 1: The Pseudocode of the FIPSaDE for Solving a Minimization Problem
****************************** Start of FIPS ******************************
Initialization
Define the swarm size N_p
for each individual i ∈ [1, N_p] do
    Randomly generate x_i and v_i.
    Evaluate the fitness of x_i, denoting it as f(x_i).
    Set Pbest_i = x_i and f(Pbest_i) = f(x_i).
end for
Set Gbest = Pbest_1 and f(Gbest) = f(Pbest_1).
for each individual i ∈ [1, N_p] do
    if f(Pbest_i) < f(Gbest) then
        Gbest = Pbest_i and f(Gbest) = f(Pbest_i)
    end if
end for
while t < maximum iterations do
    for each individual i ∈ [1, N_p] do
        -.-.-.-.-.-.-.-.-.-.-.-. Start of SaDE -.-.-.-.-.-.-.-.-.-.-.-.
        Generate vector [x_i, F_i, Cr_i].
        Update the parameters F_i and Cr_i based on Equations (3) and (4).
        Mutate x_i based on Equation (5).
        Crossover x_i based on Equation (6).
        Select x_i based on Equation (7).
        -.-.-.-.-.-.-.-.-.-.-.-.- End of SaDE -.-.-.-.-.-.-.-.-.-.-.-.-
        Evaluate the velocity v_i(t+1) based on Equation (1).
        Update the position x_i(t+1) of the particle based on Equation (2).
        if f(x_i(t+1)) < f(Pbest_i) then
            Pbest_i = x_i(t+1)
            f(Pbest_i) = f(x_i(t+1))
        end if
        if f(Pbest_i) < f(Gbest) then
            Gbest = Pbest_i
            f(Gbest) = f(Pbest_i)
        end if
    end for
    t = t + 1
end while
return Gbest
******************************* End of FIPS *******************************
In FIPSaDE, a swarm of individuals flies through a D-dimensional search space seeking an optimal solution. Each individual i possesses a current velocity vector v_i = [v_i1, v_i2, …, v_iD] and a current position vector x_i = [x_i1, x_i2, …, x_iD], where D is the number of dimensions. The FIPSaDE process starts by randomly initializing v_i and x_i.
In each iteration t, FIPSaDE uses a self-adapting mechanism for the two control parameters F and Cr. Each individual i is extended with control parameter values F_i and Cr_i to form a vector [x_i, F_i, Cr_i]. New control parameters F_{i,t+1} and Cr_{i,t+1} are calculated before the mutation operator based on Equations (3) and (4). The parameters therefore influence the mutation, crossover and selection operations that produce the new vector x_{i,t+1}:
$$F_{i,t+1} = \begin{cases} F_l + rand_1 \cdot F_u & \text{if } rand_2 < \tau_1 \\ F_{i,t} & \text{otherwise} \end{cases} \qquad (3)$$

$$Cr_{i,t+1} = \begin{cases} rand_3 & \text{if } rand_4 < \tau_2 \\ Cr_{i,t} & \text{otherwise} \end{cases} \qquad (4)$$

where rand_j, for j ∈ {1, 2, 3, 4}, are uniform random values within the range [0, 1]. The parameters τ_1, τ_2, F_l and F_u are fixed to the values 0.1, 0.1, 0.1 and 0.9, respectively, as in [8].
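A minimal Java sketch of this self-adaptive update, using the constants above, could look as follows (class and method names are illustrative):

```java
import java.util.Random;

// Minimal sketch of the self-adaptive parameter update in Equations (3)
// and (4), with the constants from [8]: tau1 = tau2 = 0.1, Fl = 0.1, Fu = 0.9.
public class SelfAdaptiveParams {
    private static final double TAU1 = 0.1, TAU2 = 0.1;
    private static final double FL = 0.1, FU = 0.9;
    private static final Random RNG = new Random();

    /** Equation (3): regenerate F with probability tau1, otherwise keep it. */
    public static double updateF(double fOld) {
        return (RNG.nextDouble() < TAU1) ? FL + RNG.nextDouble() * FU : fOld;
    }

    /** Equation (4): regenerate Cr with probability tau2, otherwise keep it. */
    public static double updateCr(double crOld) {
        return (RNG.nextDouble() < TAU2) ? RNG.nextDouble() : crOld;
    }
}
```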
In the mutation process, for each target vector x_i, a mutant vector q_i is generated according to Equation (5):

$$q_{i,t+1} = x_{r_1,t} + F_{i,t}\,\big(x_{r_2,t} - x_{r_3,t}\big) \qquad (5)$$

with mutually different, randomly chosen indexes r_1, r_2, r_3 ∈ [1, N_p].
Then, a crossover operator forms a trial vector p_i = (p_i1, p_i2, …, p_iD) by mixing the target vector with the mutant vector using Equation (6):

$$p_{ij,t+1} = \begin{cases} q_{ij,t+1} & \text{if } rand_j(0,1) \le Cr_{i,t} \text{ or } j = j_{rand} \\ x_{ij,t} & \text{otherwise} \end{cases} \qquad (6)$$

for i = 1, 2, …, N_p and j = 1, 2, …, D. The index j_rand ∈ {1, …, D} is a randomly chosen integer that ensures the trial vector contains at least one component from the mutant vector.
In the selection process, a greedy selection scheme for minimization is used, as shown in Equation (7):

$$x_{i,t+1} = \begin{cases} p_{i,t+1} & \text{if } f(p_{i,t+1}) \le f(x_{i,t}) \\ x_{i,t} & \text{otherwise} \end{cases} \qquad (7)$$
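The following Java sketch combines Equations (5), (6) and (7) into one DE step for individual i. It is a minimal illustration of the three operations; the fitness function is passed in, and all names are ours rather than the authors' code:

```java
import java.util.Random;

// Minimal sketch of one DE step per individual, following Equations (5)-(7):
// DE/rand/1 mutation, binomial crossover and greedy selection (minimization).
public class DeStep {
    private static final Random RNG = new Random();

    public static double[] step(double[][] pop, int i, double f, double cr,
                                java.util.function.ToDoubleFunction<double[]> fitness) {
        int np = pop.length, dim = pop[i].length;

        // Pick three mutually different indexes r1, r2, r3, all different from i.
        int r1, r2, r3;
        do { r1 = RNG.nextInt(np); } while (r1 == i);
        do { r2 = RNG.nextInt(np); } while (r2 == i || r2 == r1);
        do { r3 = RNG.nextInt(np); } while (r3 == i || r3 == r1 || r3 == r2);

        // Equation (5): mutant vector q_i.
        double[] q = new double[dim];
        for (int j = 0; j < dim; j++) {
            q[j] = pop[r1][j] + f * (pop[r2][j] - pop[r3][j]);
        }

        // Equation (6): binomial crossover producing the trial vector p_i.
        double[] p = pop[i].clone();
        int jRand = RNG.nextInt(dim);   // guarantees at least one mutant component
        for (int j = 0; j < dim; j++) {
            if (RNG.nextDouble() <= cr || j == jRand) p[j] = q[j];
        }

        // Equation (7): greedy selection for minimization.
        return fitness.applyAsDouble(p) <= fitness.applyAsDouble(pop[i]) ? p : pop[i];
    }
}
```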
In each iteration t, the best position found by individual i, Pbest_i = [Pbest_i1, Pbest_i2, …, Pbest_iD], and the best position found by the whole swarm, Gbest = [Gbest_1, Gbest_2, …, Gbest_D], guide individual i in updating its velocity and position by Equations (1) and (2).
FIPSaDE is effective at traversing the swarm space while avoiding getting stuck in local optima, using the information collected from the individuals' neighborhoods. The individuals of each population carry control parameters adapted from SaDE as well as traversal parameters from FIPS in order to exploit the swarm space in search of the best global point while avoiding being trapped at a local point. If a population suffers from slow or premature convergence, or if the population becomes stagnant, the velocity and position parameters of each individual can help the population evolve toward a better point. The experimental results validate the effectiveness and strength of our proposed model.

4. Experimental Setup

The performance of FIPSaDE is evaluated on the 25 benchmark functions from the "CEC 2005 Special Session on Real-Parameter Optimization" (CEC 2005) [32], namely F1–F25. The benchmark set consists of 5 unimodal functions, 7 basic multimodal functions, 2 expanded multimodal functions and 11 composition functions. The details of the functions are described in Table 1. The number of variables for each function is represented by D, and the variable search ranges are represented by S. All functions are minimization problems, and their best minimum solutions can be found in [32]. All experiments were conducted in Eclipse using the Java programming language on a laptop featuring an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz processor with 4 GB of RAM.
In this experiment, the performance of FIPSaDE is studied by comparing the algorithm with four known algorithms: FIPS by Mendes et al. [1], DE by Storn and Price [10], SaDE by Brest et al. [8] and DE-PSO by Pant et al. [31]. FIPS, DE and SaDE were selected because they can be considered the parents or ancestors of FIPSaDE. DE-PSO was selected because it is a hybrid algorithm featuring a structure and concept similar to FIPSaDE's. The parameters of the algorithms are shown in Table 2 and Table 3. The optimization test for all algorithms is restricted to a maximum number of function evaluations (MAX_FEs), which is 5 × 10^4 for both the 10D and 50D problems and 3 × 10^4 for the 30D problem. A success threshold of 10^−14 was applied in the experiments: an evolutionary process is terminated if the best fitness reaches f_best < f_target, with f_target = f(x*) + 10^−14, where f(x*) is the function's known optimum. Otherwise, the process continues until it reaches MAX_FEs. The runs of all models are repeated using the k-th seed number, k = 1, 2, …, K, with K referring to the maximum number of runs.
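A minimal sketch of this stopping rule, with illustrative names, is shown below:

```java
// Minimal sketch of the termination criterion described above: a run ends
// when the best fitness falls within the 1e-14 success threshold of the
// known optimum f(x*), or when the MAX_FEs evaluation budget is exhausted.
public final class StoppingRule {
    public static boolean shouldStop(double fBest, double fOptimum,
                                     long fes, long maxFes) {
        double fTarget = fOptimum + 1e-14;   // success threshold
        return fBest < fTarget || fes >= maxFes;
    }
}
```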

5. Results and Analysis

The difference between the obtained fitness value and the optimum value, also known as the error value, is used to compare the algorithms' performance. The average errors over the independent runs of all algorithms for the 10D, 30D and 50D problems are summarized in Table 4, Table 5 and Table 6. The last row in each of Table 4, Table 5 and Table 6 shows the frequency with which each algorithm obtains the best result, h_win. Based on the results in Table 4, Table 5 and Table 6, DE cannot find solutions for functions F15–F17 because its computations produce undefined numeric results (divisions by zero) on these functions.
When the setting is 10D with N = 25 (10D25N), SaDE, DE-PSO and FIPSaDE share the highest h_win, which is 6. This means the three algorithms have comparable performance in solving the problems. An interesting finding from the comparison of h_win for SaDE, DE-PSO and FIPSaDE in the 10D25N setting is that they achieve the lowest error on different functions, which may indicate that they perform well on problems with different characteristics. As the combinations of D and N increase to 30D30N and 50D50N, FIPSaDE shows distinctive performance compared to the other algorithms, having the highest h_win in both settings: h_win = 9 in 30D30N and h_win = 12 in 50D50N. In other words, FIPSaDE's performance improves with increasing problem dimensionality and population size. In contrast, FIPS performs the worst among the algorithms in all three settings. FIPS, DE and SaDE deteriorate as the setting increases from 10D25N to 50D50N, with the strongest deterioration observed in FIPS. DE-PSO, which has characteristics similar to FIPSaDE's, shows moderate performance regardless of the setting.
FIPSaDE and its constituent algorithms, FIPS and SaDE, are further analyzed based on the best, mean and standard deviation of f_best for the different configurations of problem dimensionality and population size over all functions. The f_best values obtained by FIPSaDE, FIPS and SaDE for all functions are shown in Table 7, Table 8 and Table 9, which summarize the best, mean and standard deviation of f_best over K runs (30 runs for the 10D and 30D settings, 20 runs for the 50D setting). The best, mean and standard deviation of f_best are denoted best-f_best, mean-f_best and std-f_best, respectively. In each table, the algorithm producing the best result for each function is bolded, and its frequency, h_win, is given at the bottom.
For the 10D25N setting, FIPSaDE has the highest h_win for best-f_best, mean-f_best and std-f_best. FIPS and SaDE have similar h_win values for best-f_best and mean-f_best in the same setting. When the setting increases to 30D30N and 50D50N, FIPSaDE is still associated with the highest h_win for best-f_best, mean-f_best and std-f_best. Therefore, FIPSaDE shows better results for best-f_best, mean-f_best and std-f_best as problem dimensionality and population size increase.
FIPS's h_win values for best-f_best and mean-f_best deteriorate from 10D25N to 50D50N. For std-f_best, FIPS has consistent h_win values of 6 to 7 regardless of the problem dimensionality and population size. These findings indicate that the frequency with which FIPS produces the best best-f_best and mean-f_best among the algorithms deteriorates as problem dimensionality and population size increase, while FIPS's frequency of producing the solutions with the lowest variation is consistent across problem dimensionality and population size.
SaDE shows a decreasing trend similar to FIPS's for best-f_best and mean-f_best when the setting changes from 10D25N to 50D50N. However, the decrease is not as drastic as for FIPS. Based on the comparisons of std-f_best, SaDE and FIPS have similar h_win values in all settings except 10D25N, where SaDE's h_win for std-f_best (10) is higher than FIPS's. This finding indicates that SaDE's frequency of producing the solutions with the lowest variation is slightly lower at high problem dimensionality and population size.
The overall findings from the comparisons show that, as problem dimensionality and population size increase, FIPSaDE has the highest probability of producing a better solution compared to its constituents, FIPS and SaDE. An EC algorithm commonly uses a large population size to solve an optimization problem with high dimensionality; this approach, however, appears more effective for FIPSaDE than for FIPS and SaDE. Based on the comparisons of h_win for best-f_best and mean-f_best, FIPSaDE has the highest values, followed by FIPS and SaDE. Therefore, FIPSaDE consistently produces a group of better-quality solutions than its constituents when problem dimensionality and population size increase.
For most tested functions, FIPSaDE produces better best-f_best and mean-f_best values in the swarm space compared to the other algorithms as the problem space grows. FIPSaDE, as a hybrid of FIPS and SaDE, can maneuver and manage the swarm solutions more effectively. Therefore, FIPSaDE's ability to find the best fitness point improves as the problem space increases.
The mean-f_best values for FIPSaDE, FIPS and SaDE are denoted μ_FIPSaDE, μ_FIPS and μ_SaDE, respectively. A t-test is performed at a significance level α = 0.05 to evaluate whether μ_FIPSaDE of the proposed algorithm is better (lower) than μ_FIPS and μ_SaDE, with the hypotheses shown below.
$$H_0: \mu_{FIPSaDE} \ge \mu_{FIPS} \qquad H_1: \mu_{FIPSaDE} < \mu_{FIPS}$$
$$H_0: \mu_{FIPSaDE} \ge \mu_{SaDE} \qquad H_1: \mu_{FIPSaDE} < \mu_{SaDE}$$
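For reference, with an equal number of runs K per algorithm, a two-sample (Welch-type) t statistic for the first comparison takes the form below; the text does not specify which t-test variant was used, so this is shown only as an illustration:

$$t = \frac{\bar{f}_{FIPSaDE} - \bar{f}_{FIPS}}{\sqrt{\dfrac{s_{FIPSaDE}^2 + s_{FIPS}^2}{K}}}$$

where s^2 denotes the sample variance of f_best over the K runs; H_0 is rejected when the one-sided p-value of t falls below α.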
The results of the hypothesis tests for the pairs FIPSaDE-FIPS and FIPSaDE-SaDE under the different settings of problem dimensionality and population size are shown in Table 10. The p-values less than α are bolded, indicating that the associated H_0 is rejected and there is enough evidence to accept H_1; the frequency of rejections, denoted h(H_0), is shown for each setting at the bottom of the table. For the 10D25N setting, μ_FIPSaDE is significantly better than μ_FIPS and μ_SaDE in 10 and 9 functions, respectively. As the setting increases from 10D25N to 50D50N, the h(H_0) values for FIPSaDE-FIPS increase from 10 to 18 and then 22; hence, the frequency with which FIPSaDE is significantly better than FIPS increases. A similar, though less pronounced, increasing trend is observed for FIPSaDE-SaDE. The findings on h(H_0) show that FIPSaDE performs significantly better than FIPS and SaDE as problem dimensionality and population size increase.

6. Conclusions

In this study, a hybrid of FIPS and SaDE called FIPSaDE is proposed, and its performance is validated by running the algorithm on benchmark functions and comparing it to its respective original versions, SaDE and FIPS, as well as to related algorithms, DE and DE-PSO. The self-adaptation strategy of SaDE is adopted and maneuvered by the FIPS particle swarm, preventing the solutions from being trapped in a local region. The algorithm can adaptively adjust its parameter values while the swarm is searching for the best solution. Across different configurations of problem dimensionality and population size, the FIPSaDE algorithm consistently has the highest frequency of obtaining the lowest average errors among FIPS, DE, SaDE and DE-PSO. The frequency analysis of h_win and the hypothesis tests show that FIPSaDE performs better than its respective original versions in terms of the best and mean of f_best as the problems' dimensionality increases. Future research will investigate the strength of the proposed algorithm on other benchmark test functions. Since the current FIPSaDE is only tested on one topology, Four Clusters, the impact of the other four FIPS topologies, namely All, Ring, Pyramid and Square, should be investigated in the future for their effects on the hybrid's performance. Moreover, various performance metrics could be applied to strengthen the comparison among the algorithms. The convergence profiles and coverage curves of the proposed algorithm compared to other algorithms could also be studied in the future. Furthermore, investigations should be conducted to broaden the comparison among hybridization methods.

Author Contributions

Conceptualization, S.L.W., S.H.A. and T.F.N.; Data curation, S.L.W., S.H.A., H.I. and P.R.; Formal analysis, S.L.W., S.H.A. and P.R.; Funding acquisition, T.F.N.; Investigation, S.L.W. and S.H.A.; Methodology, S.L.W., S.H.A., H.I. and T.F.N.; Project administration, T.F.N.; Resources, S.H.A., H.I. and P.R.; Software, S.H.A., H.I. and P.R.; Supervision, S.L.W., H.I. and T.F.N.; Validation, S.L.W. and T.F.N.; Visualization, S.L.W., S.H.A. and P.R.; Writing—Original draft, S.L.W., S.H.A., H.I. and T.F.N.; Writing—Review and editing, S.L.W., H.I. and T.F.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme (FRGS) with grant number FRGS/1/2022/ICT02/UPSI/02/1.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABC	Artificial bee colony
ACA	Ant colony algorithm
CGSS	Centre of Global Sustainability Studies
DE	Differential evolution
EC	Evolutionary computation
FIPS	Fully informed particle swarm
FIPSaDE	FIPS-SaDE
GUI	Graphical user interface
NP	Nondeterministic polynomial
PSO	Particle swarm optimization
SaDE	Self-adaptive differential evolution
UPSI	Universiti Pendidikan Sultan Idris
USM	Universiti Sains Malaysia

References

  1. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  2. Zhalechian, M.; Tavakkoli-Moghaddam, R.; Rahimi, Y. A self-adaptive evolutionary algorithm for a fuzzy multi-objective hub location problem: An integration of responsiveness and social responsibility. Eng. Appl. Artif. Intell. 2017, 62, 1–16. [Google Scholar] [CrossRef]
  3. Surekha, P.; Archana, N.; Sumathi, S. Unit commitment and economic load dispatch using self adaptive differential evolution. WSEAS Trans. Power Syst. 2012, 7, 159–171. [Google Scholar]
  4. Chen, Y.; Mahalex, V.; Chen, Y.; He, R.; Liu, X. Optimal satellite orbit design for prioritized multiple targets with threshold observation time using self-adaptive differential evolution. J. Aerosp. Eng. 2015, 28, 04014066. [Google Scholar] [CrossRef]
  5. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30. [Google Scholar] [CrossRef]
  6. Al-Anzi, F.S.; Allahverdi, A. A self-adaptive differential evolution heuristic for two-stage assembly scheduling problem to minimize maximum lateness with setup times. Eur. J. Oper. Res. 2007, 182, 80–94. [Google Scholar] [CrossRef]
  7. Ali, M.; Ahn, C.W. An optimized watermarking technique based on self-adaptive DE in DWT-SVD transform domain. Signal Process. 2014, 94, 545–556. [Google Scholar] [CrossRef]
  8. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  9. Parouha, R.P.; Verma, P. A systematic overview of developments in differential evolution and particle swarm optimization with their advanced suggestion. Appl. Intell. 2022, 52, 10448–10492. [Google Scholar] [CrossRef]
  10. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  11. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  12. Wang, S.L.; Morsidi, F.; Ng, T.F.; Budiman, H.; Neoh, S.C. Insights into the effects of control parameters and mutation strategy on self-adaptive ensemble-based differential evolution. Inf. Sci. 2020, 514, 203–233. [Google Scholar] [CrossRef]
  13. Al-Dabbagh, R.D.; Neri, F.; Idris, N.; Baba, M.S. Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy. Swarm Evol. Comput. 2018, 43, 284–311. [Google Scholar] [CrossRef]
  14. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  15. Neri, F.; Tirronen, V. Recent advances in differential evolution: A survey and experimental analysis. Artif. Intell. Rev. 2010, 33, 61–106. [Google Scholar] [CrossRef]
  16. Parouha, R.P.; Verma, P. An innovative hybrid algorithm for bound-unconstrained optimization problems and applications. J. Intell. Manuf. 2022, 33, 1273–1336. [Google Scholar] [CrossRef]
  17. Tao, X.; Guo, W.; Li, Q.; Ren, C.; Liu, R. Multiple scale self-adaptive cooperation mutation strategy-based particle swarm optimization. Appl. Soft Comput. 2020, 89, 106124. [Google Scholar] [CrossRef]
  18. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle Swarm Optimization: A Comprehensive Survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  19. Jain, M.; Saihjpal, V.; Singh, N.; Singh, S.B. An Overview of Variants and Advancements of PSO Algorithm. Appl. Sci. 2022, 12, 8392. [Google Scholar] [CrossRef]
  20. Chen, Y.; Liang, J.; Wu, Y.; He, B.; Lin, L.; Wang, Y. Self-Regulating and Self-Perception Particle Swarm Optimization with Mutation Mechanism. J. Intell. Robot. Syst. 2022, 105, 1–21. [Google Scholar] [CrossRef]
  21. Wang, S.; Li, Y.; Yang, H. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Appl. Soft Comput. 2019, 81, 105496. [Google Scholar] [CrossRef]
  22. Dash, J.; Dam, B.; Swain, R. Design and implementation of sharp edge FIR filters using hybrid differential evolution particle swarm optimization. AEU Int. J. Electron. Commun. 2020, 114, 153019. [Google Scholar] [CrossRef]
  23. Yang, X.S. Nature-inspired optimization algorithms: Challenges and open problems. J. Comput. Sci. 2020, 46, 101104. [Google Scholar] [CrossRef] [Green Version]
  24. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696. [Google Scholar] [CrossRef]
  25. Ghosh, A.; Das, S.; Das, A.K.; Senkerik, R.; Viktorin, A.; Zelinka, I.; Masegosa, A.D. Using spatial neighborhoods for parameter adaptation: An improved success history based differential evolution. Swarm Evol. Comput. 2022, 71, 101057. [Google Scholar] [CrossRef]
  26. Do, D.T.; Lee, S.; Lee, J. A modified differential evolution algorithm for tensegrity structures. Compos. Struct. 2016, 158, 11–19. [Google Scholar] [CrossRef]
  27. Piotrowski, A.P. Review of differential evolution population size. Swarm Evol. Comput. 2017, 32, 1–24. [Google Scholar] [CrossRef]
  28. Borowska, B. Learning Competitive Swarm Optimization. Entropy 2022, 24, 283. [Google Scholar] [CrossRef]
  29. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  30. Cleghorn, C.W.; Engelbrecht, A. Fully informed particle swarm optimizer: Convergence analysis. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 164–170. [Google Scholar] [CrossRef]
  31. Pant, M.; Thangaraj, R.; Grosan, C.; Abraham, A. Hybrid differential evolution-particle swarm optimization algorithm for solving global optimization problems. In Proceedings of the 2008 Third International Conference on Digital Information Management, London, UK, 13–16 November 2008; pp. 18–24. [Google Scholar]
  32. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005, 2005. [Google Scholar]
Table 1. Details on the benchmark functions.
Denotation | Test Function | S | Modality
F1 | Shifted Sphere Function | [−100, 100]^D | Unimodal
F2 | Shifted Schwefel's Problem 1.2 | [−100, 100]^D | Unimodal
F3 | Shifted Rotated High Conditioned Elliptic Function | [−100, 100]^D | Unimodal
F4 | Shifted Schwefel's Problem 1.2 with Noise in Fitness | [−100, 100]^D | Unimodal
F5 | Schwefel's Problem 2.6 with Global Optimum on Bounds | [−100, 100]^D | Unimodal
F6 | Shifted Rosenbrock's Function | [−100, 100]^D | Basic Multimodal
F7 | Shifted Rotated Griewank's Function without Bounds | [0, 600]^D | Basic Multimodal
F8 | Shifted Rotated Ackley's Function with Global Optimum on Bounds | [−32, 32]^D | Basic Multimodal
F9 | Shifted Rastrigin's Function | [−5, 5]^D | Basic Multimodal
F10 | Shifted Rotated Rastrigin's Function | [−5, 5]^D | Basic Multimodal
F11 | Shifted Rotated Weierstrass Function | [−0.5, 0.5]^D | Basic Multimodal
F12 | Schwefel's Problem 2.13 | [−π, π]^D | Basic Multimodal
F13 | Shifted Expanded Griewank's plus Rosenbrock's Function | [−5, 5]^D | Expanded Multimodal
F14 | Shifted Rotated Expanded Scaffer's f6 Function | [−100, 100]^D | Expanded Multimodal
F15 | Hybrid Composition Function | [−5, 5]^D | Hybrid Composition
F16 | Rotated Version of Hybrid Composition Function f15 | [−5, 5]^D | Hybrid Composition
F17 | Rotated Version of Hybrid Composition Function f15 with Noise in Fitness | [−5, 5]^D | Hybrid Composition
F18 | Rotated Hybrid Composition Function | [−5, 5]^D | Hybrid Composition
F19 | Rotated Hybrid Composition Function with Narrow Basin Global Optimum | [−5, 5]^D | Hybrid Composition
F20 | Rotated Hybrid Composition Function with Global Optimum on the Bounds | [−5, 5]^D | Hybrid Composition
F21 | Rotated Hybrid Composition Function | [−5, 5]^D | Hybrid Composition
F22 | Rotated Hybrid Composition Function with High Condition Number Matrix | [−5, 5]^D | Hybrid Composition
F23 | Non-continuous Rotated Hybrid Composition Function | [−5, 5]^D | Hybrid Composition
F24 | Rotated Hybrid Composition Function | [−5, 5]^D | Hybrid Composition
F25 | Rotated Hybrid Composition Function without Bounds | Population initialized in [2, 5]^D, but no exact search range set | Hybrid Composition
Table 2. Parameter settings of DE, PSO, DE-PSO, SaDE and FIPS.
Parameter | DE | PSO | DE-PSO | SaDE | FIPS
Population size, N_p (D = 10, 30, 50) | (25, 30, 50) | (25, 30, 50) | (25, 30, 50) | (25, 30, 50) | (25, 30, 50)
MAX_FEs (D = 10, 30, 50) | (5 × 10^4, 3 × 10^4, 5 × 10^4) | (5 × 10^4, 3 × 10^4, 5 × 10^4) | (5 × 10^4, 3 × 10^4, 5 × 10^4) | (5 × 10^4, 3 × 10^4, 5 × 10^4) | (5 × 10^4, 3 × 10^4, 5 × 10^4)
Crossover rate, Cr | 0.1 | - | 0.95 | N(0.5 ± 0.1) | -
Scale factor, F | 0.5 | - | 0.9 | (0, 2] with N(0.5 ± 0.3) | -
Mutation strategy | DE/rand/1/bin | - | DE/rand/1/bin | DE/rand/1/bin | -
Acceleration rates (c_1, c_2) | - | (2.05, 2.05) | (2.0, 2.0) | - | (2.05, 2.05)
Inertia weight, w | - | 0.7298 | 0.9 | - | 0.7298
Table 3. Parameter settings of FIPSaDE.
Parameter | D = 10 | D = 30 | D = 50
Population size, N_p | 25 | 30 | 50
MAX_FEs | 5 × 10^4 | 3 × 10^4 | 5 × 10^4
K runs per case | 30 | 30 | 20
Crossover rate, Cr | N(0.5 ± 0.1)
Scale factor, F | (0, 2] with N(0.5 ± 0.3)
Learning period | 50
Acceleration rate, c_1 | 2.05
Acceleration rate, c_2 | 2.05
Inertia weight, w | 0.7298
Neighborhood type | Self
Neighborhood topology | Four Clusters
Symmetric | Asymmetric
Table 4. Results of average error values for 10D test problem and N = 25 (10D25N).
Function | FIPS | DE | SaDE | DE-PSO | FIPSaDE
F1 | 1.6913E+00 | 5.6843E-14 | 1.8948E-15 | 5.6843E-14 | 7.7548E-11
F2 | 1.7168E+00 | 2.5199E-01 | 5.6843E-14 | 1.8182E+01 | 9.2022E-11
F3 | 4.8917E+03 | 6.3587E+05 | 3.3801E+04 | 5.8865E+04 | 3.0103E+01
F4 | 1.0672E+01 | 2.0546E+02 | 7.0025E-11 | 2.1796E+01 | 9.2033E-11
F5 | 0.0000E+00 | 9.1538E+02 | 6.6696E-13 | 2.3041E-12 | 8.9070E-11
F6 | 5.0970E+01 | 2.6962E+00 | 1.2781E+00 | 1.3425E+05 | 5.3154E-01
F7 | 1.2671E+03 | 7.6374E-02 | 1.2670E+03 | 1.8818E-01 | 1.2670E+03
F8 | 2.0217E+01 | 2.0227E+01 | 2.0384E+01 | 2.0081E+01 | 2.0110E+01
F9 | 8.0601E+00 | 5.6843E-14 | 2.6532E-01 | 5.6843E-14 | 4.9748E-01
F10 | 8.6778E+00 | 1.6527E+01 | 6.9076E+00 | 1.0315E+01 | 5.8371E+00
F11 | 1.7263E+00 | 6.9768E+00 | 3.0350E+00 | 1.7695E+00 | 2.3054E+00
F12 | 2.8017E+02 | 2.4992E+01 | 5.6407E+02 | 1.5613E+03 | 3.0828E+02
F13 | 5.6892E-01 | 2.6653E-02 | 8.8579E-01 | 4.2952E-01 | 3.2816E-01
F14 | 2.1172E+00 | 3.0997E+00 | 3.1387E+00 | 2.2248E+00 | 1.9601E+00
F15 | 1.8014E+02 | - | 2.8100E+02 | 1.9472E+02 | 3.1617E+02
F16 | 1.0019E+02 | - | 1.0218E+02 | 1.1402E+02 | 1.0514E+02
F17 | 1.0719E+02 | 1.0722E+03 | 1.1966E+02 | 1.4561E+02 | 1.0136E+02
F18 | 7.6687E+02 | 1.6996E+03 | 7.0154E+02 | 7.8021E+02 | 7.8756E+02
F19 | 7.7130E+02 | 1.8139E+03 | 6.5847E+02 | 7.5198E+02 | 7.2585E+02
F20 | 7.4034E+02 | 1.7606E+03 | 6.8165E+02 | 7.6896E+02 | 6.4081E+02
F21 | 6.9422E+02 | 1.2334E+03 | 7.0185E+02 | 5.4388E+02 | 7.3634E+02
F22 | 7.1363E+02 | 1.2253E+03 | 7.7433E+02 | 7.4833E+02 | 7.5392E+02
F23 | 7.8007E+02 | 1.2661E+03 | 8.5139E+02 | 7.9163E+02 | 7.9806E+02
F24 | 2.8126E+02 | 7.4243E+02 | 2.0000E+02 | 5.3977E+02 | 2.2654E+02
F25 | 1.7506E+03 | 7.4292E+02 | 1.7507E+03 | 5.4948E+02 | 1.7495E+03
h_win | 4 | 4 | 6 | 6 | 6
Table 5. Results of average error values for 30D test problem and N = 30 (30D30N).
Function | FIPS | DE | SaDE | DE-PSO | FIPSaDE
F1 | 1.0003E+03 | 1.3264E-14 | 1.0232E-13 | 1.3873E+02 | 6.6317E-14
F2 | 7.4895E+03 | 8.1760E+03 | 5.9137E+02 | 3.3581E+02 | 2.3457E-12
F3 | 9.6860E+06 | 1.2073E+07 | 3.0102E+06 | 1.4827E+06 | 8.9114E+04
F4 | 1.3059E+04 | 3.2880E+04 | 8.8330E+03 | 4.2988E+02 | 3.2211E+01
F5 | 5.0737E+03 | 1.0061E+04 | 4.8601E+03 | 1.9268E+03 | 3.1928E+03
F6 | 4.0001E+07 | 9.1524E+00 | 1.4404E+02 | 4.5495E+06 | 1.1960E+00
F7 | 4.6963E+03 | 1.5056E-02 | 4.6963E+03 | 2.4008E-02 | 4.6963E+03
F8 | 2.0904E+01 | 2.0861E+01 | 2.1049E+01 | 2.0349E+01 | 2.0887E+01
F9 | 9.1200E+01 | 1.5158E-14 | 1.8278E+01 | 2.2971E+01 | 1.0267E+01
F10 | 1.1387E+02 | 1.8811E+02 | 1.1745E+02 | 6.2888E+01 | 4.1795E+01
F11 | 2.2143E+01 | 3.5289E+01 | 3.5379E+01 | 2.1214E+01 | 2.0232E+01
F12 | 9.3414E+04 | 5.5912E+03 | 1.6091E+04 | 1.8567E+04 | 6.4080E+03
F13 | 6.8397E+00 | 2.5628E-01 | 1.2032E+01 | 2.1437E+00 | 2.3846E+00
F14 | 1.1821E+01 | 1.2741E+01 | 1.3538E+01 | 1.2031E+01 | 1.1989E+01
F15 | 4.8635E+02 | - | 3.2749E+02 | 3.9304E+02 | 3.3594E+02
F16 | 2.4964E+02 | - | 1.8945E+02 | 2.5133E+02 | 1.2952E+02
F17 | 2.4655E+02 | - | 2.9869E+02 | 2.9779E+02 | 1.4873E+02
F18 | 9.1085E+02 | 1.2349E+03 | 9.0809E+02 | 9.0925E+02 | 9.1217E+02
F19 | 9.0797E+02 | 1.2258E+03 | 9.0760E+02 | 9.1250E+02 | 9.1158E+02
F20 | 9.1196E+02 | 1.2834E+03 | 9.1304E+02 | 9.0907E+02 | 9.0881E+02
F21 | 9.3243E+02 | 1.4813E+03 | 5.0000E+02 | 6.6047E+02 | 5.3210E+02
F22 | 8.9686E+02 | 1.6756E+03 | 9.4560E+02 | 8.5655E+02 | 8.9115E+02
F23 | 5.9064E+02 | 1.5085E+03 | 6.1697E+02 | 8.2535E+02 | 5.9916E+02
F24 | 4.1547E+02 | 1.4310E+03 | 2.0000E+02 | 6.4398E+02 | 2.2529E+02
F25 | 1.6313E+03 | 1.4382E+03 | 1.6448E+03 | 2.8872E+02 | 1.6171E+03
h_win | 2 | 5 | 5 | 4 | 9
Table 6. Results of average error values for 50D test problem and N = 50 (50D50N).
Function | FIPS | DE | SaDE | DE-PSO | FIPSaDE
F1 | 2.9123E+03 | 5.6843E-14 | 5.1044E-10 | 6.6018E+02 | 8.2423E-14
F2 | 1.9747E+04 | 2.6854E+04 | 4.0747E+03 | 1.8025E+03 | 2.7853E-12
F3 | 4.6307E+07 | 2.7198E+07 | 7.2181E+06 | 4.5704E+06 | 9.4374E+04
F4 | 2.9802E+04 | 9.4951E+04 | 2.7156E+04 | 2.5763E+03 | 8.5264E+01
F5 | 1.2415E+04 | 2.2436E+04 | 9.7093E+03 | 6.1975E+03 | 5.5761E+03
F6 | 1.1081E+08 | 1.7165E+01 | 3.4873E+02 | 4.1990E+07 | 1.7940E+00
F7 | 6.1953E+03 | 3.5798E-04 | 6.1953E+03 | 1.1065E-02 | 6.1953E+03
F8 | 2.1079E+01 | 2.1056E+01 | 2.1194E+01 | 2.1009E+01 | 2.1084E+01
F9 | 1.9239E+02 | 5.6843E-14 | 9.1163E+01 | 8.3410E+01 | 1.5024E+01
F10 | 2.5784E+02 | 4.8818E+02 | 2.8407E+02 | 1.3299E+02 | 9.3924E+01
F11 | 4.4393E+01 | 5.7573E+03 | 7.0201E+01 | 4.5260E+01 | 3.8998E+01
F12 | 4.2248E+05 | 2.3101E+04 | 7.0532E+04 | 1.0593E+05 | 1.6649E+04
F13 | 1.5238E+01 | 5.4804E-01 | 2.7075E+01 | 4.4169E+00 | 4.7292E+00
F14 | 2.1285E+01 | 2.2126E+01 | 2.3325E+01 | 2.0844E+01 | 2.1229E+01
F15 | 4.5708E+02 | - | 2.8346E+02 | 3.8232E+02 | 3.0640E+02
F16 | 2.3026E+02 | - | 1.9536E+02 | 1.9984E+02 | 7.8014E+01
F17 | 2.5320E+02 | - | 3.1450E+02 | 1.5794E+02 | 7.1639E+01
F18 | 1.0033E+03 | 1.4072E+03 | 9.3804E+02 | 9.2262E+02 | 9.5502E+02
F19 | 9.7666E+02 | 1.3165E+03 | 9.4219E+02 | 9.1941E+02 | 9.4735E+02
F20 | 9.7761E+02 | 1.3865E+03 | 9.4763E+02 | 9.1692E+02 | 9.4149E+02
F21 | 1.2067E+03 | 7.9999E+02 | 6.6371E+02 | 8.8581E+02 | 6.3703E+02
F22 | 9.5024E+02 | 1.2092E+03 | 9.5214E+02 | 9.0646E+02 | 9.2008E+02
F23 | 6.3049E+02 | 7.5279E+02 | 6.7595E+02 | 9.2719E+02 | 6.9500E+02
F24 | 8.5524E+02 | 1.2819E+03 | 2.0000E+02 | 5.6974E+02 | 2.0000E+02
F25 | 1.6662E+03 | 1.2397E+03 | 1.6795E+03 | 2.1600E+02 | 1.6539E+03
h_win | 1 | 4 | 1 | 7 | 12
Table 7. Best - f b e s t , mean - f b e s t and std - f b e s t obtained by FIPS, SaDE and FIPSaDE algorithms for the settings of 10D25N.
Function B e s t - f b e s t M e a n - f b e s t S t d - f b e s t
FIPSSaDEFIPSaDEFIPSSaDEFIPSaDEFIPSSaDEFIPSaDE
F1 | −450.00 | −450.00 | −450.00 | −448.31 | −450.00 | −450.00 | 2.70 | 0.00 | 0.00
F2 | −450.00 | −450.00 | −450.00 | −448.28 | −450.00 | −450.00 | 2.71 | 0.00 | 0.00
F3 | −449.99 | 3085.63 | −450.00 | −439.33 | 33350.51 | −450.00 | 15.16 | 27,313.82 | 0.00
F4 | −449.99 | −450.00 | −450.00 | −439.33 | −450.00 | −450.00 | 15.16 | 0.00 | 0.00
F5 | −310.00 | −310.00 | −310.00 | −310.00 | −310.00 | −310.00 | 0.00 | 0.00 | 0.00
F6 | 390.30 | 390.00 | 390.00 | 440.97 | 391.28 | 390.53 | 86.73 | 1.66 | 1.38
F7 | 1087.05 | 1087.05 | 1087.05 | 1087.05 | 1087.05 | 1087.05 | 0.03 | 0.00 | 0.00
F8 | −119.88 | −119.78 | −120.00 | −119.78 | −119.62 | −119.89 | 0.05 | 0.09 | 0.10
F9 | −329.01 | −330.00 | −330.00 | −321.94 | −329.73 | −329.50 | 3.60 | 0.52 | 0.51
F10 | −327.02 | −328.01 | −328.01 | −321.13 | −323.09 | −324.16 | 3.47 | 2.57 | 3.21
F11 | 90.81 | 90.67 | 90.54 | 91.73 | 93.03 | 92.31 | 0.68 | 1.82 | 1.30
F12 | −460.00 | −460.00 | −460.00 | −179.83 | 104.07 | −151.72 | 678.16 | 1064.47 | 702.17
F13 | −129.72 | −129.46 | −129.82 | −129.43 | −129.11 | −129.67 | 0.15 | 0.21 | 0.08
F14 | −298.72 | −297.77 | −299.74 | −297.88 | −296.86 | −298.04 | 0.45 | 0.38 | 0.61
F15 | 121.83 | 178.14 | 160.45 | 300.14 | 401.00 | 436.17 | 129.47 | 148.90 | 140.78
F16 | 120.00 | 209.94 | 211.25 | 220.19 | 222.18 | 225.14 | 20.73 | 7.33 | 6.98
F17 | 207.88 | 214.00 | 201.17 | 227.19 | 239.66 | 221.36 | 8.66 | 17.61 | 8.77
F18 | 310.00 | 310.00 | 310.00 | 776.87 | 711.54 | 7979.56 | 239.17 | 226.16 | 169.84
F19 | 310.00 | 310.00 | 310.00 | 781.30 | 668.47 | 735.85 | 242.09 | 219.37 | 219.42
F20 | 310.00 | 310.00 | 310.00 | 750.34 | 913.31 | 650.81 | 277.37 | 346.97 | 230.60
F21 | 660.00 | 660.00 | 660.00 | 1054.22 | 1061.85 | 1096.34 | 275.44 | 276.19 | 220.53
F22 | 660.91 | 1109.44 | 1084.87 | 1073.63 | 1134.33 | 1113.92 | 133.25 | 22.27 | 29.15
F23 | 914.00 | 919.47 | 919.47 | 1140.07 | 1211.39 | 1158.06 | 178.64 | 254.86 | 271.52
F24 | 460.00 | 460.00 | 460.00 | 533.32 | 460.00 | 486.54 | 130.48 | 0.00 | 145.39
F25 | 92.99 | 1991.62 | 1995.92 | 1948.74 | 1020.66 | 2009.47 | 344.45 | 6.99 | 5.36
h_win | 15 | 14 | 20 | 10 | 9 | 13 | 6 | 10 | 14
Table 8. Best-f_best, mean-f_best and std-f_best obtained by FIPS, SaDE and FIPSaDE algorithms for the settings of 30D30N.
Function | Best-f_best: FIPS | SaDE | FIPSaDE | Mean-f_best: FIPS | SaDE | FIPSaDE | Std-f_best: FIPS | SaDE | FIPSaDE
F1 | −297.49 | −450.00 | −450.00 | 550.32 | −450.00 | −450.00 | 474.17 | 0.00 | 0.00
F2 | 2925.32 | −334.80 | −450.00 | 7039.46 | 141.37 | −450.00 | 2090.36 | 394.74 | 0.00
F3 | 4,662,113.70 | 807,037.41 | 19,291.79 | 9,685,580.44 | 3,009,720.45 | 88,664.15 | 2,937,448.59 | 1,504,001.37 | 63,037.56
F4 | 3522.98 | 3339.36 | −450.00 | 12,609.08 | 8382.96 | −417.79 | 3289.58 | 3329.53 | 68.06
F5 | 2752.59 | 2851.22 | 1828.14 | 4763.71 | 4550.14 | 2882.76 | 1143.85 | 1175.35 | 534.97
F6 | 1,311,610.25 | 393.12 | 390.00 | 40,001,127.07 | 534.04 | 391.20 | 41,151,416.49 | 195.57 | 1.86
F7 | 4516.29 | 4516.29 | 4516.29 | 4516.29 | 4516.29 | 4516.29 | 0.00 | 0.00 | 0.00
F8 | −119.18 | −119.08 | −119.24 | −119.10 | −118.95 | −119.11 | 0.05 | 0.06 | 0.05
F9 | −274.12 | −325.88 | −329.01 | −238.80 | −311.72 | −319.73 | 15.94 | 7.87 | 4.57
F10 | −257.68 | −303.46 | −307.12 | −216.13 | −212.55 | −288.20 | 19.61 | 52.41 | 9.80
F11 | 108.32 | 110.59 | 104.48 | 112.14 | 125.38 | 110.23 | 1.49 | 5.28 | 2.56
F12 | 44,816.64 | 1924.29 | −457.54 | 92,954.31 | 15,631.41 | 5948.00 | 35,286.01 | 13,703.43 | 9314.81
F13 | −125.85 | −120.51 | −128.55 | −123.16 | −117.97 | −127.62 | 1.59 | 1.15 | 0.59
F14 | −288.87 | −286.68 | −289.13 | −288.18 | −286.46 | −288.01 | 0.33 | 0.13 | 0.50
F15 | 518.41 | 230.34 | 320.00 | 606.35 | 447.49 | 455.94 | 45.94 | 106.58 | 97.05
F16 | 227.57 | 179.27 | 165.78 | 369.64 | 309.45 | 249.52 | 151.59 | 138.42 | 125.34
F17 | 238.75 | 297.00 | 172.70 | 366.55 | 418.69 | 268.73 | 141.11 | 92.36 | 109.60
F18 | 916.04 | 810.00 | 915.42 | 920.85 | 918.09 | 922.17 | 6.09 | 21.18 | 5.81
F19 | 851.22 | 810.00 | 915.96 | 917.97 | 917.60 | 921.58 | 19.54 | 20.83 | 4.63
F20 | 915.69 | 915.39 | 810.00 | 921.96 | 923.04 | 918.81 | 7.84 | 4.70 | 20.93
F21 | 953.46 | 860.00 | 860.00 | 1292.43 | 860.00 | 892.10 | 260.10 | 0.00 | 131.12
F22 | 1213.63 | 1245.75 | 1209.77 | 1256.86 | 1305.60 | 1251.15 | 22.58 | 41.09 | 23.76
F23 | 901.56 | 896.74 | 894.16 | 950.64 | 976.97 | 959.16 | 97.57 | 133.89 | 158.86
F24 | 550.61 | 460.00 | 460.00 | 675.47 | 460.00 | 485.29 | 76.41 | 0.00 | 138.53
F25 | 1880.42 | 1891.12 | 1868.00 | 1891.27 | 1904.77 | 1877.14 | 5.85 | 6.17 | 4.40
h_win | 1 | 7 | 22 | 3 | 7 | 18 | 6 | 6 | 16
Table 9. Best-f_best, mean-f_best and std-f_best obtained by FIPS, SaDE and FIPSaDE algorithms for the settings of 50D50N.
Function | Best-f_best: FIPS | SaDE | FIPSaDE | Mean-f_best: FIPS | SaDE | FIPSaDE | Std-f_best: FIPS | SaDE | FIPSaDE
F1 | 1153.25 | 450.00 | −450.00 | 2462.33 | 450.00 | −450.00 | 1044.25 | 0.00 | 0.00
F2 | 12,312.68 | 967.61 | −450.00 | 19,296.90 | 3624.73 | −450.00 | 2864.28 | 1944.03 | 0.00
F3 | 19,742,966.18 | 1.55 | 26,213.65 | 46,306,427.65 | 6,442,716.38 | 93,924.17 | 18,623,890.22 | 2,656,381.58 | 54,148.58
F4 | 18,353.52 | 11,796.54 | −448.82 | 29,351.56 | 26,706.41 | −364.74 | 7366.24 | 7776.96 | 94.60
F5 | 8344.21 | 7515.12 | 3823.55 | 12,104.89 | 9399.31 | 5266.12 | 1713.64 | 1378.09 | 924.29
F6 | 13,540,381.90 | 490.65 | 390.00 | 110,811,270.60 | 738.73 | 391.79 | 55,887,725.95 | 277.30 | 2.03
F7 | 6015.32 | 6015.32 | 6015.32 | 6015.32 | 6015.32 | 6015.32 | 0.00 | 0.00 | 0.00
F8 | −118.99 | 118.75 | −118.97 | −118.92 | 118.81 | −118.92 | 0.03 | 0.04 | 0.03
F9 | −174.99 | 208.21 | −326.02 | −137.61 | 238.84 | −314.98 | 24.75 | 15.11 | 4.92
F10 | −136.98 | 0.90 | −261.35 | −72.16 | 60.49 | −236.08 | 36.26 | 73.04 | 16.82
F11 | 129.40 | 156.93 | 120.76 | 134.39 | 160.20 | 129.00 | 2.63 | 1.75 | 3.64
F12 | 241,193.33 | 19,055.20 | −154.39 | 422,016.01 | 70,072.02 | 16,188.50 | 115,690.78 | 40,227.17 | 11,162.45
F13 | −116.34 | 99.05 | −126.80 | −114.76 | 102.93 | −125.27 | 1.39 | 2.25 | 0.94
F14 | −279.28 | 276.38 | −280.33 | −278.71 | 276.68 | −278.77 | 0.30 | 0.16 | 0.53
F15 | 426.38 | 320.00 | 323.00 | 577.08 | 403.46 | 426.40 | 38.77 | 81.82 | 86.84
F16 | 244.32 | 191.77 | 160.24 | 350.26 | 315.36 | 198.01 | 105.18 | 78.92 | 37.68
F17 | 283.90 | 378.02 | 170.74 | 373.20 | 434.50 | 191.64 | 102.40 | 56.01 | 11.83
F18 | 935.54 | 810.00 | 938.12 | 1013.28 | 984.04 | 965.02 | 41.81 | 34.89 | 17.86
F19 | 942.46 | 935.18 | 929.32 | 986.66 | 952.19 | 957.35 | 28.70 | 11.17 | 15.08
F20 | 917.00 | 939.19 | 916.08 | 987.61 | 957.63 | 951.49 | 37.23 | 15.20 | 14.64
F21 | 1412.69 | 860.00 | 860.00 | 1566.71 | 1023.71 | 997.03 | 37.54 | 258.81 | 246.13
F22 | 1288.45 | 1268.20 | 1243.66 | 1310.24 | 1312.14 | 1280.08 | 9.76 | 29.34 | 24.74
F23 | 928.47 | 899.18 | 899.12 | 990.49 | 1035.95 | 1055.00 | 105.67 | 203.07 | 216.18
F24 | 728.10 | 460.00 | 460.00 | 1115.24 | 460.00 | 460.00 | 301.48 | 0.00 | 0.00
F25 | 1912.16 | 1915.96 | 1902.75 | 1926.19 | 1939.45 | 1913.91 | 5.80 | 9.62 | 7.03
h_win | 2 | 6 | 21 | 3 | 5 | 21 | 7 | 6 | 17
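Each row of Tables 7–9 aggregates the final f_best values of the independent runs of one algorithm. The fragment below is a minimal sketch of that aggregation; the run data are hypothetical, and the use of the sample standard deviation (ddof=1) is our assumption rather than a detail restated from the experiments.

```python
# Minimal sketch (illustrative run data): computing the Best-, Mean- and
# Std-f_best columns of Tables 7-9 from the final fitness of each run.
import numpy as np

f_best_runs = np.array([-450.00, -449.97, -450.00, -448.31])  # one f_best per run

best_f_best = f_best_runs.min()        # Best-f_best (minimization)
mean_f_best = f_best_runs.mean()       # Mean-f_best
std_f_best = f_best_runs.std(ddof=1)   # Std-f_best (sample standard deviation)

print(best_f_best, mean_f_best, std_f_best)
```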
Table 10. Hypothesis test between μ_FIPSaDE, μ_FIPS and μ_SaDE for the settings of 10D25N, 30D30N and 50D50N. For each setting, the four columns give the t-value and p-value of the FIPSaDE−FIPS and FIPSaDE−SaDE comparisons.
Function | 10D25N: t(FIPSaDE−FIPS) | t(FIPSaDE−SaDE) | p(FIPSaDE−FIPS) | p(FIPSaDE−SaDE) | 30D30N: t(FIPSaDE−FIPS) | t(FIPSaDE−SaDE) | p(FIPSaDE−FIPS) | p(FIPSaDE−SaDE) | 50D50N: t(FIPSaDE−FIPS) | t(FIPSaDE−SaDE) | p(FIPSaDE−FIPS) | p(FIPSaDE−SaDE)
F1 | −3.44 | 28.07 | 0.00 | 1.00 | −11.55 | 5.60 | 0.00 | 1.00 | −12.47 | −7.20E+12 | 0.00 | 0.00
F2 | −3.47 | 42.75 | 0.00 | 1.00 | −19.62 | −8.21 | 0.00 | 0.00 | −30.83 | −9.37 | 0.00 | 0.00
F3 | −3.86 | −6.78 | 0.00 | 0.00 | −17.89 | −10.63 | 0.00 | 0.00 | −11.10 | −10.69 | 0.00 | 0.00
F4 | −3.86 | 0.31 | 0.00 | 0.62 | −21.69 | −14.47 | 0.00 | 0.00 | −18.04 | −15.57 | 0.00 | 0.00
F5 | 66.38 | 63.86 | 1.00 | 1.00 | −8.16 | −7.07 | 0.00 | 0.00 | −15.71 | −11.14 | 0.00 | 0.00
F6 | −3.18 | −1.90 | 0.00 | 0.03 | −5.32 | −4.00 | 0.00 | 0.00 | −8.87 | −5.60 | 0.00 | 0.00
F7 | −1.00 | 0.00 | 0.16 | 0.50 | −18.08 | −18.08 | 0.00 | 0.00 | −1.76 | −4.18 | 0.04 | 0.00
F8 | −5.46 | −11.41 | 0.00 | 0.00 | −1.37 | −11.75 | 0.09 | 0.00 | 0.51 | −2.05E+04 | 0.69 | 0.00
F9 | −11.41 | 1.76 | 0.00 | 0.96 | −26.73 | −4.82 | 0.00 | 0.00 | −31.44 | −155.85 | 0.00 | 0.00
F10 | −3.49 | −1.43 | 0.00 | 0.08 | −18.01 | −7.77 | 0.00 | 0.00 | −18.34 | −17.69 | 0.00 | 0.00
F11 | 2.16 | −1.79 | 0.98 | 0.04 | −3.53 | −14.13 | 0.00 | 0.00 | −5.37 | −34.53 | 0.00 | 0.00
F12 | 0.16 | −1.10 | 0.56 | 0.14 | −13.06 | −3.20 | 0.00 | 0.00 | −15.62 | −5.77 | 0.00 | 0.00
F13 | −7.76 | −13.78 | 0.00 | 0.00 | −14.38 | −40.84 | 0.00 | 0.00 | −27.94 | −417.89 | 0.00 | 0.00
F14 | −1.14 | −8.98 | 0.13 | 0.00 | 1.52 | −16.31 | 0.93 | 0.00 | −0.41 | −4.44E+03 | 0.34 | 0.00
F15 | 3.90 | 0.94 | 1.00 | 0.82 | −6.45 | 0.32 | 0.00 | 0.63 | −7.09 | 0.86 | 0.00 | 0.80
F16 | 1.24 | 1.60 | 0.89 | 0.94 | −3.34 | −1.76 | 0.00 | 0.04 | −6.09 | −6.00 | 0.00 | 0.00
F17 | −2.59 | −5.09 | 0.01 | 0.00 | −3.00 | −5.73 | 0.00 | 0.00 | −7.88 | −18.97 | 0.00 | 0.00
F18 | 0.39 | 1.67 | 0.65 | 0.95 | 0.86 | 1.02 | 0.80 | 0.84 | −4.65 | 1.94 | 0.00 | 0.97
F19 | −0.76 | 1.19 | 0.22 | 0.88 | 0.98 | 1.02 | 0.84 | 0.84 | −4.04 | 1.23 | 0.00 | 0.89
F20 | −1.51 | −3.45 | 0.07 | 0.00 | −0.77 | −1.08 | 0.22 | 0.14 | −4.04 | −1.30 | 0.00 | 0.10
F21 | 0.65 | 0.53 | 0.74 | 0.70 | −7.53 | 1.34 | 0.00 | 0.91 | −10.23 | −0.33 | 0.00 | 0.37
F22 | 1.62 | −3.05 | 0.94 | 0.00 | −0.96 | −6.28 | 0.17 | 0.00 | −5.07 | −3.74 | 0.00 | 0.00
F23 | 0.30 | −0.78 | 0.62 | 0.62 | 0.25 | −0.47 | 0.60 | 0.32 | 1.20 | 0.29 | 0.88 | 0.61
F24 | −1.30 | 1.00 | 0.10 | 0.84 | −6.58 | 1.00 | 0.00 | 0.84 | −9.72 | −1.01 | 0.00 | 0.16
F25 | 0.97 | −0.74 | 0.83 | 0.23 | −10.57 | −19.98 | 0.00 | 0.00 | −6.02 | −9.59 | 0.00 | 0.00
h(H0) | - | - | 10 | 9 | - | - | 18 | 17 | - | - | 22 | 18
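The p-values in Table 10 behave like those of a one-tailed test (p approaches 1.00 when the t-value is large and positive), which is consistent with the alternative hypothesis H1: μ_FIPSaDE < μ_competitor, and the h(H0) row tallies the functions for which H0 is rejected at the 0.05 level. The sketch below illustrates one such comparison under those assumptions; the run data are hypothetical, and the use of Welch's form of the two-sample t-test is our assumption rather than a detail restated from the experiments.

```python
# Minimal sketch of the pairwise comparison in Table 10 (assumptions: Welch's
# two-sample t-test, one-tailed alternative H1: mu_FIPSaDE < mu_other,
# significance level 0.05; the run data below are hypothetical).
import numpy as np
from scipy import stats

fipsade_runs = np.array([-450.00, -450.00, -449.99, -450.00])
fips_runs = np.array([-448.31, -447.90, -449.10, -446.75])

t, p_two_sided = stats.ttest_ind(fipsade_runs, fips_runs, equal_var=False)
# Convert the two-sided p-value to the one-tailed p for H1: mean1 < mean2.
p = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
reject_h0 = p < 0.05  # functions with a rejected H0 are tallied in the h(H0) row

print(round(float(t), 2), round(float(p), 2), reject_h0)
```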
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
