Article

Competitive Coevolution-Based Improved Phasor Particle Swarm Optimization Algorithm for Solving Continuous Problems

1 Department of CS, International Islamic University Islamabad, Islamabad 44000, Pakistan
2 Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
3 Higher Polytechnic School, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
4 Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
5 Fundación Universitaria Internacional de Colombia, Bogotá, Colombia
6 Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
7 Universidade Internacional do Cuanza, Cuito, Bié, Angola
8 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4406; https://doi.org/10.3390/math11214406
Submission received: 18 September 2023 / Revised: 15 October 2023 / Accepted: 20 October 2023 / Published: 24 October 2023

Abstract: Particle swarm optimization (PSO) is a population-based heuristic algorithm that is widely used for optimization problems. Phasor PSO (PPSO), an extension of PSO, uses a phase angle θ to obtain a better-balanced search, since the angle adapts the control coefficients without parameters such as the inertia weight w. PPSO performs well for small populations but needs improvement for large populations and for increasingly complex, high-dimensional problems. This study introduces a competitive coevolution process to enhance the capability of PPSO for global optimization problems. Competitive coevolution decomposes the problem into multiple sub-problems, and the corresponding sub-swarms coevolve toward a better solution: the best solution is selected and replaces the current sub-swarm for the next competition. This process increases population diversity, reduces premature convergence, and improves the memory efficiency of PPSO. Simulation results for PPSO, fuzzy-dominance-based many-objective particle swarm optimization (FMPSO), and the proposed improved competitive multi-swarm PPSO (ICPPSO) are generated to assess the convergence power of the proposed algorithm. The experimental results show that ICPPSO achieves a dominating performance, with average fitness improvements of 15%, 20%, 30%, and 35% over PPSO and FMPSO. The Wilcoxon statistical significance test also confirms a significant difference in the performance of the ICPPSO, PPSO, and FMPSO algorithms at the 0.05 significance level.

1. Introduction

Particle swarm optimization (PSO) was initially introduced by Kennedy and Eberhart [1]. PSO is a simple stochastic search technique for optimization that is inspired by the natural swarming behavior of bird flocking and fish schooling. PSO performs very well in finding good solutions for optimization problems. The PSO algorithm has the advantages of fast convergence, easy code implementation, low computational complexity, and few parameter adjustments [2,3]. Instead of using only the fittest particle, PSO uses all particles of the population for computation due to its social behavior. All particles update their positions as determined by their individual best position (pbest) and the overall best position (gbest).
The productivity of PSO is assessed by the minimum number of iterations needed to find an optimal solution with the specified accuracy and minimal computation. The performance of the PSO algorithm depends highly on the selection of parameters. In PSO parametric modification, PSO parameters are adjusted to improve the convergence and exploration capabilities [4]. Parameter adjustments include modification of the inertia weight, the cognitive factor, the social factor, the techniques for defining the personal best pbest and overall best gbest, and different schemes for the velocity update. The inertia weight w was introduced by Bansal et al. [5] to maintain and stabilize a broader scope of the search. To improve PSO using the inertia weight, different strategies such as the increasing inertia weight [5], the decreasing inertia weight, and the adaptive inertia weight are used.
Numerical optimization problems are important in computing and are commonly solved using evolutionary computing algorithms such as PSO and differential evolution [6]. When considering optimization problems, PSO has been successfully applied in both the continuous and discrete domains [7]. Among several discrete PSO variations, binary PSO [7] is possibly the most well-known model, and it has been applied to many problems, e.g., job-shop scheduling [8]. PSO is a heuristic algorithm like a genetic algorithm; however, it is computationally less expensive [9]. PSO parameters depend on specific applications and are adaptive according to the application. In multi-objective optimization problems, the PSO algorithm with multiple sub-populations has been used to achieve prominent results [10].
PSO has been efficiently applied in many real-world problems. Most real-world problems have increasing complexity, so the proficiency and effectiveness of PSO need to be continuously improved. Despite the many advantages of PSO, there are still certain research gaps that require attention. These research gaps include early convergence, memory efficiency, slow convergence toward the global optimum, PSO without parameters, and slow computational speed for large populations. Many variants of PSO have been successfully developed to handle large populations, where the population is divided into multiple sub-swarms, and each particle maintains the information of the local best. In the cooperative approach, the whole population is divided into many sub-swarms. Each sub-swarm coevolves with the others to form a complete solution. In competitive coevolution, the population is divided into many sub-swarms, and two sub-swarms are selected to compete for coevolution, with the swarm with the best fitness earning the right to represent itself.
Efficient and effective information sharing between sub-swarms is also an important research area, where each sub-swarm shares its individual best fitness with the others. To enhance performance and population diversity, a competitive coevolution process is applied to the sub-swarms. PPSO provides efficient results compared to other PSO variants on many multidimensional optimization problems, and introducing the phase angle enhances the effectiveness and adaptability of the algorithm. However, PPSO still needs modification to solve global optimization problems: for a small population, it achieves effective results, but for a large population, it must handle a large number of particles efficiently.
PPSO lacks population diversity due to a lack of competitive processes during evolution. This reduced diversity results in the degradation of the performance of PPSO due to stagnation in local optima. Diversity is incorporated by employing the competitive coevolution process, where the fitness of individuals is estimated following the exchange of information with individuals from other sub-populations.
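The competitive selection step described above, where two sub-swarms compete and the fitter one earns the right to represent both, can be illustrated with a minimal sketch (assuming a minimization objective f; this is an illustration, not the authors' implementation):

```python
def compete(subswarm_a, subswarm_b, f):
    """One competitive-coevolution round (minimization): the sub-swarm whose
    best particle is fitter wins the right to represent both in the next round."""
    best_a = min(subswarm_a, key=f)
    best_b = min(subswarm_b, key=f)
    return subswarm_a if f(best_a) <= f(best_b) else subswarm_b
```

For example, with f = abs, the sub-swarm containing the particle closest to zero wins the round.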

1.1. Research Motivation

PPSO lacks population diversity due to a lack of competitive processes during the evolution of large-sized populations. Consequently, the results become trapped in local optima. This diversity issue reduces the convergence speed of the PPSO algorithm and ultimately reduces its performance. The inclusion of a competitive process for exchanging information during the evolutionary process of PPSO helps achieve convergence more quickly without compromising on performance.

1.2. Problem Statement

In the case of a large population, the PPSO algorithm suffers from a slow convergence speed due to the large number of particles. The selection of a suitable technique for dividing the population into multiple sub-swarms is therefore crucial for PPSO. A large population also requires a large amount of memory to manage operations: PPSO spends more time and computational load managing memory for a large population, which leads to premature convergence within the population and increased time complexity. PPSO requires interaction among all individuals to reduce both the space and time complexity of the algorithm, and the best individuals sharing their values with the population help increase its performance.

1.3. Research Significance

The incorporation of a competitive coevolution process in PPSO improves population diversity and the performance of the algorithm. The coevolutionary process also makes PPSO more memory-efficient and adaptable, resulting in a higher efficacy rate. As the complexity grows in terms of dimensions, number of particles, and population range, the improved competitive multi-swarm phasor PSO (ICPPSO) performs better.

1.4. Research Contributions

This study makes the following contributions:
  • A competitive coevolution concept is incorporated into PPSO, which helps enhance diversity in the population and avoid local optima issues.
  • The hybridization of PPSO with the competitive coevolution method helps enhance the performance of the PPSO algorithm.
  • The experimental results of the proposed algorithm for the average fitness performance metric are presented using six standard benchmark functions.
The rest of this study is divided into six sections. The operation of PSO and its parameters are described in Section 2. Section 3 discusses the existing literature and PSO variants. The operation of PPSO and its related challenges are presented in Section 4. Section 5 elaborates on the proposed approach. The experimental results and discussions are provided in Section 6. Finally, the conclusions are presented in Section 7.

2. Operation of PSO and Its Parameters

PSO is regarded as a well-organized, population-based, control-parameter-based method for the global optimization of different problems. PSO algorithms and GAs share several common aspects: the initialization of the population with random solutions and the subsequent generational updates in search of the optimum are almost the same. Genetic operators, such as crossover and mutation, are not used in PSO because of its social behavior; instead, each particle progresses by cooperating and competing with other individuals [11]. Each particle modifies its movement according to its own best-reached position and the overall best position among all particles.
In PSO, every probable solution is represented as a particle. Each particle has certain parameters that are continuously adjusted to update its position: the current position, the velocity of the particle, and the best position found by the particle so far. At the beginning of PSO, all particles in the population are initialized with random positions, and their velocities are set to 0. Every particle in the D-dimensional space is treated as a point. X_i represents the position of the i-th particle and determines its current quality. Each particle updates its velocity V to track its movement over the iterations, where X_i^t = (x_i1, x_i2, x_i3, ..., x_iD) denotes the position and V_i^t = (v_i1, v_i2, v_i3, ..., v_iD) the velocity of the i-th particle in the d-th dimension (d = 1, 2, ..., D). These parameters are updated using Equations (1) and (2).
V_id^(k+1) = v_id^k + c1 × r1 × (pbest_id^k − x_id^k) + c2 × r2 × (gbest_id^k − x_id^k)        (1)
X_id^(k+1) = X_id^k + V_id^(k+1)        (2)
where c1 and c2 are acceleration coefficients, r1 and r2 are random coefficients generated within the range (0, 1), X is the position of a particle, and k is the current iteration. The range [−vmax, vmax] is defined for the velocity V_i to stop the particle from moving outside the problem search area. Pbest is computed through a cognitive learning approach, in which the best solution found so far by the current particle is stored. Gbest is computed through a social learning approach, in which the best solution found so far by any particle in the population is stored. Pbest and gbest are calculated using Equations (3) and (4).
pbest_id^(k+1) = pbest_id^k,     if f(pbest_id^k) ≤ f(X_id^(k+1))
pbest_id^(k+1) = X_id^(k+1),    otherwise        (3)
gbest_id^(k+1) = pbest_id^(k+1),    if f(pbest_id^(k+1)) ≤ f(gbest_id^k)
gbest_id^(k+1) = gbest_id^k,         otherwise        (4)
In Equation (3), pbest is selected using a greedy criterion based on the fitness of the particle's new position and its current best position. Equation (3) updates the pbest of each population member in each iteration. Gbest in Equation (4) represents the best particle overall.
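For concreteness, the greedy selection of Equations (3) and (4) can be sketched as follows for a minimization problem (a list-based Python sketch with hypothetical names; pbest_f holds the fitness of each pbest):

```python
def update_bests(X, fX, pbest, pbest_f):
    """Equations (3) and (4): greedy pbest update and gbest selection (minimization)."""
    for i, (x, fx) in enumerate(zip(X, fX)):
        if fx < pbest_f[i]:              # Equation (3): keep whichever position is fitter
            pbest[i], pbest_f[i] = x[:], fx
    g = min(range(len(pbest)), key=pbest_f.__getitem__)
    return pbest[g][:], pbest_f[g]       # Equation (4): the best pbest overall
```

The in-place update of pbest mirrors how each population member carries its own memory from one iteration to the next.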

2.1. Inertia Weight

The success of the optimization algorithm depends critically on an accurate balance between local and global search throughout the iterations. To maintain this balance, Shi et al. [12] proposed a new parameter w for the velocity update equation, called the inertia weight. The velocity update Equation (5) is expressed as
V_id^(k+1) = ω × v_id^k + c1 × r1 × (pbest_id^k − x_id^k) + c2 × r2 × (gbest_id^k − x_id^k)        (5)
where V_id is the velocity, and ω is the inertia weight.
Equation (5) is used to update the velocity, which in turn updates the particle positions of the swarm or sub-swarm. If the value of the inertia weight is high, particles have a higher probability of traveling to new search areas. However, if the value is small, the probability of exploring the search area is reduced, resulting in smaller updates to the particles' velocities. Initially, the inertia weight w was kept constant at 0.4 throughout the search process, but researchers later employed different strategies for changing the inertia weight. These strategies can be classified into three types. To determine the optimum value in dynamic environments, Eberhart and Shi [13] effectively used a random inertia weight. The second type is the time-varying inertia weight approach, in which the inertia weight varies with the number of iterations. A linearly decreasing inertia approach was introduced by Lei et al. [14], which effectively contributed to the positive modification of PSO characteristics. Similarly, other linear methods [15] and nonlinear approaches [16] have proven to be effective inertia weight approaches. Arumugam et al. [17] introduced a method where the inertia weight is estimated from the ratio of the gbest fitness to the mean of the best fitnesses in every iteration. The selection of the inertia weight strategy is always problem-dependent.
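The three strategy families can be summarized in code. The values below are illustrative; the random schedule follows the common w = 0.5 + rand/2 form associated with Eberhart and Shi [13]:

```python
import random

def inertia_weight(k, k_max, strategy="linear", w_start=0.9, w_end=0.4):
    """Illustrative inertia weight schedules for the three strategy types."""
    if strategy == "constant":
        return 0.4                                       # fixed value throughout the search
    if strategy == "random":
        return 0.5 + random.random() / 2.0               # random w in [0.5, 1.0)
    if strategy == "linear":
        return w_start - (w_start - w_end) * k / k_max   # linearly decreasing over time
    raise ValueError("unknown strategy: " + strategy)
```

A linearly decreasing schedule starts with wide exploration (w near 0.9) and shifts toward exploitation (w near 0.4) as iterations k approach k_max.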

2.2. Cognitive and Social Learning Coefficients

The cognitive and social learning coefficients also play an important role in balancing local and global search. In Equation (5), c1 is the cognitive coefficient and c2 is the social learning coefficient. Equation (6) represents the cognitive part, and Equation (7) represents the social learning part. Initially, the values of c1 and c2 were equal and set to 2.0 by Shi et al. [4]. The cognitive coefficient c1 controls the step size taken toward the personal best solution, and the social coefficient c2 controls the step size taken toward the global best solution. The social coefficient c2 has a greater impact on improving convergence than c1. Time-varying cognitive coefficients were implemented by Cai et al. [18], which focus on convergence speed in the first phase and on global search competency in the second phase. Large values of the cognitive coefficient c1 and social coefficient c2 yield a strong local search ability, and small values of c1 and c2 yield a broad global search. In Equations (6) and (7), r1 and r2 are randomly generated numbers, CP is the cognitive part, and SP is the social part of the velocity.
CP = c1 × r1 × (pbest_id^k − x_id^k)        (6)
Equation (6) is used to calculate the relative contribution of the cognitive part by utilizing the pbest position of the current individual.
SP = c2 × r2 × (gbest_id^k − x_id^k)        (7)
The social part of the velocity is calculated using Equation (7), which focuses on the contribution of the gbest individual.
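Equations (5)-(7) decompose the velocity into three additive terms, as this one-dimensional sketch shows (all argument names are illustrative):

```python
def velocity_terms(x, v, pb, gb, w, c1, c2, r1, r2):
    """Inertia term plus cognitive (Eq. 6) and social (Eq. 7) parts of Equation (5)."""
    inertia = w * v
    CP = c1 * r1 * (pb - x)   # pull toward the particle's personal best
    SP = c2 * r2 * (gb - x)   # pull toward the global best
    return inertia + CP + SP
```

Separating the terms makes it easy to see how c1 and c2 scale the cognitive and social pulls independently of the inertia carried over from the previous step.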

2.3. Pseudocode of PSO Algorithm

In Algorithm 1, all particles X of the population are initialized at random positions, and the fitness of each particle is computed. Then, the velocity V is updated using the velocity update equation, followed by the position update of particle X. Pbest and gbest are updated for each particle in the population. This process is repeated until the termination criteria are reached.
Algorithm 1 Pseudocode for the PSO algorithm.
Require: X is an individual, NP is the population size, V is the velocity, pbest is the personal best position, gbest is the global best position, c1 and c2 are the cognitive and social amplification factors, r1 and r2 are two random numbers, and k is the iteration number.
Ensure: Evolved population with gbest at the optimal position, achieving the optimal fitness value of the problem.
1: for the whole population do
2:     Initialize all particles
3: end for
4: while maximum iterations not reached do
5:     for each particle do
6:         Calculate the fitness of the particle
7:         if the fitness value is better than the best fitness value (pbest) in history then
8:             Set the current value as the new pbest
9:         end if
10:        Select the particle with the best fitness value of all particles as the gbest
11:    end for
12:    for each particle do
13:        Calculate the particle velocity according to Equation (5)
14:        V_id^(k+1) = ω × v_id^k + c1 × r1 × (pbest_id^k − x_id^k) + c2 × r2 × (gbest_id^k − x_id^k)
15:        Update the particle position according to Equation (2)
16:        X_id^(k+1) = X_id^k + V_id^(k+1)
17:    end for
18: end while
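Algorithm 1 can be turned into runnable Python as follows. This is a sketch, not the authors' code: the search bounds, vmax, and the constriction-style constants w = 0.729, c1 = c2 = 1.494 are common illustrative choices, not values taken from this paper.

```python
import random

def pso(f, D, NP=30, iters=200, w=0.729, c1=1.494, c2=1.494,
        lo=-5.0, hi=5.0, vmax=1.0, seed=0):
    """Synchronous PSO (minimization) following the structure of Algorithm 1."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(D)] for _ in range(NP)]  # lines 1-3: random init
    V = [[0.0] * D for _ in range(NP)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(NP), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):                                            # line 4: main loop
        for i in range(NP):                                           # lines 5-11: fitness and bests
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
        for i in range(NP):                                           # lines 12-17: move particles
            for d in range(D):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))              # keep V in [-vmax, vmax]
                X[i][d] += V[i][d]
    return gbest, gbest_f
```

For example, minimizing the sphere function sum(x^2) in three dimensions drives the best fitness close to zero within a few hundred iterations.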

3. Particle Swarm Optimization Variants and Literature Review

The particle swarm optimization algorithm is a well-known evolutionary computing technique that is very effective for solving real-world optimization problems [19]. Many variations of PSO have been presented, including multiple swarms, new effective learning policies, diversity-retaining strategies, and hybrid algorithms, to resolve several optimization complications. These enhancements of PSO include the population initialization process, adjustment of parameters (inertia weight, coefficients, pbest, gbest), sub-swarm techniques for large-scale optimization, and hybrid methods with other algorithms. For population enhancement, different techniques can be used.
PSO-sono was introduced by Meng et al. [20] for numerical optimization problems. They presented a hybrid paradigm based on the sorted swarm, adaptation schemes for the constriction coefficient and the paradigm ratio, and fully informed variations of the PSO algorithm. The experimental results showed the competitiveness of the presented enhancement of the existing PSO for standard benchmark functions. In terms of modified PSO, the concept of distance-based enhancement was presented by Lazzus et al. in [21]. A random, guided new direction helped improve the search capability of the PSO algorithm. The presented method showed significant improvements compared to standard PSO using benchmark optimization functions.
Nabi et al. [22] introduced task scheduling-based adaptive PSO in their research work to balance loads in cloud computing. They adaptively updated the inertia weight to update the velocity of the PSO algorithm using an adaptive linear decreasing approach. The research results demonstrated significant improvements compared to five other inertia-weight techniques used in the PSO algorithm. The concept of an optimal control parameter in PSO was introduced by Eltamaly in his research work on energy systems [23]. The author used two nested PSOs to optimize the control parameters, where the inner PSO was used as a fitness function for the outer PSO. The experimental results showed that the presented approach helped optimize parameters for standard benchmark problems.
Quantum PSO is a new discrete PSO algorithm that utilizes the concept of a quantum individual. Quantum PSO utilizes the concept of a quantum bit within the quantum particle. The quantum bit can probabilistically take a value of 0 or 1 upon random observation [24]. The concept of soliton-based quantum-behaved PSO was presented by Fallahi and Taghadosi to solve optimization problems in their research work [25]. In non-linear situations, solitons can rearrange and reproduce themselves stably without becoming trapped. The experimental results of soliton quantum-behaved PSO showed significant improvements when considering probability density function-based motion scenarios.
Quantum-based PSO has also been applied to task scheduling in device–edge–cloud cooperative computing [26]. Many other variants of the PSO algorithm are available, such as bare-bone PSO [27], stochastic PSO [28], self-adaptive PSO [29], and multi-population PSO [30].

3.1. Phasor PSO

PPSO is a new, improved, simple, and adjustable PSO model proposed by Ghasemi et al. [31]. PPSO adds a phase angle θ to the particle update equations, aiming to achieve optimal results in high-dimensional optimization problems. In PPSO, each control parameter generated by the algorithm is merged with the phase angle. The increase in optimization efficiency is the most significant advantage of PPSO. Because of the phase angle θ, PSO becomes a non-parametric algorithm with simpler calculations. In PPSO, periodic trigonometric functions, e.g., sin and cos, are used as control parameters.
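The idea can be sketched as follows. The trigonometric coefficient forms mirror the general shape reported for PPSO, but the exact exponents and the θ update rule of the original algorithm should be taken from [31] rather than from this illustration:

```python
import math

def ppso_velocity(x, pbest, gbest, theta):
    """Phasor-style velocity sketch: the cognitive and social weights come
    from the phase angle theta instead of tuned constants such as w, c1, c2
    (coefficient forms are illustrative, not a faithful reimplementation)."""
    p = abs(math.cos(theta)) ** (2.0 * math.sin(theta))   # cognitive weight
    g = abs(math.sin(theta)) ** (2.0 * math.cos(theta))   # social weight
    return [p * (pb - xi) + g * (gb - xi)
            for xi, pb, gb in zip(x, pbest, gbest)]
```

As θ sweeps through its range, the two weights rise and fall periodically, which is how PPSO trades off cognitive and social attraction without hand-tuned parameters.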

3.2. Single Population Approach

Initially, PSO works best with small population sizes. However, modifications and upgrades by researchers have made PSO efficient for large populations and multimodal problems. PSO works better with different population sizes depending on the given problem [32]. In PSO, the social interaction within the population is one of the key factors of the algorithm. The population has a communication structure among particles that enables them to collectively share search space experiences, aiming to solve complex problems and improve population diversity. This social network for information exchange is called the topology. PSO has three commonly used topologies, as shown in Figure 1 [33].
The global topology shown in Figure 1 is also known as the star topology, in which all particles in the swarm are interconnected and each particle can share information directly with any other. In the local topology, also known as the ring topology, each particle in the swarm has only two neighboring particles to share information with. The von Neumann topology is a local-best topology in which each particle has only four neighboring particles to share information with; particles are arranged in a grid-like structure. All three topologies can increase PSO performance depending on the given problem.
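The three topologies determine which neighbors a particle exchanges information with, which can be made concrete as neighbor index lists (a sketch; the von Neumann grid wraps around and assumes the swarm size is divisible by the chosen column count):

```python
def neighbors(n, topology, cols=3):
    """Neighbor indices per particle for the star, ring, and von Neumann topologies."""
    if topology == "star":            # global best: every particle sees every other
        return [[j for j in range(n) if j != i] for i in range(n)]
    if topology == "ring":            # local best: exactly two neighbors
        return [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    if topology == "von_neumann":     # grid: four neighbors (up, down, left, right)
        rows = n // cols
        out = []
        for i in range(n):
            r, c = divmod(i, cols)
            out.append([((r - 1) % rows) * cols + c,
                        ((r + 1) % rows) * cols + c,
                        r * cols + (c - 1) % cols,
                        r * cols + (c + 1) % cols])
        return out
    raise ValueError("unknown topology: " + topology)
```

In a neighborhood-based velocity update, each particle then takes its "global" attractor from the best pbest among its own neighbor list rather than from the whole swarm.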

3.3. Synchronous and Asynchronous Approaches to PSO

The original PSO uses a synchronous approach, in which the positions and velocities of the particles are updated after the entire swarm finishes the current iteration. This kind of PSO is also known as S-PSO. S-PSO provides reliable information to all particles of the population: each particle of the swarm has the benefit of picking a better neighbor and exploiting the information delivered by this neighbor. However, premature convergence is a common shortcoming of S-PSO. In asynchronous PSO (A-PSO) [34], once the performance of a particle has been calculated, its pbest is updated immediately. In A-PSO, particle information is updated within the current iteration instead of using information from the previous iteration. A-PSO has a shorter execution time but less adequate information due to this mid-iteration updating. Due to reliable solution quality and robust exploitation, S-PSO often performs better than A-PSO. So, the selection between S-PSO and A-PSO is problem-dependent: in some problems, S-PSO performs well, whereas in others, A-PSO shows good performance.
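The difference is purely one of update timing, which the following sketch makes explicit (evaluate and move are placeholder callbacks standing in for the fitness/pbest refresh and the velocity/position step):

```python
def iterate(n, evaluate, move, mode="sync"):
    """One iteration: S-PSO finishes all evaluations before any particle moves;
    A-PSO moves each particle immediately after its own evaluation."""
    if mode == "sync":
        for i in range(n):
            evaluate(i)       # every particle is scored against the old gbest
        for i in range(n):
            move(i)           # then the whole swarm moves at once
    else:
        for i in range(n):
            evaluate(i)       # A-PSO: pbest/gbest refreshed on the spot...
            move(i)           # ...so later particles use fresher information
```

Tracing the call order for a two-particle swarm shows sync as eval-eval-move-move and async as eval-move-eval-move, which is exactly the information-freshness trade-off discussed above.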

3.4. Multi-Population Approach

In PSO, population diversity is increased by dividing the entire population into multiple sub-swarms [35]. At the start of the algorithm, the entire population is divided into multiple sub-swarms. Then, the PSO algorithm is applied to each sub-swarm, and each sub-swarm computes the best result in its region. Finally, all sub-swarms share their best results and compute a single result for the whole population. Different evolutionary and coevolutionary methods are used to exchange information among sub-swarms.

3.5. Modified PSO for Multimodal Function

In the original PSO, the whole population is used for computation, but in the modified PSO (MPSO), the whole population is divided into multiple sub-swarms according to the arrangement of the particles [36]. The best particle in each sub-swarm is stored as the local best Lbest of that sub-swarm. Instead of the global best, the Lbest is used in the velocity update as the best particle of each sub-swarm. The velocity update equation is thus modified, as shown in Equation (8):
V_id^(k+1) = v_id^k + c1 × r1 × (pbest_id^k − x_id^k) + c2 × r2 × (Lbest_id^k − x_id^k)        (8)
Instead of using the gbest in the velocity update equation, each sub-swarm uses its own best particle found so far within its range. Because of the multiple sub-swarms, several optimal solutions, such as the Lbest and pbest, can be found, which can be useful in multimodal optimization problems. For a population p, with M as the total number of sub-populations and N as the total number of particles in the population, the number of particles in each sub-population is calculated as N/M.
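The partitioning and per-sub-swarm Lbest of MPSO can be sketched as follows (assuming N is divisible by M, per the N/M formula above; minimization, with illustrative function names):

```python
def split_subswarms(population, M):
    """Partition N particles into M sub-swarms of N/M particles each."""
    size = len(population) // M
    return [population[s * size:(s + 1) * size] for s in range(M)]

def local_bests(subswarms, f):
    """Lbest of each sub-swarm: its fittest particle, as used in Equation (8)."""
    return [min(sub, key=f) for sub in subswarms]
```

Each sub-swarm then runs its own velocity updates against its Lbest, so different sub-swarms can settle on different optima of a multimodal landscape.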

3.6. Multi-Adaptive Strategy-Based PSO

In multi-adaptive strategy-based PSO (MAPSO), the entire population is divided into several small sub-swarms [37]. Two adaptive strategies, i.e., adaptive learning exemplars and an adaptive population size, are introduced into the sub-swarm mechanism to improve the overall performance of MAPSO. According to its fitness value, a search particle in a sub-swarm can adaptively select its own learning particles. Throughout the optimization process, computational resources are rationally distributed based on the adaptation of the population. The adaptive learning strategy facilitates favorable search behavior. In the adaptive population size strategy, the particle deletion process can speed up the convergence of MAPSO, while introducing additional particles generated with the help of the differential evolution algorithm into the current population can provide more helpful information. MAPSO is a relatively time-consuming variant of PSO, especially on simple unimodal functions; it is more appropriate for complex problems than for simple unimodal ones.

3.7. PSO Based on Multi-Exemplar and Forgetting Capabilities

The multi-exemplar and forgetting capabilities of PSO are employed in a new version of PSO, called expanded PSO (XPSO) [11]. Initially, XPSO enhances the social learning aspect of each particle by using certain exemplars, learning from both the best particle in the local neighborhood and the best experience from the entire population, referred to as the global best. Then, diverse forgetting abilities are assigned to different particles by XPSO. In addition, the acceleration coefficients of each particle can be adjusted during the evolution process. It is very important to select a good neighbor for each particle to extract more useful information from the exemplar to provide positive guidance for the particles. In XPSO, a random order of numbers is assigned to particles, and neighbors are determined for the current particle based on this assigned random order. Although XPSO demonstrates reasonable performance compared to other PSO variants, two areas have room for further improvement and research. One is the efficiency of the newly introduced parameter in XPSO. The other involves finding ways to extract more valuable knowledge from the collective experiences of the entire population, and then applying this information to the parametric adjustment and learning models of PSO.

3.8. Multiple Archive-Based PSO

The idea of triple archive-based PSO was introduced by Xia et al. in their research work [38]. They effectively addressed the issues related to learning models and handled proper exemplar selection in their paper. They stored the proficient individuals in the archive and then reused the archive.

3.9. Coevolution Algorithm in Multi-Population Problems

Cooperative coevolution and competitive coevolution are two categories of coevolution algorithms, and both have been used by numerous researchers in various versions of the PSO algorithm. The coevolution algorithm is a technique that extends the evolutionary algorithm to multi-objective and large-population problems [10]. Coevolution improves diversity and reduces the risk of premature convergence of the whole population. It also aids in decomposing the problem into sub-parts [39]. In coevolution, the whole population is divided into two or more sub-populations, and the progress of all sub-populations is driven simultaneously [40]. In each sub-population, the fitness of particles is evaluated based on collaboration with individuals from other populations. In a standard evolutionary algorithm, each sub-population advances by manipulating its own values, and the fitness of particles is determined independently of the circumstances of other populations. The coevolutionary algorithm, by contrast, depends on direct communication between two or more particles from different sub-populations: the fitness of an individual is estimated after an exchange of information with individuals from other sub-populations [38].
Niu et al. [41] introduced collaborative and competitive versions of multi-swarm cooperative PSO using the concept of master and slave swarms. Diversity in multiple slave swarms was maintained by independently running PSO on each slave swarm, which then communicated with the master swarm to evolve the knowledge of the master swarm. Wang et al. [42] presented a two-step cooperative PSO, where a two-swarm strategy is used in the first step to perform dimension partition and integration using a cooperative strategy. The velocity is controlled adaptively in the second step using an amplification factor of the velocity with a value of 0.9 to control the landscape of the considered problem. Hu et al. [43] tackled the problem of path failure in reconstructing the network topology by introducing an immune cooperative particle swarm optimization algorithm in the domain of heterogeneous wireless sensor networks. They considered macro-nodes and source sensors, creating sub-swarms in the form of k disjoint communication paths to provide alternative paths in case of broken paths. They used the immune cooperative mechanism to enhance the global search capability. The concept of adaptive cooperative PSO was introduced by Wang [44] to tackle the curse of dimensionality in cooperative PSO by dividing the swarm into sub-swarms of smaller dimensions. They controlled the exploration and exploitation capabilities of sub-swarms by exchanging information cooperatively using the adaptive inertia weight.
Li et al. [45] introduced a mixed mutation strategy-based multi-population cooperation PSO for higher-dimensional optimization problems. Multi-population cooperation PSO utilized a mean learning strategy based on dynamic segments in the coevolution process for information sharing. Covariance guidance-based multi-population cooperative PSO [35] divides the population into inferior, exploratory, and elite groups based on the Euclidean distance from the global leading particle. Cooperation among the inferior, exploratory, and elite groups is used in the cooperative process to maintain the balance between exploration and exploitation through information exchange. The concept of multiple populations for multiple objectives was introduced in multiple-population coevolutionary particle swarm optimization [46]. The presented algorithm was used for financial management in selecting specific values of stocks while adding cardinality constraints to balance return and risk and obtain a feasible solution. Health monitoring systems normally exchange data and information in closely cooperating medical applications. Tang et al. [47] introduced coevolution-based quantum-behaved PSO to optimize the allocation of resources in the cognitive radio sensor network domain. The experimental results showed excellent performance in the considered domain. Madni et al. [48] introduced the concept of cooperative coevolution-based multi-guide PSO by applying cooperative coevolution to each objective of the sub-swarm, aiming to reduce computational costs. The cooperative coevolution-based multi-guide PSO algorithm exhibited excellent performance in high-dimensional optimization problems.

3.9.1. Cooperative Coevolution Mechanism

Cooperative coevolution was introduced by van den Bergh [49]. In cooperative coevolution, the initial population is divided into multiple sub-swarms. Then, the particle under assessment from the current sub-swarm is evaluated by gathering the best particles from the other sub-populations to form a complete solution. After the evaluation of each particle, the archive is updated. The archive stores the leading individuals throughout the evolution. A complete solution formed by the sub-swarms is placed into the archive if no other leading solution is found; if a leading solution is found during the iterations, it replaces the archived solution.
The sub-swarms are assessed repetitively in cooperative coevolution, and the parameters of the current sub-swarm are updated before advancing to the succeeding sub-swarm. This updating follows a reflexive, anti-symmetric, and transitive order, such as Pareto ranking, with niche counts used to resolve rank ties. Pareto ranking uses the following Equation (9):
Rank(i) = 1 + n_i
where n_i is the number of archive members that dominate the i-th particle.
The lower-ranked particle is selected; in the event of a rank tie between two particles, the one with the lower niche count is chosen. This selection scheme helps increase the diversity of the overall solution, and cooperative coevolution has significantly improved performance on the overall objectives.
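The selection rule above, Pareto rank from Equation (9) with niche-count tie-breaking, can be sketched in Python. This is an illustrative sketch, not the authors' implementation: solutions are assumed to be tuples of objective values under minimization, and the sharing radius used for the niche count is a hypothetical parameter.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(particle, archive):
    """Rank(i) = 1 + n_i, where n_i is the number of archive members
    dominating the particle (Equation (9))."""
    return 1 + sum(dominates(member, particle) for member in archive)

def niche_count(particle, archive, radius=0.5):
    """Number of archive members within a sharing radius (crowding proxy);
    the radius value here is an illustrative assumption."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(dist(member, particle) < radius for member in archive)

def select(p1, p2, archive):
    """Prefer the lower-ranked particle; break rank ties by lower niche count."""
    r1, r2 = pareto_rank(p1, archive), pareto_rank(p2, archive)
    if r1 != r2:
        return p1 if r1 < r2 else p2
    return p1 if niche_count(p1, archive) <= niche_count(p2, archive) else p2
```

For example, against an archive containing (0, 0), the particle (1, 1) is dominated (rank 2) while (0.5, -0.5) is not (rank 1), so the latter is selected.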
At the start of the competitive coevolution mechanism, the whole population is divided into multiple sub-swarms, and the selection probability of each sub-swarm is initialized with a uniform distribution, so that each sub-swarm has an equal chance of being selected; that is, the probability variable is initialized as 1/D. After the first iteration, this probability variable is updated depending on the outcome of the competition between sub-swarms. In each cycle, the probability variable is allocated to the i-th sub-swarm, and the competitor sub-swarm is designated using roulette wheel selection. After the selection of two sub-swarms, the solutions of the current and competitor sub-swarms coevolve with all the other sub-swarms to form two new sub-swarms. The sub-swarm with the improved solution is selected and included in the population with its selection probability for the next iteration. The probability variable, initialized with a uniform probability, is then updated using Equation (10).
P_ij(k) = P_ij(k−1) ± (1/D)α
where α denotes the learning rate. The selection probability of the i-th sub-swarm is increased if the i-th sub-swarm is better adapted for the decision variable, and the updated probability is used in the coevolution process of the next iteration.
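Under the stated assumptions, the probability update of Equation (10) can be sketched as a simple signed step of α/D. The clipping to [0, 1] is an added safeguard not stated in the text, and the function name is our own.

```python
def update_probability(p, won, D, alpha=0.1):
    """Equation (10): P(k) = P(k-1) + alpha/D if the sub-swarm won the
    competition, P(k) = P(k-1) - alpha/D otherwise; clipped to [0, 1]
    (the clipping is an illustrative safeguard)."""
    delta = alpha / D
    p = p + delta if won else p - delta
    return min(max(p, 0.0), 1.0)
```

With D = 10 sub-swarms and α = 0.1, a winner's probability rises by 0.01 per cycle while a loser's falls by the same amount.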

3.9.2. Competitive Coevolution Mechanism

In competitive coevolution, a variable p is allocated to each sub-swarm, indicating its probability of representing a given decision variable. Competitive coevolution uses a different strategy from cooperative coevolution: the population is divided into multiple sub-swarms, but only two sub-swarms (the current sub-swarm and the contestant sub-swarm) compete to represent the decision variable, and only one of them can be the representative at a time. In competitive coevolution, fitness reflects only the relative strength of a sub-swarm; an improvement in one sub-swarm decreases the standing of the other. The continuous competition between two solutions, each trying to defeat the other, results in increased solution quality. The evolution of the whole population in the form of sub-swarms helps prevent the solution from becoming trapped in local optima.

4. Phasor PSO Challenges and Limitations

PPSO is an improved version of PSO that uses the periodic trigonometric functions sin and cos as control parameters. The periodic nature of sin and cos is harnessed to characterize all control parameters of PSO. To meet this goal, each particle is linked with a one-dimensional phase angle θ, and the inertia weight is set to 0. The velocity update of PSO is thus replaced by the phase-angle-based Equation (11), and the position is updated using Equation (12).
V_i^iter = p(θ_i^iter) × (Pbest_i^iter − X_i^iter) + g(θ_i^iter) × (Gbest_i^iter − X_i^iter)
In Equation (11), p(θ_i^iter) is the phasor coefficient of the cognitive (personal best) component, and g(θ_i^iter) is the phasor coefficient of the social (global best) component.
X_i^(iter+1) = X_i^iter + V_i^iter
The position of PPSO is updated using the phasor velocity V_i in Equation (12).
In the velocity update of PPSO, p(θ_i^iter) and g(θ_i^iter) are calculated using the sine and cosine of the angle θ in the following equations:
p(θ_i^iter) = |cos θ_i^iter|^(2 sin θ_i^iter)
g(θ_i^iter) = |sin θ_i^iter|^(2 cos θ_i^iter)
The cognitive aspect of PPSO uses the phasor-based amplification factor denoted as p(θ_i^iter) in Equation (13), utilizing the sine and cosine functions, and the social aspect uses the phasor-based amplification factor denoted as g(θ_i^iter) in Equation (14).
In PPSO, the population is initially generated randomly in the D-dimensional space, as in the original PSO, but with the addition of a phase angle for each particle drawn from a uniform distribution, θ_i^(iter=1) = U(0, 2π), and an initial velocity limit V_max,i^(iter=1). Then, the velocities of all particles are recalculated using Equation (11), and the positions of the particles are updated using Equation (12), as in the original PSO. For the next iteration, the phase angle θ and the maximum velocity of each particle are calculated using Equations (15) and (16), and the iterations are repeated until the maximum number of iterations is reached.
θ_i^(iter+1) = θ_i^iter + |cos(θ_i^iter) + sin(θ_i^iter)| × 2π
V_i,max^(iter+1) = |cos θ_i^iter|^2 × (X_max − X_min)
The phase angle θ of the i-th particle for the next iteration is calculated using an amplified summation of trigonometric functions, as shown in Equation (15), and the velocity limit is updated using the non-parametric Equation (16). One of the preeminent benefits of PPSO compared to other PSO variants is its ability to enhance optimization efficiency even for higher-dimensional problems. Adopting the phasor angle is an effective, flexible, and reliable strategy for shaping the control parameters of PSO. Despite these advantages over PSO, PPSO does have several challenges and limitations.
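One PPSO iteration can be sketched in Python from the update rules above. This is a minimal sketch under our assumptions: the exponent forms of Equations (13) and (14) follow the reconstruction given earlier, minimization is assumed, and all function and variable names are our own.

```python
import math

def ppso_step(X, theta, pbest, gbest, x_min, x_max):
    """One PPSO iteration per particle: phasor coefficients (Eqs. 13-14),
    velocity (Eq. 11) clipped to the limit of Eq. 16, position (Eq. 12),
    and phase-angle update (Eq. 15)."""
    new_X, new_theta = [], []
    for xi, th, pb in zip(X, theta, pbest):
        p = abs(math.cos(th)) ** (2 * math.sin(th))        # Eq. (13)
        g = abs(math.sin(th)) ** (2 * math.cos(th))        # Eq. (14)
        v_max = abs(math.cos(th)) ** 2 * (x_max - x_min)   # Eq. (16)
        vi = [p * (pb_d - x_d) + g * (gb_d - x_d)          # Eq. (11)
              for x_d, pb_d, gb_d in zip(xi, pb, gbest)]
        vi = [max(-v_max, min(v_max, v)) for v in vi]
        new_X.append([max(x_min, min(x_max, x + v))        # Eq. (12)
                      for x, v in zip(xi, vi)])
        new_theta.append(th + abs(math.cos(th) + math.sin(th)) * 2 * math.pi)  # Eq. (15)
    return new_X, new_theta
```

A driver loop would evaluate fitness after each step, update Pbest per particle and Gbest over the swarm, and repeat until the iteration budget is exhausted.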

4.1. Slow Convergence Speed

PPSO is an efficient solution for small population sizes, but for large populations, it suffers from slow convergence due to the vast number of particles. A suitable technique for dividing the population into multiple sub-swarms is therefore required for phasor PSO.

4.2. Memory Efficiency

A large population size requires a large memory size to manage its operation. PPSO requires efficient techniques to manage memory for a large population size. PPSO consumes more time and computational resources in managing memory for a large population.

4.3. Multi-Swarm Search Ability

PPSO is effective in a single population but for large-scale optimization, its population needs to be divided into multiple swarms. A suitable strategy is required for dividing the whole population into multiple swarms.

4.4. Low Diversity

PPSO suffers from premature convergence in large populations, which increases the time complexity of adjusting the population. Insufficient population diversity is also a cause of premature convergence, so diversity needs to be improved to enhance the exploration and exploitation abilities of PPSO.

4.5. Interaction among Individuals

PPSO requires interaction among all individuals in an algorithm to reduce the space and time complexity of the algorithm. The best individuals sharing their values with the population help to enhance the performance of the algorithm.

5. Improved Competitive Coevolution-Based Multi-Swarm Phasor Particle Swarm Optimization

Different variants of PSO yield efficient results for large-scale optimization problems. PSO is adaptable, and the suitability of a variant depends on the problem: some variants suited to large populations may not be as suitable for small populations, and conversely, variants suited to small-scale problems may not be suitable for large-scale optimization. In recent years, researchers have focused on developing PSO algorithms for large-scale and multidimensional problems.
Premature convergence of PSO on global optimization problems is reduced by splitting the whole population into multiple sub-swarms, which also widens the range of exploration. Coevolutionary algorithms have been successfully applied to different PSO variants to enhance diversity and efficiency. Although enormous progress has been made in evolutionary and coevolutionary optimization, such as MPSO, no effort has yet been made to enhance PPSO for multidimensional optimization problems using coevolutionary algorithms. There are two types of coevolution algorithms: competitive coevolution and cooperative coevolution. In cooperative coevolution, each individual coevolves with other individuals. The method proposed in the current research enhances PPSO using a competitive coevolution technique, which empowers PPSO to deliver effective and efficient results for large populations, makes it more memory efficient, and reduces the risk of premature convergence.
In ICPPSO, a large population is distributed into multiple sub-swarms. Initially, the current sub-swarm is selected through a uniform distribution, and the competitor sub-swarm is nominated using roulette wheel selection. Then, in the competition procedure, the characteristics of both selected sub-swarms merge with the characteristics of all the other sub-swarms, producing two results. The sub-swarm that offers the better result is the winner and represents the j-th decision variable. The winning sub-swarm replaces the values of the current sub-swarm. The selected sub-swarm thus becomes more adaptable through coevolution, increasing its probability of representation in the next iteration. After the coevolution process, the sub-swarms are combined into a single population. The pseudocode for ICPPSO is provided below as Algorithm 2.
Algorithm 2 Pseudocode for the ICPPSO algorithm.
Input: NP is the population size, V_min is the minimum velocity, V_max is the maximum velocity, X_min and X_max are the minimum and maximum values of the search space of a given problem, θ is the particle angle, f is the fitness function.
Output: Evolved population with Gbest at the optimal position, achieving the optimal fitness value of the problem.
1: Parametric initialization (NP, V_max, V_min, X_max, X_min)
2: Randomly and uniformly initialize particle positions X_i (i = 1, 2, …, NP)
3: Randomly and uniformly initialize particle angles θ_i (i = 1, 2, …, NP)
4: Evaluate initial fitness f(X_i^1) and initialize Pbest and Gbest
5: Divide the whole population into multiple sub-swarms
6: Initially select current and competitor sub-swarms using a uniform distribution:
7:     P(x) = 1/(b − a)
8: In subsequent cycles, use roulette wheel selection:
9:     p_i = f_i / Σ_{j=1}^N f_j
10: Coevolve current and competitor sub-swarms with the other sub-swarms
11: if sub-swarm particle < current sub-swarm then
12:     current sub-swarm = sub-swarm particle
13: end if
14: Run the competition between the current and competitor sub-swarms
15: Select best(current sub-swarm, competitor sub-swarm)
16: Adapt the relevant sub-swarm according to the competition result
17: Merge all sub-swarms into a single population
18: Update velocity and position of particles
19: Update Pbest and Gbest of the whole population
20: Update values of θ and V_max
21: Jump to step 5 until the termination criterion is reached
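The competition cycle of Algorithm 2 (steps 6 to 16) can be sketched in Python as a rough illustration, not the authors' implementation: a sub-swarm's fitness is reduced to the fitness of its best particle, the full "coevolve with all other sub-swarms" step is omitted, the probability update follows Equation (10), and all names are our own.

```python
import random

def best_fitness(swarm, f):
    """Fitness of a sub-swarm = fitness of its best particle (minimization)."""
    return min(f(x) for x in swarm)

def competitive_step(sub_swarms, probs, f, alpha=0.1):
    """One simplified competition cycle: pick the current sub-swarm uniformly,
    the competitor by roulette wheel, let the fitter one win, copy the winner
    over the loser, and adapt the selection probabilities (Eq. 10)."""
    D = len(sub_swarms)
    cur = random.randrange(D)                           # uniform selection
    comp = random.choices(range(D), weights=probs)[0]   # roulette wheel
    if cur == comp:
        return sub_swarms, probs
    winner = cur if best_fitness(sub_swarms[cur], f) <= best_fitness(sub_swarms[comp], f) else comp
    loser = comp if winner == cur else cur
    sub_swarms[loser] = [x[:] for x in sub_swarms[winner]]  # winner replaces loser
    probs[winner] = min(1.0, probs[winner] + alpha / D)
    probs[loser] = max(0.0, probs[loser] - alpha / D)
    return sub_swarms, probs
```

After each cycle, the sub-swarms would be merged back into one population and the PPSO velocity, position, θ, and V_max updates of steps 18 to 20 applied.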

5.1. Summary of Contributions and Implications of Improved Competitive Multi-Swarm Phasor Particle Swarm Optimization

The contributions and implications of the improved competitive multi-swarm phasor particle swarm optimization are as follows:
  • Initially, PPSO was developed for small population sizes. But in the case of a large population size, it suffers from slow convergence speed due to the vast number of particles. The incorporation of competitive coevolution into PPSO helps divide the population into multiple sub-swarms, which helps improve convergence speed.
  • Simple PPSO is effective in a single population, but for large-scale optimization, its population needs to be divided into multiple swarms. A suitable strategy is required for dividing the whole population into multiple swarms. The incorporation of competitive coevolution into PPSO helps improve multi-swarm search capability.

5.2. Benefits of Improved Competitive Multi-Swarm Phasor Particle Swarm Optimization

  • Simple PPSO suffers from premature convergence in large populations, which increases the time complexity of adjusting the population. Insufficient population diversity is also a cause of premature convergence, so diversity needs to be improved to enhance the exploration and exploitation ability of PPSO. The incorporation of competitive coevolution into PPSO can help improve population diversity.
  • PPSO requires interaction among all individuals of the algorithm to reduce the space and time complexity of the algorithm. The best individuals sharing their values with the population help to increase the performance of the algorithm. The incorporation of competitive coevolution into PPSO helps improve the interaction among individuals.
  • A large population size requires a large memory size to manage its operation. Phasor PSO spends more time and computational load on managing memory for a large population and, with a large population size, suffers from slow convergence due to the vast number of particles. The proposed ICPPSO divides the main swarm into multiple sub-swarms so that the algorithm processes one sub-swarm at a time, which helps save memory.

5.2.1. Parameter Settings

In the proposed ICPPSO algorithm, the parametric adjustments differ from the original PSO. In ICPPSO, the inertia weight is set to ’0’, similar to PPSO. Initially, the control parameters are set based on PPSO, and then some parameters are adjusted for the coevolution process in ICPPSO. A population size of N P = 100 and a dimension size of D = 10 are selected. Six fitness functions ( f 1 , f 2 , f 3 , f 4 , f 5 , f 6 ) are used, along with 1000 iterations.
The whole population is divided into five sub-swarms, and the number of individuals in each sub-swarm is assigned randomly from the main population. All sub-swarms are evolved one by one. The pool contains two sub-swarms selected probabilistically, where one sub-swarm is a competitor sub-swarm and the other is a current sub-swarm. Then, two sub-swarms are used in coevolution with all other sub-swarms to form two new sub-swarms. The results generated using these parameters are reported in Table 1.

5.2.2. Phasor Angle and Initial Velocity

In the proposed ICPPSO, the inertia weight is set to zero, and the phasor angle θ is used as in PPSO. Initially, θ is initialized with a uniform distribution, θ_i = U(0, 2π). In later iterations, θ is adjusted according to Equation (17), and the maximum velocity limit is updated according to Equation (18).
θ_i^(iter+1) = θ_i^iter + |cos(θ_i^iter) + sin(θ_i^iter)| × 2π
V_i,max^(iter+1) = |cos θ_i^iter|^2 × (X_max − X_min)
The phase angle θ of the i-th particle for the subsequent iteration is calculated using the amplified summation of trigonometric functions shown in Equation (17), and the velocity limit is updated using the non-parametric Equation (18).

5.2.3. Uniform Distribution

In the proposed ICPPSO, a uniform distribution is used for the initial values of θ , similar to the original PPSO. ICPPSO uses a uniform distribution in the initial selection of the competitor sub-swarms. In the uniform distribution, each particle has an equal probability in the selection process. The probability density function of the uniform distribution is expressed in Equation (19).
f(x) = 1/(b − a) for a ≤ x ≤ b; f(x) = 0 for x < a or x > b
where a is the lower limit, b is the upper limit, and x must belong to the range [ a , b ] ; otherwise, 0 is used.
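The uniform density of Equation (19) and the uniform initialization of θ over [0, 2π] can be illustrated with a short sketch; the function name and the sample size of 10 are our own choices.

```python
import math
import random

def uniform_pdf(x, a, b):
    """Equation (19): density 1/(b - a) on [a, b], zero elsewhere."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Phase angles drawn uniformly from [0, 2*pi], as in the initialization step
theta = [random.uniform(0.0, 2 * math.pi) for _ in range(10)]
```

Because the density is constant on [a, b], every sub-swarm (or angle value) has the same chance of being drawn in the initial selection.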

5.2.4. Competition and Coevolution

PPSO lacks population diversity due to the absence of a competitive process during evolution, which can result in the swarm becoming trapped in local optima. This diversity issue reduces the convergence speed of the PPSO algorithm, ultimately degrading performance. Including a competitive process in the evolutionary loop of PPSO helps achieve convergence more quickly.
The competitive coevolution process increases diversity in the whole population, which helps avoid premature convergence. Diversity is incorporated because the fitness of an individual is estimated following the exchange of information with individuals from other sub-populations in the competitive coevolution process.
ICPPSO uses competition between the competitor and current sub-swarms while coevolving with the best particle. The competitor sub-swarm is initially selected using a uniform distribution and, in subsequent iterations, using roulette wheel selection. The selection process relies on the probability of each sub-swarm, as expressed in Equation (20):
p_i = f_i / Σ_{j=1}^N f_j
where f_i is the fitness value of the i-th sub-swarm, Σ_{j=1}^N f_j is the sum of the fitness values of all sub-swarms, and p_i is the resulting selection probability. After competitor sub-swarm selection, the coevolution process begins. Both selected sub-swarms coevolve with all other sub-swarms, and the sub-swarm with the best solution emerges as the winner, representing itself in the subsequent iterations. Sub-swarms with higher probabilities occupy a larger area of the selection wheel, making them more likely to be selected.
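Roulette wheel selection under Equation (20) can be sketched as a cumulative-sum scan; this is an illustrative sketch assuming non-negative fitness values used as selection weights, with names of our own.

```python
import random

def roulette_select(fitness, rng=random):
    """Select index i with probability f_i / sum(f_j), per Equation (20).
    Assumes non-negative fitness values (maximization-style weights)."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)       # spin the wheel
    running = 0.0
    for i, f in enumerate(fitness):
        running += f                  # cumulative share of the wheel
        if r <= running:
            return i
    return len(fitness) - 1           # guard against float round-off
```

With weights [1, 3], the second sub-swarm is selected about 75% of the time.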

5.3. Benchmark Functions

Details of the benchmark functions used to generate the experimental results are provided in Table 2, including the name of the function, search space, and equation.
Although a large number of benchmark functions can be found in the existing literature, the choice of these functions depends on the nature of the evaluation, the modality of the model, etc. Many existing studies have used benchmark functions f6, f7, and f8 to reduce the time complexity of performance evaluation. For example, [50,51,52,53,54] employed functions f6, f7, and f8 to evaluate the performance of their proposed models. These studies suggest that functions f5, f6, or f7 suffice for the evaluation and comparison of evolutionary computing algorithms [50,54,55,56]. While several fixed-dimension functions are available in the literature, our algorithm mainly addresses multidimensional problems. Therefore, we considered six standard benchmark multimodal functions, as suggested in [50,51,52,53]. The experimental results show that these functions can effectively assess performance and facilitate significance tests of the proposed algorithm against existing algorithms.

6. Experimental Results and Discussion

In this section, we implement the proposed ICPPSO method using variants of the PSO algorithm, known as PPSO, including the competitive coevolution process in PPSO, to demonstrate the coevolution and sub-swarm effect on the PPSO algorithm. The experimental results of the PPSO, ICPPSO, and FMPSO algorithms are generated using populations of 100, 150, and 200 with dimensions of 10, 30, and 50, respectively. The number of training iterations considered is 1000 for all algorithms during the experiments. The experimental results for the fitness values of the PPSO, ICPPSO, and FMPSO algorithms are reported in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. Table 3 and Table 4 show the results using a population size of 100 and a dimension size of 10.
Although this cannot be deemed the best possible implementation, the results are more optimized than those of the original PPSO, especially for large population diversity and size. The results generally show that the effectiveness of the proposed method was initially almost the same, but it increased with the population and dimension sizes. The results columns represent the same six objective functions used for PPSO, FMPSO [46], and ICPPSO (the proposed algorithm) to assess the effectiveness of the proposed solution. For all objective functions, the proposed solution performed well and became more effective with increasing population size and diversity.
Convergence graphs for all the employed models are shown in Figure 2a–f. The competitive coevolution process within the sub-swarms played a major role in improving the performance of the proposed solution. If any sub-swarm encountered a premature convergence issue, the coevolution process provided assistance to escape from it. It was observed that PPSO achieved a good convergence speed for smaller population sizes, but for large population sizes and diversity, the chance of premature convergence increased. In the proposed solution, premature convergence was reduced because of the coevolution process. Based on the results obtained, it can be generally concluded that for all six objective functions, the proposed method is more effective and successful in cases of larger population sizes and diversity.
Figure 2 shows the logarithmic convergence graphs for PPSO, ICPPSO, and FMPSO for functions f 1 to f 6 , respectively. The x-axis shows the number of training iterations in multiples of 50, and the y-axis shows the fitness values of G b e s t for the considered algorithms. These convergence graphs were generated using a population size of 100 and a dimension of 10. Figure 2a shows that the performance of all three algorithms was similar in the initial iterations, but as the number of iterations increased, the performance of ICPPSO and FMPSO improved compared to the PPSO algorithm until the final iteration, at which point the performance of ICPPSO and FMPSO became similar. Figure 2b–d clearly show that as the number of iterations increased, the performance of the proposed algorithm gradually improved until the final iteration. Moreover, the performance of FMPSO surpassed that of the PPSO algorithm, as indicated by the results.
Table 5 and Table 6 present the fitness values of the PPSO, ICPPSO, and FMPSO algorithms for the test suite of six functions, generated using a population size of 100 and a dimension size of 30. For all three algorithms, Table 5 shows the results for functions f_1 to f_3, whereas Table 6 shows the results for functions f_4 to f_6. The results show that the performance of the three algorithms was approximately similar for function f_1, but the proposed algorithm outperformed PPSO and FMPSO for functions f_2, f_3, f_4, f_5, and f_6. We can therefore say that incorporating competitive coevolution significantly enhances the performance of the PPSO algorithm. On average, ICPPSO achieved improvements of 15%, 20%, 30%, and 35% over PPSO and FMPSO in terms of fitness results.
Figure 3 shows the convergence graphs of all algorithms for functions f 1 to f 6 , respectively. These graphs were generated using a population size of 100 and a dimension of 30. Figure 3a shows that for the initial iterations, the performance of all the algorithms was similar; however, the performance of FMPSO and the proposed ICPPSO improved as the number of iterations increased. The performance of the proposed ICPPSO was better for functions f 1 , f 2 , f 5 , and f 6 compared to both FMPSO and PPSO. Moreover, the performance of FMPSO was superior to that of PPSO, as indicated by the results.
Table 7 and Table 8 present the results using a population size of 200 and a dimension size of 50, with the convergence graphs being shown in Figure 4a–f. The experimental results are indicative of the superior performance of the proposed ICPPSO compared to both PPSO and FMPSO.
A population size of NP = 200 and a dimension size of D = 50 were selected. Six fitness functions ( f 1 , f 2 , f 3 , f 4 , f 5 , f 6 ) were used, and the experiments were run for 1000 iterations using the PPSO, ICPPSO, and FMPSO algorithms. The convergence results are shown in Figure 4, demonstrating the superior performance of the proposed ICPPSO across all functions.
The experimental results show the fitness values of the PPSO, FMPSO, and ICPPSO algorithms for functions f 1 to f 6 . These results were generated with population sizes of 100, 150, and 200 and dimensions of 10, 30, and 50. The results for all three algorithms were generated using the same parameter settings. It is evident from the results that the performance of ICPPSO surpassed that of PPSO and FMPSO for most of the functions. In the case of D = 30, the performance of the proposed algorithm was superior for functions f 1 , f 2 , f 5 , and f 6 , whereas FMPSO exhibited better performance for functions f 3 and f 4 . However, ICPPSO achieved competitive performance for functions f 3 and f 4 . The convergence graphs also show that ICPPSO exhibited the overall best performance for the considered suite of benchmark functions.
Besides improving convergence, the proposed ICPPSO algorithm also increased memory efficiency. Memory usage is discussed here in the context of sub-populations rather than the whole population: since the proposed algorithm divided the whole population into six sub-populations, it increased memory efficiency.
So, it can be concluded from these results that the competitive coevolution process increases diversity in the whole population, which helps avoid premature convergence. Diversity is incorporated because the fitness of an individual is estimated following the exchange of information with individuals from other sub-populations in the competitive coevolution process.

Statistical Analysis

The statistical analysis was performed using the Wilcoxon significance test. A null hypothesis states that there is no significant difference between the performance of ICPPSO and that of the other two algorithms, whereas the alternate hypothesis states that there is a significant difference between the average fitness performance of ICPPSO and the other two algorithms (PPSO and FMPSO). We applied the Wilcoxon significance test by considering a significance level of 0.05, which was compared with the p-value of the used test.
The Wilcoxon significance test was applied to the average fitness results of all functions used in this study, comparing ICPPSO with PPSO and with FMPSO separately. The results of these tests are reported in Table 9, using a significance level of 0.05 across all experiments and the default 'wilcox' zero method of the Wilcoxon test. The significance results showed that the statistics of the average fitness values differed across experiments, demonstrating varying performance, and the p-values were below the 0.05 significance level for all functions. As Table 9 shows, the proposed ICPPSO exhibited a significant difference in performance at the 0.05 significance level compared to the PPSO and FMPSO algorithms.
A significance level of 5% was used to conduct the Wilcoxon significance test on functions f1 to f6. The results of the proposed algorithm were then compared to those of the PPSO and FMPSO algorithms. It can be observed from the results of the significance test that the statistics for all these functions varied in value. However, the p-value for all the cases was 0.0000, leading to the rejection of the null hypothesis and acceptance of the alternate hypothesis. Therefore, it can be deduced that there is a significant difference in the performance of the ICPPSO algorithm compared to the state-of-the-art PPSO and FMPSO algorithms.
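For illustration, the signed-rank statistic W under the 'wilcox' zero method (zero differences discarded before ranking, as in SciPy's default) can be computed in pure Python; the p-value computation (exact distribution or normal approximation) is omitted here, and the function name is our own.

```python
def wilcoxon_statistic(x, y):
    """Wilcoxon signed-rank statistic with the 'wilcox' zero method:
    zero differences are discarded, |d| values are ranked with average
    ranks for ties, and W = min(positive-rank sum, negative-rank sum)."""
    d = [a - b for a, b in zip(x, y) if a != b]   # drop zeros ('wilcox')
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1                                 # extend run of tied |d|
        avg = (i + j) / 2 + 1                      # average rank (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    w_minus = sum(r for r, v in zip(ranks, d) if v < 0)
    return min(w_plus, w_minus)
```

In practice, a library routine such as scipy.stats.wilcoxon also returns the p-value that is compared against the 0.05 significance level.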

7. Conclusions

Different variants of PSO can solve a diverse variety of problems. Some variants, like MPSO, are effective in managing large-scale populations, and a larger population generally corresponds to a wider diversity of solutions. In the cooperative approach, the whole population is divided into many sub-swarms, each of which coevolves with the others to form a complete solution. In competitive coevolution, the population is also divided into multiple sub-swarms, and two of these sub-swarms are selected to compete for coevolution; the swarm with the best fitness earns the right to represent itself. This competitive coevolution process is an effective technique for tackling large-scale problems. Hybridizing PPSO with competitive coevolution brings many enhancements to the algorithm, including increased efficiency in handling large-scale problems, reduced solution stagnation, a decreased risk of premature convergence, and greater population diversity. This study proposed ICPPSO, which incorporates competitive coevolution into PPSO to increase its efficiency for large populations. Six fitness functions were used to demonstrate the enhancements of PPSO. Competitive coevolution not only increased population diversity but also enabled the handling of more complex problems. A multi-swarm strategy is commonly used for large-scale optimization problems in PSO, and cooperative coevolution has also been used independently for large-scale problems in PPSO; in future work, a more effective strategy could be developed that does not require dividing the population into multiple sub-populations. Experiments were performed using six standard benchmark functions and the stated parameter settings, and the experimental results were discussed in terms of the average fitness values of the PPSO, FMPSO, and ICPPSO algorithms.
Additionally, their associated convergence graphs were presented for varying parameters. It can be concluded from the experimental results that the performance of ICPPSO is superior to that of FMPSO and PPSO for the considered benchmark functions and varying parameters. The performance comparison of ICPPSO against PPSO and FMPSO showed the superior performance of the proposed ICPPSO algorithm. The experimental results for the average fitness of the ICPPSO algorithm indicated average improvements of 15%, 20%, 30%, and 35% over the PPSO and FMPSO algorithms. The results of the Wilcoxon statistical significance test for the ICPPSO algorithm rejected the null hypothesis, showing a significant difference in performance compared to the PPSO and FMPSO algorithms at a 0.05 confidence level. Our future work will involve the incorporation of a dynamic and efficient self-adaptive selection process into ICPPSO for large-scale problems.
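The competitive step summarized above can be sketched in a few lines; this is an illustrative simplification in which a losing sub-swarm is replaced by a copy of the winner, with hypothetical function and parameter names rather than the authors' implementation.

```python
# Hedged sketch of one competitive coevolution round: sub-swarms are
# randomly paired, each pair competes on best fitness, and the loser is
# replaced by a copy of the winner. Names are illustrative only.
import random

def best_fitness(swarm, f):
    """Best (lowest) fitness of any particle position in the sub-swarm."""
    return min(f(p) for p in swarm)

def competitive_round(sub_swarms, f):
    """Pair sub-swarms; each loser is replaced by a copy of its winner."""
    paired = list(sub_swarms)
    random.shuffle(paired)
    next_round = []
    for a, b in zip(paired[::2], paired[1::2]):
        winner = a if best_fitness(a, f) <= best_fitness(b, f) else b
        clone = [p[:] for p in winner]  # loser's slot taken by winner copy
        next_round.extend([winner, clone])
    return next_round

# Toy usage: 4 sub-swarms of 5 one-dimensional particles on the sphere function.
random.seed(1)
sphere = lambda p: sum(x * x for x in p)
swarms = [[[random.uniform(-100, 100)] for _ in range(5)] for _ in range(4)]
before = min(best_fitness(s, sphere) for s in swarms)
swarms = competitive_round(swarms, sphere)
after = min(best_fitness(s, sphere) for s in swarms)
```

Because the winner survives each pairing, the population's best fitness never degrades between rounds, while losing sub-swarms are refreshed with fitter material, which is the diversity-preserving effect the conclusions describe.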

Author Contributions

Conceptualization, O.A. and Q.A.; Data curation, Q.A. and K.M.; Formal analysis, O.A. and K.M.; Funding acquisition, E.B.T.; Investigation, E.B.T. and J.A.; Methodology, K.M.; Software, J.A.; Supervision, I.A.; Validation, I.A.; Visualization, E.B.T. and J.A.; Writing—original draft, O.A. and Q.A.; Writing—review and editing, I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the European University of the Atlantic.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  2. Zheng, Y.L.; Ma, L.H.; Zhang, L.Y.; Qian, J.X. On the convergence analysis and parameter selection in particle swarm optimization. In Proceedings of the 2003 International Conference on Machine Learning and Cybernetics (IEEE Cat. No. 03EX693), Xi’an, China, 5 November 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 3, pp. 1802–1807. [Google Scholar]
  3. Sana, M.U.; Li, Z.; Javaid, F.; Hanif, M.W.; Ashraf, I. Improved particle swarm optimization based on blockchain mechanism for flexible job shop problem. Clust. Comput. 2021, 26, 2519–2537. [Google Scholar] [CrossRef]
  4. Shi, Y.; Eberhart, R.C. Parameter selection in particle swarm optimization. In Proceedings of the Evolutionary Programming VII: 7th International Conference, EP98, San Diego, CA, USA, 25–27 March 1998; Proceedings 7. Springer: Berlin/Heidelberg, Germany, 1998; pp. 591–600. [Google Scholar]
  5. Bansal, J.C.; Singh, P.; Saraswat, M.; Verma, A.; Jadon, S.S.; Abraham, A. Inertia weight strategies in particle swarm optimization. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 633–640. [Google Scholar]
  6. Tian, M.; Gao, Y.; He, X.; Zhang, Q.; Meng, Y. Differential Evolution with Group-Based Competitive Control Parameter Setting for Numerical Optimization. Mathematics 2023, 11, 3355. [Google Scholar] [CrossRef]
  7. Chen, W.N.; Zhang, J.; Chung, H.S.; Zhong, W.L.; Wu, W.G.; Shi, Y.H. A novel set-based particle swarm optimization method for discrete optimization problems. IEEE Trans. Evol. Comput. 2009, 14, 278–300. [Google Scholar] [CrossRef]
  8. Liao, C.J.; Tseng, C.T.; Luarn, P. A discrete version of particle swarm optimization for flowshop scheduling problems. Comput. Oper. Res. 2007, 34, 3099–3111. [Google Scholar] [CrossRef]
  9. Elbes, M.; Alzubi, S.; Kanan, T.; Al-Fuqaha, A.; Hawashin, B. A survey on particle swarm optimization with emphasis on engineering and network applications. Evol. Intell. 2019, 12, 113–129. [Google Scholar] [CrossRef]
  10. Goh, C.K.; Tan, K.C.; Liu, D.; Chiam, S.C. A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design. Eur. J. Oper. Res. 2010, 202, 42–54. [Google Scholar] [CrossRef]
  11. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.H. An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Inf. Sci. 2020, 508, 105–120. [Google Scholar] [CrossRef]
  12. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 69–73. [Google Scholar]
  13. Eberhart, R.C.; Shi, Y. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), Seoul, Republic of Korea, 27–30 May 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 1, pp. 94–100. [Google Scholar]
  14. Lei, K.; Qiu, Y.; He, Y. A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization. In Proceedings of the 2006 1st International Symposium on Systems and Control in Aerospace and Astronautics, Harbin, China, 19–21 January 2006; IEEE: Piscataway, NJ, USA, 2006; p. 4. [Google Scholar]
  15. Eberhart, R.C.; Shi, Y. Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No. 00TH8512), La Jolla, CA, USA, 16–19 July 2000; IEEE: Piscataway, NJ, USA, 2000; Volume 1, pp. 84–88. [Google Scholar]
  16. Yang, C.H.; Lin, Y.D.; Chuang, L.Y.; Chang, H.W. Double-bottom chaotic map particle swarm optimization based on chi-square test to determine gene-gene interactions. BioMed Res. Int. 2014, 2014, 172049. [Google Scholar] [CrossRef] [PubMed]
  17. Arumugam, M.S.; Rao, M. On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems. Appl. Soft Comput. 2008, 8, 324–336. [Google Scholar] [CrossRef]
  18. Cai, X.; Cui, Y.; Tan, Y. Predicted modified PSO with time-varying accelerator coefficients. Int. J.-Bio-Inspired Comput. 2009, 1, 50–60. [Google Scholar] [CrossRef]
  19. Xing, L.; Li, J.; Cai, Z.; Hou, F. Evolutionary Optimization of Energy Consumption and Makespan of Workflow Execution in Clouds. Mathematics 2023, 11, 2126. [Google Scholar] [CrossRef]
  20. Meng, Z.; Zhong, Y.; Mao, G.; Liang, Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2022, 586, 176–191. [Google Scholar] [CrossRef]
  21. Lazzus, J.A.; Vega-Jorquera, P.; Lopez-Caraballo, C.H.; Palma-Chilla, L.; Salfate, I. Parameter estimation of a generalized lotka–volterra system using a modified pso algorithm. Appl. Soft Comput. 2020, 96, 106606. [Google Scholar] [CrossRef]
  22. Nabi, S.; Ahmad, M.; Ibrahim, M.; Hamam, H. AdPSO: Adaptive PSO-based task scheduling approach for cloud computing. Sensors 2022, 22, 920. [Google Scholar] [CrossRef] [PubMed]
  23. Eltamaly, A.M. A novel strategy for optimal PSO control parameters determination for PV energy systems. Sustainability 2021, 13, 1008. [Google Scholar] [CrossRef]
  24. Yang, S.; Wang, M. A quantum particle swarm optimization. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), Portland, OR, USA, 19–23 June 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 1, pp. 320–324. [Google Scholar]
  25. Fallahi, S.; Taghadosi, M. Quantum-behaved particle swarm optimization based on solitons. Sci. Rep. 2022, 12, 13977. [Google Scholar] [CrossRef]
  26. Wang, B.; Zhang, Z.; Song, Y.; Chen, M.; Chu, Y. Application of Quantum Particle Swarm Optimization for task scheduling in Device-Edge-Cloud Cooperative Computing. Eng. Appl. Artif. Intell. 2023, 126, 107020. [Google Scholar] [CrossRef]
  27. Tran, B.; Xue, B.; Zhang, M. Bare-bone particle swarm optimisation for simultaneously discretising and selecting features for high-dimensional classification. In Proceedings of the Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, 30 March–1 April 2016; Proceedings, Part I 19. Springer: Berlin/Heidelberg, Germany, 2016; pp. 701–718. [Google Scholar]
  28. Zhao-Hui, R.; Xiu-Yan, G.; Yuan, Y.; He-Ping, T. Determining the heat transfer coefficient during the continuous casting process using stochastic particle swarm optimization. Case Stud. Therm. Eng. 2021, 28, 101439. [Google Scholar] [CrossRef]
  29. Pornsing, C.; Sodhi, M.S.; Lamond, B.F. Novel self-adaptive particle swarm optimization methods. Soft Comput. 2016, 20, 3579–3593. [Google Scholar] [CrossRef]
  30. Li, G.; Wang, W.; Zhang, W.; Wang, Z.; Tu, H.; You, W. Grid search based multi-population particle swarm optimization algorithm for multimodal multi-objective optimization. Swarm Evol. Comput. 2021, 62, 100843. [Google Scholar] [CrossRef]
  31. Ghasemi, M.; Akbari, E.; Rahimnejad, A.; Razavi, S.E.; Ghavidel, S.; Li, L. Phasor particle swarm optimization: A simple and efficient variant of PSO. Soft Comput. 2019, 23, 9701–9718. [Google Scholar] [CrossRef]
  32. Liu, P.; Liu, J. Multi-leader PSO (MLPSO): A new PSO variant for solving global optimization problems. Appl. Soft Comput. 2017, 61, 256–263. [Google Scholar] [CrossRef]
  33. Figueiredo, E.M.; Ludermir, T.B. Effect of the PSO Topologies on the Performance of the PSO-ELM. In Proceedings of the 2012 Brazilian Symposium on Neural Networks, Curitiba, Brazil, 20–25 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 178–183. [Google Scholar]
  34. Rada-Vilela, J.; Zhang, M.; Seah, W. A performance study on synchronous and asynchronous updates in particle swarm optimization. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; pp. 21–28. [Google Scholar]
  35. Liang, P.; Li, W.; Huang, Y. Multi-population Cooperative Particle Swarm Optimization with Covariance Guidance. In Proceedings of the 2022 4th International Conference on Data-driven Optimization of Complex Systems (DOCS), Chengdu, China, 28–30 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–6. [Google Scholar]
  36. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68. [Google Scholar] [CrossRef]
  37. Wei, B.; Xia, X.; Yu, F.; Zhang, Y.; Xu, X.; Wu, H.; Gui, L.; He, G. Multiple adaptive strategies based particle swarm optimization algorithm. Swarm Evol. Comput. 2020, 57, 100731. [Google Scholar] [CrossRef]
  38. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.L.; Zhan, Z.H. Triple archives particle swarm optimization. IEEE Trans. Cybern. 2019, 50, 4862–4875. [Google Scholar] [CrossRef]
  39. Ma, X.; Li, X.; Zhang, Q.; Tang, K.; Liang, Z.; Xie, W.; Zhu, Z. A survey on cooperative co-evolutionary algorithms. IEEE Trans. Evol. Comput. 2018, 23, 421–441. [Google Scholar] [CrossRef]
  40. Van den Bergh, F.; Engelbrecht, A.P. A cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 225–239. [Google Scholar] [CrossRef]
  41. Niu, B.; Zhu, Y.; He, X.; Wu, H. A multi-swarm cooperative particle swarm optimizer. Appl. Math. Comput. 2007, 185, 1050–1062. [Google Scholar] [CrossRef]
  42. Wang, R.Y.; Hsiao, Y.T.; Lee, W.P. A new cooperative particle swarm optimizer with dimension partition and adaptive velocity control. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 103–109. [Google Scholar]
  43. Hu, Y.; Ding, Y.; Hao, K. An immune cooperative particle swarm optimization algorithm for fault-tolerant routing optimization in heterogeneous wireless sensor networks. Math. Probl. Eng. 2012, 2012, 743728. [Google Scholar] [CrossRef]
  44. Wang, L. An improved cooperative particle swarm optimizer. Telecommun. Syst. 2013, 53, 147–154. [Google Scholar] [CrossRef]
  45. Li, W.; Meng, X.; Huang, Y.; Fu, Z.H. Multipopulation cooperative particle swarm optimization with a mixed mutation strategy. Inf. Sci. 2020, 529, 179–196. [Google Scholar] [CrossRef]
  46. Zhao, H.; Chen, Z.G.; Zhan, Z.H.; Kwong, S.; Zhang, J. Multiple populations co-evolutionary particle swarm optimization for multi-objective cardinality constrained portfolio optimization problem. Neurocomputing 2021, 430, 58–70. [Google Scholar] [CrossRef]
  47. Tang, M.; Zhu, W.; Sun, S.; Xin, Y. Mathematical modeling of resource allocation for cognitive radio sensor health monitoring system using coevolutionary quantum-behaved particle swarm optimization. Expert Syst. Appl. 2023, 228, 120388. [Google Scholar] [CrossRef]
  48. Madani, A.; Engelbrecht, A.; Ombuki-Berman, B. Cooperative coevolutionary multi-guide particle swarm optimization algorithm for large-scale multi-objective optimization problems. Swarm Evol. Comput. 2023, 78, 101262. [Google Scholar] [CrossRef]
  49. Kushwaha, N.; Pant, M. Modified particle swarm optimization for multimodal functions and its application. Multimed. Tools Appl. 2019, 78, 23917–23947. [Google Scholar] [CrossRef]
  50. Zhang, Y.; Li, J.; Li, L. A reward population-based differential genetic harmony search algorithm. Algorithms 2022, 15, 23. [Google Scholar] [CrossRef]
  51. Huang, X.; Zeng, X.; Han, R.; Wang, X. An enhanced hybridized artificial bee colony algorithm for optimization problems. IAES Int. J. Artif. Intell. 2019, 8, 87. [Google Scholar] [CrossRef]
  52. Bhattacharya, S.; Tripathi, S.L.; Kamboj, V.K. Design of tunnel FET architectures for low power application using improved Chimp optimizer algorithm. Eng. Comput. 2023, 39, 1415–1458. [Google Scholar] [CrossRef]
  53. Kumari, C.L.; Kamboj, V.K.; Bath, S.; Tripathi, S.L.; Khatri, M.; Sehgal, S. A boosted chimp optimizer for numerical and engineering design optimization challenges. Eng. Comput. 2023, 39, 2463–2514. [Google Scholar] [CrossRef]
  54. Izci, D.; Ekinci, S.; Kayri, M.; Eker, E. A novel improved arithmetic optimization algorithm for optimal design of PID controlled and Bode’s ideal transfer function based automobile cruise control system. Evol. Syst. 2022, 13, 453–468. [Google Scholar] [CrossRef]
  55. Taherdangkoo, M.; Paziresh, M.; Yazdi, M.; Bagheri, M.H. An efficient algorithm for function optimization: Modified stem cells algorithm. Cent. Eur. J. Eng. 2013, 3, 36–50. [Google Scholar] [CrossRef]
  56. Al-Betar, M.A.; Khader, A.T.; Awadallah, M.A.; Alawan, M.H.; Zaqaibeh, B. Cellular harmony search for optimization problems. J. Appl. Math. 2013, 2013, 139464. [Google Scholar] [CrossRef]
Figure 1. Single population topologies [33].
Figure 2. Logarithmic convergence of PPSO, ICPPSO, and FMPSO algorithms, with NP = 100 and D = 10.
Figure 3. Logarithmic convergence of PPSO, ICPPSO, and FMPSO algorithms, with NP = 200 and D = 30.
Figure 4. Logarithmic convergence of PPSO, ICPPSO, and FMPSO algorithms.
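The convergence curves in Figures 2–4 plot average fitness on a logarithmic scale against iterations. A minimal sketch of how such a curve is derived, using the PPSO f1 series from Table 3 (the paper's actual plotting code is not published; a plotting library such as matplotlib's `semilogy` would render the result):

```python
# Sketch: log10-scaled convergence curve from per-iteration average fitness.
# Data are the PPSO f1 values reported in Table 3 (NP = 100, D = 10).
import math

iterations = list(range(0, 1001, 50))
ppso_f1 = [6412.00, 6412.00, 4861.61, 4861.61, 4861.61, 3454.63, 3454.63,
           3454.63, 2802.98, 2802.98, 2678.23, 2189.49, 2189.49, 2189.49,
           1865.27, 1865.27, 1587.23, 556.63, 358.96, 50.25, 10.02]

# log10 compresses the early large values so late-stage progress stays visible.
log_curve = [math.log10(v) for v in ppso_f1]
```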
Table 1. Average fitness values for f1 to f6 using varying dimensions and population sizes.

| Dimension | Population Size | Algorithm | f1 | f2 | f3 | f4 | f5 | f6 |
|---|---|---|---|---|---|---|---|---|
| 10 | 100 | PPSO | 1.00 × 10^1 | 5.10 × 10^−1 | 9.65 × 10^0 | 2.22 × 10^2 | 2.29 × 10^1 | 1.26 × 10^1 |
| | | ICPPSO | 2.69 × 10^−2 | 2.69 × 10^−1 | 5.90 × 10^0 | 2.01 × 10^0 | 3.90 × 10^−1 | 5.36 × 10^−1 |
| | | FMPSO | 3.00 × 10^−2 | 4.00 × 10^−1 | 8.07 × 10^0 | 1.12 × 10^2 | 1.25 × 10^0 | 6.47 × 10^0 |
| 30 | 150 | PPSO | 8.54 × 10^3 | 6.05 × 10^7 | 1.70 × 10^1 | 8.98 × 10^2 | 5.03 × 10^1 | 3.83 × 10^1 |
| | | ICPPSO | 4.81 × 10^3 | 2.44 × 10^6 | 3.70 × 10^1 | 2.65 × 10^3 | 2.33 × 10^1 | 1.06 × 10^1 |
| | | FMPSO | 5.94 × 10^3 | 2.91 × 10^7 | 1.10 × 10^1 | 6.84 × 10^2 | 2.56 × 10^1 | 1.86 × 10^1 |
| 50 | 200 | PPSO | 1.01 × 10^5 | 9.93 × 10^7 | 7.10 × 10^1 | 1.92 × 10^4 | 4.75 × 10^2 | 4.50 × 10^2 |
| | | ICPPSO | 8.12 × 10^4 | 3.36 × 10^7 | 6.30 × 10^1 | 8.30 × 10^3 | 3.95 × 10^2 | 2.33 × 10^2 |
| | | FMPSO | 8.33 × 10^4 | 3.74 × 10^7 | 6.50 × 10^1 | 7.90 × 10^3 | 4.00 × 10^2 | 2.72 × 10^2 |
Table 2. List of benchmark functions.

| Function | Function Name (Type) | Search Range | Equation |
|---|---|---|---|
| f1 | Sphere model function | [−100, 100] | $f(x) = \sum_{i=1}^{D} x_i^2$ |
| f2 | Schwefel's problem function 1 | [−100, 100] | $f(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2$ |
| f3 | Schwefel's problem function 2 | [−100, 100] | $f(x) = \max_{1 \le i \le D} \lvert x_i \rvert$ |
| f4 | Rosenbrock function | [−2.048, 2.048] | $f(x) = \sum_{i=1}^{D-1} \left( 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right)$ |
| f5 | Rastrigin function | [−5.12, 5.12] | $f(x) = \sum_{i=1}^{D} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right)$ |
| f6 | Griewank function | [−5.12, 5.12] | $f(x) = \sum_{i=1}^{D} \left( y_i^2 - 10 \cos(2\pi y_i) + 10 \right)$, where $y_i = x_i$ if $\lvert x_i \rvert < 1/2$ and $y_i = \mathrm{round}(2 x_i)/2$ otherwise |
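Three of the benchmarks in Table 2 can be implemented directly; the sketch below uses the standard textbook forms of the sphere, Rosenbrock, and Rastrigin functions (the paper's exact indexing conventions may differ slightly).

```python
# Illustrative implementations of benchmark functions f1, f4, and f5
# from Table 2, in their standard textbook forms. Each has a global
# minimum value of 0 within its listed search range.
import math

def sphere(x):
    """f1: sum of squared components; minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    """f4: narrow curved valley; minimum 0 at (1, 1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """f5: highly multimodal; minimum 0 at the origin."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```

Functions like these are evaluated at every particle position, so their cost dominates the runtime of a PSO variant as the dimension D and population size NP grow.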
Table 3. Average fitness values for f1 to f3, with NP = 100 and D = 10.

| Iterations | f1 PPSO | f1 ICPPSO | f1 FMPSO | f2 PPSO | f2 ICPPSO | f2 FMPSO | f3 PPSO | f3 ICPPSO | f3 FMPSO |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 6412.00 | 6510.74 | 6969.90 | 7706.78 | 7546.74 | 8213.20 | 64.00 | 64.23 | 81.13 |
| 50 | 6412.00 | 6510.74 | 6512.90 | 7078.28 | 7546.74 | 7314.70 | 46.01 | 40.05 | 44.04 |
| 100 | 4861.61 | 4768.63 | 5389.70 | 6902.23 | 6374.85 | 7011.90 | 44.22 | 36.33 | 44.42 |
| 150 | 4861.61 | 4768.63 | 5035.30 | 6374.85 | 6312.85 | 6507.30 | 36.78 | 36.33 | 37.30 |
| 200 | 4861.61 | 4768.63 | 4821.50 | 6374.85 | 6312.85 | 6349.70 | 36.78 | 36.33 | 37.24 |
| 250 | 3454.63 | 3022.91 | 4167.80 | 5132.50 | 4144.89 | 4675.30 | 36.47 | 33.80 | 36.92 |
| 300 | 3454.63 | 3022.91 | 3258.80 | 4316.79 | 4144.89 | 4656.70 | 36.47 | 33.80 | 36.54 |
| 350 | 3454.63 | 2169.43 | 3416.60 | 4316.79 | 4144.89 | 4635.60 | 36.02 | 33.80 | 35.27 |
| 400 | 2802.98 | 2169.43 | 2522.20 | 3840.17 | 1365.25 | 2603.90 | 36.02 | 29.77 | 32.90 |
| 450 | 2802.98 | 2169.43 | 2508.50 | 3523.24 | 1365.25 | 2450.80 | 34.50 | 29.77 | 32.95 |
| 500 | 2678.23 | 2169.43 | 2424.70 | 2207.05 | 1365.25 | 1965.20 | 34.50 | 29.77 | 32.27 |
| 550 | 2189.49 | 2004.65 | 2396.30 | 2207.05 | 863.43 | 1543.60 | 31.52 | 29.77 | 31.23 |
| 600 | 2189.49 | 2004.65 | 2109.80 | 1181.02 | 863.43 | 1028.30 | 31.52 | 27.44 | 29.57 |
| 650 | 2189.49 | 2004.65 | 2176.30 | 867.82 | 304.16 | 586.00 | 31.52 | 27.44 | 30.27 |
| 700 | 1865.27 | 1829.06 | 1999.20 | 867.82 | 304.16 | 465.30 | 31.52 | 27.44 | 29.73 |
| 750 | 1865.27 | 1102.65 | 1497.80 | 547.05 | 8.48 | 278.60 | 27.65 | 21.70 | 25.08 |
| 800 | 1587.23 | 1102.65 | 1494.90 | 226.50 | 8.48 | 121.40 | 23.49 | 19.05 | 21.49 |
| 850 | 556.63 | 540.60 | 548.80 | 78.65 | 4.92 | 42.10 | 23.49 | 17.14 | 20.43 |
| 900 | 358.96 | 50.22 | 47.20 | 10.52 | 2.89 | 7.40 | 17.98 | 5.90 | 12.47 |
| 950 | 50.25 | 1.39 | 1.01 | 6.53 | 1.39 | 4.00 | 11.56 | 5.90 | 8.75 |
| 1000 | 10.02 | 0.03 | 0.03 | 0.51 | 0.27 | 0.40 | 9.65 | 5.90 | 8.07 |
Table 4. Average fitness values for f4 to f6, with NP = 100 and D = 10.

| Iterations | f4 PPSO | f4 ICPPSO | f4 FMPSO | f5 PPSO | f5 ICPPSO | f5 FMPSO | f6 PPSO | f6 ICPPSO | f6 FMPSO |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1795.17 | 1766.33 | 1801.60 | 48.32 | 48.59 | 50.74 | 46.59 | 45.63 | 46.10 |
| 50 | 1067.91 | 1084.63 | 1195.79 | 48.32 | 46.47 | 53.78 | 46.59 | 40.95 | 43.82 |
| 100 | 1067.91 | 1084.63 | 1167.23 | 48.32 | 43.91 | 48.90 | 38.58 | 35.93 | 42.57 |
| 150 | 1005.21 | 1060.89 | 1146.08 | 48.32 | 33.45 | 41.60 | 38.58 | 35.93 | 37.29 |
| 200 | 1005.21 | 1020.89 | 1013.60 | 48.32 | 33.45 | 41.35 | 28.36 | 25.76 | 28.16 |
| 250 | 943.46 | 1020.43 | 1010.78 | 45.23 | 33.45 | 40.34 | 28.36 | 20.56 | 26.59 |
| 300 | 943.46 | 820.74 | 993.14 | 45.23 | 33.45 | 39.90 | 28.36 | 18.91 | 24.70 |
| 350 | 884.86 | 820.74 | 966.80 | 45.23 | 33.45 | 37.13 | 23.79 | 12.81 | 18.45 |
| 400 | 884.86 | 664.94 | 820.72 | 43.72 | 20.50 | 32.80 | 23.79 | 10.18 | 17.96 |
| 450 | 884.86 | 664.94 | 783.10 | 43.72 | 20.50 | 32.24 | 21.11 | 8.06 | 16.65 |
| 500 | 884.86 | 336.92 | 612.51 | 43.72 | 20.50 | 32.13 | 21.11 | 7.87 | 15.00 |
| 550 | 661.53 | 336.92 | 510.10 | 43.72 | 20.50 | 32.15 | 21.11 | 7.87 | 14.85 |
| 600 | 661.53 | 110.37 | 387.17 | 30.12 | 6.33 | 18.30 | 15.88 | 4.06 | 13.10 |
| 650 | 661.53 | 110.37 | 386.47 | 22.87 | 6.33 | 14.71 | 15.88 | 3.29 | 12.99 |
| 700 | 336.56 | 104.85 | 223.32 | 22.87 | 5.69 | 14.38 | 14.29 | 3.21 | 12.93 |
| 750 | 336.56 | 104.85 | 231.65 | 22.87 | 4.79 | 13.98 | 14.29 | 0.93 | 7.71 |
| 800 | 336.56 | 100.55 | 224.53 | 22.87 | 4.24 | 13.68 | 14.29 | 0.93 | 7.63 |
| 850 | 336.56 | 41.56 | 194.50 | 22.87 | 0.90 | 11.95 | 12.56 | 0.93 | 6.76 |
| 900 | 222.36 | 26.77 | 137.70 | 22.87 | 0.90 | 6.50 | 12.56 | 0.54 | 6.71 |
| 950 | 222.36 | 8.56 | 116.46 | 22.87 | 0.55 | 3.70 | 12.56 | 0.54 | 6.55 |
| 1000 | 222.36 | 2.01 | 112.20 | 22.87 | 0.39 | 1.25 | 12.56 | 0.54 | 6.47 |
Table 5. Average fitness values for f1 to f3, with NP = 150 and D = 30.

| Iterations | f1 PPSO | f1 ICPPSO | f1 FMPSO | f2 PPSO | f2 ICPPSO | f2 FMPSO | f3 PPSO | f3 ICPPSO | f3 FMPSO |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 14,050 | 14,024 | 15,099 | 175,000,000 | 164,000,000 | 241,248,283 | 55 | 56 | 46 |
| 50 | 12,540 | 10,360 | 11,965 | 102,000,000 | 38,400,000 | 87,877,763 | 51 | 53 | 44 |
| 100 | 11,540 | 9301 | 11,133 | 80,000,000 | 10,200,000 | 60,702,189 | 45 | 53 | 42 |
| 150 | 11,540 | 9046 | 10,546 | 80,000,000 | 2,440,000 | 56,845,973 | 43 | 49 | 40 |
| 200 | 11,540 | 8593 | 10,502 | 60,500,000 | 2,440,000 | 47,646,675 | 31 | 49 | 20 |
| 250 | 8540 | 8593 | 8581 | 60,500,000 | 2,440,000 | 44,848,163 | 30 | 37 | 17 |
| 300 | 8540 | 4811 | 6780 | 60,500,000 | 2,440,000 | 43,630,988 | 26 | 37 | 17 |
| 350 | 8540 | 4811 | 6697 | 60,500,000 | 2,440,000 | 39,809,808 | 18 | 37 | 14 |
| 400 | 8540 | 4811 | 6695 | 60,500,000 | 2,440,000 | 38,290,070 | 17 | 37 | 14 |
| 450 | 8540 | 4811 | 6691 | 60,500,000 | 2,440,000 | 37,754,894 | 17 | 37 | 14 |
| 500 | 8540 | 4811 | 6559 | 60,500,000 | 2,440,000 | 36,716,098 | 17 | 37 | 14 |
| 550 | 8540 | 4811 | 6461 | 60,500,000 | 2,440,000 | 35,860,231 | 17 | 37 | 14 |
| 600 | 8540 | 4811 | 6284 | 60,500,000 | 2,440,000 | 34,600,925 | 17 | 37 | 14 |
| 650 | 8540 | 4811 | 6194 | 60,500,000 | 2,440,000 | 33,636,356 | 17 | 37 | 13 |
| 700 | 8540 | 4811 | 6206 | 60,500,000 | 2,440,000 | 32,727,560 | 17 | 37 | 13 |
| 750 | 8540 | 4811 | 6125 | 60,500,000 | 2,440,000 | 31,774,173 | 17 | 37 | 13 |
| 800 | 8540 | 4811 | 6114 | 60,500,000 | 2,440,000 | 31,351,568 | 17 | 37 | 13 |
| 850 | 8540 | 4811 | 6101 | 60,500,000 | 2,440,000 | 31,123,045 | 17 | 37 | 13 |
| 900 | 8540 | 4811 | 6083 | 60,500,000 | 2,440,000 | 30,557,021 | 17 | 37 | 12 |
| 950 | 8540 | 4811 | 6049 | 60,500,000 | 2,440,000 | 30,098,774 | 17 | 37 | 12 |
| 1000 | 8540 | 4811 | 5941 | 60,500,000 | 2,440,000 | 29,068,581 | 17 | 37 | 11 |
Table 6. Average fitness values for f4 to f6, with NP = 150 and D = 30.

| Iterations | f4 PPSO | f4 ICPPSO | f4 FMPSO | f5 PPSO | f5 ICPPSO | f5 FMPSO | f6 PPSO | f6 ICPPSO | f6 FMPSO |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 4595 | 4656 | 2494 | 106 | 100 | 104 | 58 | 57 | 69 |
| 50 | 3385 | 4656 | 1920 | 80 | 40 | 61 | 56 | 51 | 54 |
| 100 | 2952 | 3466 | 1522 | 60 | 38 | 56 | 56 | 51 | 54 |
| 150 | 2290 | 2650 | 1087 | 50 | 38 | 45 | 56 | 50 | 53 |
| 200 | 2001 | 2650 | 983 | 50 | 29 | 43 | 56 | 45 | 52 |
| 250 | 1738 | 2650 | 900 | 50 | 29 | 41 | 48 | 41 | 45 |
| 300 | 966 | 2650 | 891 | 50 | 23 | 37 | 48 | 32 | 42 |
| 350 | 966 | 2650 | 835 | 50 | 23 | 37 | 48 | 32 | 40 |
| 400 | 966 | 2650 | 828 | 50 | 23 | 35 | 48 | 32 | 40 |
| 450 | 966 | 2650 | 742 | 50 | 23 | 35 | 48 | 27 | 40 |
| 500 | 966 | 2650 | 740 | 50 | 23 | 34 | 48 | 21 | 35 |
| 550 | 966 | 2650 | 727 | 50 | 23 | 33 | 38 | 11 | 34 |
| 600 | 898 | 2650 | 722 | 50 | 23 | 32 | 38 | 11 | 28 |
| 650 | 898 | 2650 | 715 | 50 | 23 | 32 | 38 | 11 | 24 |
| 700 | 898 | 2650 | 713 | 50 | 23 | 30 | 38 | 11 | 24 |
| 750 | 898 | 2650 | 710 | 50 | 23 | 29 | 38 | 11 | 24 |
| 800 | 898 | 2650 | 707 | 50 | 23 | 28 | 38 | 11 | 23 |
| 850 | 898 | 2650 | 703 | 50 | 23 | 28 | 38 | 11 | 21 |
| 900 | 898 | 2650 | 701 | 50 | 23 | 26 | 38 | 11 | 21 |
| 950 | 898 | 2650 | 699 | 50 | 23 | 26 | 38 | 11 | 20 |
| 1000 | 898 | 2650 | 684 | 50 | 23 | 26 | 38 | 11 | 19 |
Table 7. Average fitness values for f1 to f3, with NP = 200 and D = 50.

| Iterations | f1 PPSO | f1 ICPPSO | f1 FMPSO | f2 PPSO | f2 ICPPSO | f2 FMPSO | f3 PPSO | f3 ICPPSO | f3 FMPSO |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 124,665 | 123,689 | 132,712 | 159,000,000 | 159,000,000 | 172,801,337 | 86 | 85 | 94 |
| 50 | 100,633 | 81,228 | 96,953 | 99,300,000 | 33,600,000 | 74,821,574 | 71 | 63 | 72 |
| 100 | 100,633 | 81,228 | 95,420 | 99,300,000 | 33,600,000 | 71,587,729 | 71 | 63 | 70 |
| 150 | 100,633 | 81,228 | 94,388 | 99,300,000 | 33,600,000 | 66,450,164 | 71 | 63 | 67 |
| 200 | 100,633 | 81,228 | 92,938 | 99,300,000 | 33,600,000 | 62,723,479 | 71 | 63 | 67 |
| 250 | 100,633 | 81,228 | 91,402 | 99,300,000 | 33,600,000 | 60,642,275 | 71 | 63 | 67 |
| 300 | 100,633 | 81,228 | 90,192 | 99,300,000 | 33,600,000 | 60,094,319 | 71 | 63 | 67 |
| 350 | 100,633 | 81,228 | 89,454 | 99,300,000 | 33,600,000 | 59,332,208 | 71 | 63 | 67 |
| 400 | 100,633 | 81,228 | 88,351 | 99,300,000 | 33,600,000 | 57,250,660 | 71 | 63 | 66 |
| 450 | 100,633 | 81,228 | 88,049 | 99,300,000 | 33,600,000 | 55,286,966 | 71 | 63 | 66 |
| 500 | 100,633 | 81,228 | 87,921 | 99,300,000 | 33,600,000 | 51,938,099 | 71 | 63 | 66 |
| 550 | 100,633 | 81,228 | 86,443 | 99,300,000 | 33,600,000 | 50,313,669 | 71 | 63 | 66 |
| 600 | 100,633 | 81,228 | 85,106 | 99,300,000 | 33,600,000 | 47,799,437 | 71 | 63 | 66 |
| 650 | 100,633 | 81,228 | 84,200 | 99,300,000 | 33,600,000 | 46,279,179 | 71 | 63 | 65 |
| 700 | 100,633 | 81,228 | 83,998 | 99,300,000 | 33,600,000 | 43,236,207 | 71 | 63 | 65 |
| 750 | 100,633 | 81,228 | 83,898 | 99,300,000 | 33,600,000 | 41,989,082 | 71 | 63 | 65 |
| 800 | 100,633 | 81,228 | 83,771 | 99,300,000 | 33,600,000 | 39,503,950 | 71 | 63 | 65 |
| 850 | 100,633 | 81,228 | 83,626 | 99,300,000 | 33,600,000 | 38,500,179 | 71 | 63 | 65 |
| 900 | 100,633 | 81,228 | 83,593 | 99,300,000 | 33,600,000 | 38,039,391 | 71 | 63 | 65 |
| 950 | 100,633 | 81,228 | 83,451 | 99,300,000 | 33,600,000 | 37,526,643 | 71 | 63 | 65 |
| 1000 | 100,633 | 81,228 | 83,345 | 99,300,000 | 33,600,000 | 37,416,983 | 71 | 63 | 65 |
Table 8. Average fitness values for f4 to f6, with NP = 200 and D = 50.

| Iterations | f4 PPSO | f4 ICPPSO | f4 FMPSO | f5 PPSO | f5 ICPPSO | f5 FMPSO | f6 PPSO | f6 ICPPSO | f6 FMPSO |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 45,000 | 44,965 | 46,373 | 610 | 610 | 630 | 700 | 629 | 665 |
| 50 | 27,950 | 26,975 | 32,771 | 475 | 395 | 486 | 450 | 233 | 482 |
| 100 | 24,300 | 22,008 | 23,247 | 475 | 395 | 486 | 450 | 233 | 365 |
| 150 | 24,300 | 20,477 | 22,509 | 475 | 395 | 437 | 450 | 233 | 362 |
| 200 | 24,300 | 20,305 | 21,541 | 475 | 395 | 429 | 450 | 233 | 361 |
| 250 | 19,157 | 17,383 | 18,701 | 475 | 395 | 428 | 450 | 233 | 355 |
| 300 | 19,157 | 16,664 | 18,107 | 475 | 395 | 423 | 450 | 233 | 345 |
| 350 | 19,157 | 16,311 | 18,060 | 475 | 395 | 421 | 450 | 233 | 340 |
| 400 | 19,157 | 15,432 | 17,648 | 475 | 395 | 419 | 450 | 233 | 334 |
| 450 | 19,157 | 13,674 | 17,016 | 475 | 395 | 416 | 450 | 233 | 333 |
| 500 | 19,157 | 12,541 | 16,342 | 475 | 395 | 416 | 450 | 233 | 327 |
| 550 | 19,157 | 11,677 | 15,297 | 475 | 395 | 413 | 450 | 233 | 325 |
| 600 | 19,157 | 11,565 | 14,123 | 475 | 395 | 412 | 450 | 233 | 323 |
| 650 | 19,157 | 11,221 | 13,076 | 475 | 395 | 411 | 450 | 233 | 318 |
| 700 | 19,157 | 10,654 | 11,007 | 475 | 395 | 406 | 450 | 233 | 308 |
| 750 | 19,157 | 9679 | 9901 | 475 | 395 | 406 | 450 | 233 | 300 |
| 800 | 19,157 | 9569 | 8615 | 475 | 395 | 405 | 450 | 233 | 292 |
| 850 | 19,157 | 9272 | 8217 | 475 | 395 | 403 | 450 | 233 | 286 |
| 900 | 19,157 | 8532 | 8181 | 475 | 395 | 402 | 450 | 233 | 282 |
| 950 | 19,157 | 8417 | 8005 | 475 | 395 | 402 | 450 | 233 | 279 |
| 1000 | 19,157 | 8304 | 7901 | 475 | 395 | 400 | 450 | 233 | 272 |
Table 9. Wilcoxon significance test results for the proposed algorithm (ICPPSO) vs. the PPSO and FMPSO algorithms for functions f1–f6, using 10D at a significance level of 0.05.

| Function | Wilcoxon Test for PPSO vs. ICPPSO | Wilcoxon Test for FMPSO vs. ICPPSO |
|---|---|---|
| f1 | Statistics = 17.000; p = 0.000 | Statistics = 4.000; p = 0.000 |
| f2 | Statistics = 12.000; p = 0.000 | Statistics = 10.000; p = 0.000 |
| f3 | Statistics = 1.000; p = 0.000 | Statistics = 0.000; p = 0.000 |
| f4 | Statistics = 18.000; p = 0.000 | Statistics = 3.000; p = 0.000 |
| f5 | Statistics = 1.000; p = 0.000 | Statistics = 0.000; p = 0.000 |
| f6 | Statistics = 0.000; p = 0.000 | Statistics = 0.000; p = 0.000 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
