Article

Chaotic Quantum Double Delta Swarm Algorithm Using Chebyshev Maps: Theoretical Foundations, Performance Analyses and Convergence Issues

by Saptarshi Sengupta *, Sanchita Basak and Richard Alan Peters II
Department of Electrical Engineering and Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2019, 8(1), 9; https://doi.org/10.3390/jsan8010009
Submission received: 8 December 2018 / Revised: 31 December 2018 / Accepted: 11 January 2019 / Published: 17 January 2019
(This article belongs to the Special Issue AI and Quantum Computing for Big Data Analytics)

Abstract:
The Quantum Double Delta Swarm (QDDS) Algorithm is a networked, fully-connected novel metaheuristic optimization algorithm inspired by the convergence mechanism to the center of potential generated within a single well of a spatially co-located double delta well setup. It mimics the wave nature of candidate positions in solution spaces and draws upon quantum mechanical interpretations much like other quantum-inspired computational intelligence paradigms. In this work, we introduce a Chebyshev map driven chaotic perturbation in the optimization phase of the algorithm to diversify the weights placed on contemporary and historical, socially-optimal agents' solutions. We follow this up with a characterization of solution quality on a suite of 23 single-objective functions and carry out a comparative analysis with eight other related nature-inspired approaches. By comparing solution quality and successful runs over dynamic solution ranges, insights about the nature of convergence are obtained. A two-tailed t-test establishes the statistical significance of the solution data, whereas Cohen's d and Hedges' g values provide a measure of effect sizes. We trace the trajectory of the fittest pseudo-agent over all iterations to comment on the dynamics of the system and prove that the proposed algorithm is theoretically globally convergent under the assumptions adopted for the proofs of other closely-related random search algorithms.


1. Introduction

With sensor fusion and big data taking center stage in ubiquitous computing niches, the importance of customized, application-specific optimization paradigms is gaining recognition. The computational intelligence community is poised for exponential growth as nature-inspired modeling becomes ever more practicable in the face of abundant computational power. Thus, it is in the interest of exploratory analysis to mimic different natural systems in order to gain an adequate understanding of when, and on which kinds of problems, certain types of biomimicry work particularly well. In this work, a subclass of the modeling paradigm of quantum-mechanical systems involving two Dirac delta potential functions is studied. The technique chosen for the study, viz. the Quantum Double Delta Swarm (QDDS) algorithm [1], extends the well-known Quantum-behaved Particle Swarm Optimization (QPSO) [2,3,4] by using an additional Dirac delta well and imposing motional constraints on particles to effect convergence to a single well under the influence of both. The particles in QDDS are centrally pulled by an attractive potential field, and a recursive Monte Carlo relation is established by the collapse of the wave functions around the center of the wells. The methodology was put forward and tested on select unimodal and multimodal benchmarks in Sengupta et al. [1] and yields promising solution quality when compared to Xi et al. [4]. In this work, we primarily report performance improvements of the QDDS algorithm when its solution update process is influenced by a random perturbation drawn from a Chebyshev chaotic map. The perturbation seeks to diversify the weight array corresponding to the current and socially-optimal agents' solutions. A detailed performance characterization over twenty-three single-objective, unimodal and multimodal functions of fixed and varying dimensions is carried out. The characterization is repeated for eight other nature-inspired approaches to provide a basis for comparison. The collective potential (cost) quality and precision data from the experimentation provide information on the operating conditions and tradeoffs, while the conclusion drawn from a subsequent two-tailed t-test points to the statistical significance of the results at the Θ = 0.05 level. We follow the path of the best performing agent in each iteration across all iterations and critically analyze the dynamical limitations of the algorithm (we assume that one iteration is equivalent to an atomic-level function evaluation). Consequently, we also look at the global convergence proof for random search algorithms [5] and contend that the proposed algorithm theoretically converges to the global infimum under certain weak assumptions adopted for the convergence proofs of similar random search techniques.
The organization of the article is as follows. In Section 2, we walk through a couple of major swarm intelligence paradigms and derive our way through the classical and quantum interpretations in these multiagent systems. In Section 3, we discuss swarm propagation under the influence of a double Dirac delta well and set up its quantum-mechanical model. In Section 4, we outline the QDDS and the Chebyshev map driven QDDS (C-QDDS) algorithms and provide a detailed algorithmic procedure for purposes of reproducibility. Following this, in Section 5, we detail the benchmark optimization problems and graphically illustrate their three-dimensional representations. This is followed in Section 6 by comparative analyses of iterations on the benchmarks and statistical significance tests, taking into account the contribution of effect sizes. The trajectory of the best performing agent in each iteration is tracked along the function contours, and the limitations and successes of the approach are identified. In Section 7, critical analyses are presented in light of the findings. In Section 8, a global convergence proof is given for the algorithm, and finally, Section 9 charts out future directions and concludes the paper.

2. Background

The seminal work of Eberhart and Kennedy on flocking-induced stochastic multiparticle swarming resulted in a surge of nature-inspired optimization research, specifically after their highly influential paper Particle Swarm Optimization [6] (PSO) at the International Conference on Neural Networks in Perth, Australia in 1995. This was a landmark moment in the history of swarm intelligence, and the following years saw a surge of interest in the application of nature-inspired methods to engineering problems that were until then either intractable or simply hard from a computational standpoint. With a steady increase in processor speed and distributed computing capability over the last couple of decades, gradient-independent approaches have become increasingly common. The equations of motion in PSO are powerful precisely because of their simplicity, intuitiveness, and low computational cost. In this section, a formal transition from the classical model of the canonical PSO to that of quantum-inspired PSO, the Quantum-behaved PSO (QPSO), is explored. The QPSO model assumes quantum properties in agents and establishes an uncertainty-based position distribution instead of the deterministic one of the canonical PSO with its Newtonian walks. Importantly, the QPSO algorithm requires the practitioner to tune only one parameter, the Contraction–Expansion (CE) coefficient, instead of three in PSO. It is worth examining the dynamics of a PSO-driven swarm first, to gain a better understanding of the single and double Dirac delta driven quantum swarms discussed later in the article.

2.1. The Classical PSO

Assume $\{x_i\}_{i=1}^{m} = [x_1\ x_2\ x_3\ \cdots\ x_m]$ is the cohort of $m$ particles of dimensionality $n$ and $\{v_i\}_{i=1}^{m} = [v_1\ v_2\ v_3\ \cdots\ v_m]$ are the velocity vectors which denote incremental changes in their positions in the solution hyperspace. Given this knowledge, a canonical PSO-like formulation may be expressed as:
$$v_{ij}(t+1) = w \cdot v_{ij}(t) + C_1 \cdot r_1(t) \cdot \left(P_{ij}(t) - x_{ij}(t)\right) + C_2 \cdot r_2(t) \cdot \left(P_{gj}(t) - x_{ij}(t)\right) \tag{1}$$
$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1) \tag{2}$$
The parameters $w, C_1, C_2, r_1, r_2$ are responsible for imparting inertia, cognitive, and social weights as well as random perturbations towards the historical best position $P_{ij}(t)$ of any particle (pbest) or $P_{gj}(t)$, that of the swarm as a whole (gbest). The canonical PSO model mimics social information exchange in flocks of birds and schools of fish and is a simple, yet powerful, optimization paradigm. However, it has its limitations: Van den Bergh showed that the algorithm is not guaranteed to converge to globally optimum solutions based on the convergence criteria put forward by Solis and Wets [5]. Clerc and Kennedy demonstrated that the algorithm may converge if particles cluster about a local attractor $p$ lying at the diagonal end of the hyper-rectangle constructed using its cognitive and social velocity vectors [7] (terms 2 and 3 on the right-hand side of Equation (1), respectively). Proper tuning of the algorithmic parameters and limits on the velocity are usually required to bring about convergent behavior. The interested reader may consult [8,9,10,11] for detailed operating conditions, possible applications, and troubleshooting of issues when working with the PSO algorithm.
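For concreteness, Equations (1) and (2) translate directly into code. The following minimal sketch (Python with NumPy; the coefficient values shown are common defaults from the PSO literature rather than prescriptions of this paper) performs one iteration of the update for the whole cohort:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One canonical PSO iteration, Equations (1)-(2).
    x, v, pbest: arrays of shape (m, n); gbest: array of shape (n,)."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # cognitive random perturbation r1(t)
    r2 = rng.random(x.shape)  # social random perturbation r2(t)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Equation (1)
    return x + v, v                                            # Equation (2)
```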

2.2. The Quantum-Behaved PSO

The local attractor $p$, introduced by Clerc and Kennedy [7] as the point around which particles should flock in order to bring about swarm-wide convergence, can be formally expressed using Equation (3); further simplification leads to the parameter-reduced form in Equation (4). This result, of course, follows from the assumption that $c_1$ and $c_2$ may take on any values between 0 and 1.
$$p_{ij}(t+1) = \frac{c_1 P_{ij}(t) + c_2 P_{gj}(t)}{c_1 + c_2} \tag{3}$$
$$p_{ij}(t+1) = \varphi\, P_{ij}(t) + (1 - \varphi)\, P_{gj}(t), \quad \varphi \sim U(0,1) \tag{4}$$
Drawing insights from this analysis, Sun et al. [2,3] outlined the algorithmic workings of Quantum-behaved Particle Swarm Optimization (QPSO). Instead of point representations of a particle, wave functions were used to provide a quantitative sense of its state. The normalized probability density function F of a particle may be put forward as:
$$F(X_{ij}(t+1)) = \frac{1}{L_{ij}(t)} \exp\left(-\frac{2\,\left|p_{ij}(t) - X_{ij}(t+1)\right|}{L_{ij}(t)}\right) \tag{5}$$
L is the standard deviation of the distribution: it provides a measure of the dynamic range of the search space of a particle in a specific timestep. Using the Monte Carlo method, Equation (5) may be transformed into a recursive, computable closed-form expression for particle positions in Equation (6) below:
$$X_{ij}(t+1) = p_{ij}(t) \pm \frac{L_{ij}(t)}{2} \ln\left(\frac{1}{u}\right), \quad u \sim U(0,1) \tag{6}$$
L is computed as a measure of deviation from the average of all individual personal best particle positions (pbest) in each dimension, i.e., the farther from the average a particle is in a dimension, the larger the value of L for that dimension. This average position has been dubbed the 'Mean Best' or 'mbest' and is an agglomerative representation of the swarm, as if each member were at the personal best position it has visited in the course of its history.
$$\text{mbest}(t) = \left[\text{mbest}_1(t)\ \ \text{mbest}_2(t)\ \ \text{mbest}_3(t)\ \cdots\ \text{mbest}_j(t)\right] = \left[\frac{1}{m}\sum_{i=1}^{m} p_{i1}(t)\ \ \frac{1}{m}\sum_{i=1}^{m} p_{i2}(t)\ \ \frac{1}{m}\sum_{i=1}^{m} p_{i3}(t)\ \cdots\ \frac{1}{m}\sum_{i=1}^{m} p_{ij}(t)\right] \tag{7}$$
Therefore, L may be expressed in terms of the deviation from mbest, as in Equation (8). The modulation factor β is known as the Contraction–Expansion (CE) Factor and may be adjusted to control the convergence speed of the QPSO algorithm depending on the application.
$$L_{ij}(t) = 2\beta\,\left|\text{mbest}_j(t) - X_{ij}(t)\right| \tag{8}$$
Subsequently plugging the value of L obtained in Equation (8) into Equation (6), the position update formulation for QPSO may be re–expressed as the following:
$$X_{ij}(t+1) = p_{ij}(t) \pm \beta\,\left|\text{mbest}_j(t) - X_{ij}(t)\right|\,\ln\left(\frac{1}{u}\right), \quad u \sim U(0,1) \tag{9}$$
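The sampling procedure of Equations (4) and (7)–(9) is compact enough to state directly. A minimal sketch follows (Python with NumPy; the array shapes and the fair coin flip realizing the ± branch of Equation (9) are implementation assumptions):

```python
import numpy as np

def qpso_step(X, pbest, gbest, beta, rng=None):
    """One QPSO position update following Equations (4) and (7)-(9).
    X, pbest: arrays of shape (m, n); gbest: array of shape (n,)."""
    rng = rng or np.random.default_rng()
    m, n = X.shape
    mbest = pbest.mean(axis=0)                         # mean best, Equation (7)
    phi = rng.random((m, n))
    p = phi * pbest + (1.0 - phi) * gbest              # local attractor, Equation (4)
    u = rng.random((m, n))
    sign = np.where(rng.random((m, n)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)  # Equations (8)-(9)
```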
Issues such as suboptimal convergence during the application of the QPSO algorithm may arise out of an unbiased (uniform) selection of weights in the mean best computation as well as the overdependence on the globally best particle in the design of the local attractor $p$. These issues have also been studied by Xi et al. [12], Sengupta et al. [13], and Dhabal et al. [14]. Xi et al. proposed a differentially weighted mean best [4]: a variant of the QPSO algorithm with a weighted mean best position (WQPSO), which seeks to alleviate the subpar selection of weights in the mean best update process. The underlying assumption is that fitter particles stand to contribute more to the mean best position and that these particles should be accorded larger weights, drawing an analogy with the correlation between cultural uptick and the contributions of the societal, intellectually elite to it [4]. Xi et al. also put forward [12] a local search strategy using a 'super particle' with variable contributions from swarm members to overcome the dependence issues during the local attractor design. However, to date no significant study has been undertaken to investigate the effect of more than one spatially co-located basin of attraction around the local attractor, particularly that of multi-well systems. In the next section we seek to derive state expressions of a particle convergent upon one well under the influence of two spatially co-located Dirac delta wells.

3. Swarming under the Influence of Two Delta Potential Wells

The time-independent Schrödinger wave equation governs the different interpretations of particle behavior:
$$\left[-\frac{\hbar^2}{2m}\nabla^2 + V(r)\right]\psi(r) = E\,\psi(r) \tag{10}$$
ψ(r), V(r), m, E, and ℏ represent the wave function, the potential function, the reduced mass, the energy of the particle, and the reduced Planck constant, respectively. However, the wave function ψ(r) has no physical significance on its own: its amplitude squared is a measure of the probability of finding the particle. Let us consider a particle under the influence of two delta potential wells experiencing an attractive potential V:
$$V(r) = -\mu\,\left\{\delta(r+a) + \delta(r-a)\right\} \tag{11}$$
The centers of the two wells are at $-a$ and $a$, and μ is a constant indicative of the depth of the wells. Under the assumption that the particle experiences no attractive potential, i.e., V = 0, in regions far away from the centers, the time-independent Schrödinger equation of Equation (10) reduces to the following form:
$$-\frac{\hbar^2}{2m}\frac{d^2}{dr^2}\psi(r) = E\,\psi(r) \tag{12}$$
The even solutions for ψ for E < 0 (bound states) in the regions $R_1: r \in (-\infty, -a)$, $R_2: r \in (-a, a)$, and $R_3: r \in (a, \infty)$, taking $k = \sqrt{-2mE}/\hbar$, can be expressed as proved in Griffiths [15]:
$$\psi_{\text{even}}(r) = \begin{cases} \eta_1 \exp(-kr), & r > a \\ \eta_2 \exp(kr) + \eta_3 \exp(-kr), & 0 < r < a \\ \eta_2 \exp(-kr) + \eta_3 \exp(kr), & -a < r < 0 \\ \eta_1 \exp(kr), & r < -a \end{cases} \tag{13}$$
The constants $\eta_1$ and $\eta_2$ described in the above equation are obtained by (a) solving for the continuity of the wave function $\psi_{\text{even}}$ at $r = a$ and $r = -a$ and (b) solving for the continuity of the derivative of the wave function at $r = 0$. Thus, $\psi_{\text{even}}$ may be rewritten as below, following Griffiths [15].
$$\psi_{\text{even}}(r) = \begin{cases} \eta_2\left\{1 + \exp(2ka)\right\}\exp(-kr), & r > a \\ \eta_2\left\{\exp(kr) + \exp(-kr)\right\}, & -a < r < a \\ \eta_2\left\{1 + \exp(2ka)\right\}\exp(kr), & r < -a \end{cases} \tag{14}$$
The odd wave function $\psi_{\text{odd}}$ does not guarantee that a solution will be found [15]. Additionally, the bound state energy in the double well setup is lower than that in a single well setup by approximately a factor of $(1.11)^2 \approx 1.2321$ [16]:
$$E_{\text{bs, Double Well}} = (1.11)^2\, E_{\text{bs, Single Well}} \tag{15}$$
To study the motional aspect of a particle, its probability density function, given by the squared magnitude of $\psi_{\text{even}}$, is formally expressed. Further, the claim that there is a greater than 50% probability of the particle existing in the neighborhood of the center of either of the potential wells (assumed centered at 0) boils down to the following criterion being met [2].
$$\int_{-|r|}^{|r|} \psi_{\text{even}}(r)^2\, dr > 0.5 \tag{16}$$
$-|r|$ and $|r|$ are the dynamic limits of the neighborhood. Doing away with the inequality, Equation (16) is rewritten as:
$$\int_{-|r|}^{|r|} \psi_{\text{even}}(r)^2\, dr = 0.5\,\lambda, \quad (1 < \lambda < 2) \tag{17}$$
Equation (17) is the criterion for localization around the center of a potential well in a double Dirac delta well.

4. The Quantum Double Delta Swarm (QDDS) Algorithm

To ease computations, we make the assumption that one of the two potential wells is centered at 0. Then, solving for the conditions of localization of the particle in the neighborhood around the center of that well and computing $\int_{-|r|}^{|r|} \psi(r)^2\, dr$ over the regions $R_2^{0-}: r \in (-r, 0)$ and $R_2^{0+}: r \in (0, r)$, we obtain the relationship below.
$$\eta_2^2 = \frac{k\lambda}{\exp(2kr) - 5\exp(-2kr) + 4kr + 4} \tag{18}$$
Denoting the denominator of the right-hand side (R.H.S.) of Equation (18), i.e., $\left(\exp(2kr) - 5\exp(-2kr) + 4kr + 4\right)$, by δ, we rewrite it as
$$\delta = \exp(2kr) - 5\exp(-2kr) + 4kr + 4 \tag{19}$$
Equating $\eta_2^2$ on the left-hand side (L.H.S.) of Equation (18) for any two consecutive iterations (assuming it is constant over iterations, as it is not a function of time), we get Equations (20)–(22):
$$\frac{\lambda_t}{\exp(2kr_t) - 5\exp(-2kr_t) + 4kr_t + 4} = \frac{\lambda_{t-1}}{\exp(2kr_{t-1}) - 5\exp(-2kr_{t-1}) + 4kr_{t-1} + 4} \tag{20}$$
$$\frac{\lambda_t}{\delta_t} = \frac{\lambda_{t-1}}{\delta_{t-1}} \tag{21}$$
$$\delta_t = \Lambda \cdot \delta_{t-1}, \quad (0.5 < \Lambda < 2) \tag{22}$$
Λ is the ratio $(\lambda_t / \lambda_{t-1})$ and may vary between 0.5 and 2 since $1 < \lambda < 2$. To keep a particle constrained within the vicinity of the center of the potential well, it must meet the following condition.
$$\frac{1}{2}\,\delta_{t-1} < \delta_t < 2\,\delta_{t-1} \tag{23}$$
Thus, we find $\delta_t$ for any iteration by utilizing $\delta_{t-1}$, obtained in the immediately preceding iteration. This is done by accounting for a correction factor in the form of the gradient of $\delta_{t-1}$, multiplied by a learning rate α. The computation of $\delta_t$ from $\delta_{t-1}$ feeds off the relationship of $\delta_{t-1}$ with $\delta_{t-2}$ while taking the sign of the gradient of $\delta_{t-1}$ into consideration. The procedural details are outlined in Algorithm 1. The learning rate α is chosen as a linearly decreasing, time-varying (LTV) one to help facilitate exploration of the solution space early on in the optimization phase and a gradual shift to exploitation as the process evolves. ν is a small fraction between 0 and 1 chosen at will; one empirically successful value is 0.3, and we use it in our computations.
$$\alpha = (1 - \nu)\left(\frac{\text{maximum number of iterations} - \text{current iteration}}{\text{maximum number of iterations}}\right) + \nu \tag{24}$$
Upon computing a value for $\delta_t$, Equation (19) is solved to retrieve an estimate of $r_t$, which denotes a candidate position as well as a potential solution at the end of that iteration.
$$r_t \leftarrow \text{Solve}\left[\delta - \left(\exp(2kr) - 5\exp(-2kr) + 4kr + 4\right) = 0\right] \tag{25}$$
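Because Equation (25) is transcendental in r, the recovery of $r_t$ must be numerical. One possible realization is sketched below (Python with SciPy; k = 5 follows Section 5.2, and the bracketing half-width is an illustrative assumption; since the denominator expression of Equation (19) is strictly increasing in r, the bracketed root is unique):

```python
import numpy as np
from scipy.optimize import brentq

def solve_r(delta_t, k=5.0, r_bound=10.0):
    """Numerically recover r_t from delta_t via Equation (25)."""
    g = lambda r: delta_t - (np.exp(2*k*r) - 5*np.exp(-2*k*r) + 4*k*r + 4)
    # g changes sign over [-r_bound, r_bound], so Brent's method converges
    return brentq(g, -r_bound, r_bound)
```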
We let $r_t$, i.e., a particle's position in the current iteration, maintain a component towards the best position found so far (gbest) in addition to its current solution obtained from Equation (19). Let ρ denote the weight placed on the current solution and (1 − ρ) the weight placed on the gbest position.
$$r_t^{\text{new}} = \rho\, r_t + (1 - \rho)\, r_{\text{gbest}} \tag{26}$$
A cost function is subsequently computed and the corresponding particle position is saved if the cost is lowest among all the historical swarm-wide best costs obtained. This process is repeated until the convergence criteria of choice (solution accuracy threshold, computational expense, memory requirements, success rate, etc.) are met. Figure 1 illustrates the double well potential setup.

4.1. QDDS with Chaotic Chebyshev Map (C-QDDS)

In this section, we use a Chebyshev chaotic map to generate coefficient sequences for driving the belief ρ in the solution update phase of the QDDS algorithm.

4.1.1. Chebyshev Map Driven Solution Update: Motivation

Chaotic metaheuristics necessitate control over the balance between the diversification and intensification phases. The diversification phase is carried out by choosing an appropriate chaotic system which performs the extensive search, while the intensification phase is carried out by performing a local search such as gradient descent. It is important that during the initial progression of the search, multiple orbits pass through the vicinity of the local extrema. A large perturbation weight ensures that the strange attractor of one local extremum intersects the strange attractor of any of the other local extrema [17]. To this end, we generate a sorted sequence, acting as a perturbation source of tapering magnitude, from the Chebyshev chaotic map using the recursive relation in Equation (27) [18]. There is a relative dearth of studies looking at chaotic perturbations to agent positions to drive them towards socially optimal agent locations. In our approach, we look to facilitate extensive communication among agents by employing larger chaotic weights (diversification phase) in the initial stages and local communication among agents by tapering the weights (intensification phase) as the function evaluations progress. The optimal choice and arrangement of the modulus and sign of the weights, generated using a pseudo-random number generator or any other method for that matter, is subject to change with a change in the application problem and is very much an open question in an exploration–exploitation-based search niche. However, the two properties of ergodicity and non-repetition in chaotic time sequences have proved useful in a number of related classical studies [19,20,21] and are key factors supporting the choice of the perturbation weights in this work. Furthermore, the large Lyapunov coefficient (a measure of chaoticity) and the space-filling nature of the Chebyshev sequence serve to help avoid stagnation in local extrema and supplement the choice of this type of chaotic map in the studies in this article. Figure 2 and Figure 3 show the generated weights ($\rho^{\text{Chebyshev}}$) from a Chebyshev chaotic map over 1000 iterations as well as the corresponding histogram. A schematic of the C-QDDS workflow is shown in Figure 4.
$$\rho_t^{\text{Chebyshev}} = \cos\left(t \cdot \cos^{-1}\left(\rho_{t-1}^{\text{Chebyshev}}\right)\right) \tag{27}$$
Equation (26) subsequently becomes
$$r_t^{\text{new}} = \rho_t^{\text{Chebyshev}}\, r_{\text{iter}} + \left(1 - \rho_t^{\text{Chebyshev}}\right) r_{\text{gbest}} \tag{28}$$
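A sketch of the weight generation and the resulting update follows (Python with NumPy; the seed value and the descending-magnitude ordering are illustrative choices consistent with the tapering perturbation described above):

```python
import numpy as np

def chebyshev_weights(n_iter, rho0=0.7, taper=True):
    """Coefficient sequence of Equation (27); rho0 in (0, 1) is an
    illustrative seed. With taper=True, magnitudes are sorted in
    decreasing order, giving a large-to-small perturbation schedule."""
    rho = np.empty(n_iter)
    rho[0] = rho0
    for t in range(1, n_iter):
        rho[t] = np.cos(t * np.arccos(rho[t - 1]))  # Equation (27)
    w = np.abs(rho)                                 # weights restricted to [0, 1]
    return np.sort(w)[::-1] if taper else w

# Equation (28), applied at iteration t with weight w[t]:
#   r_new = w[t] * r_iter + (1 - w[t]) * r_gbest
```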
Algorithm 1. Quantum Double Delta Swarm Algorithm

4.1.2. Pseudocode of the C-QDDS Algorithm

In this section, we present the pseudocode of the Chaotic Quantum Double Delta Swarm (C-QDDS) algorithm.
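In condensed form, one iteration of C-QDDS couples the δ update of Equations (22)–(24), the numerical solve of Equation (25), and the Chebyshev-weighted blend of Equation (28). The following Python sketch is a simplified rendering under stated assumptions; the signed δ correction, the initialization ranges, and the per-dimension root bracketing are illustrative simplifications, and the published Algorithm 1 remains the authoritative procedure:

```python
import numpy as np
from scipy.optimize import brentq

def c_qdds(cost, dim, n_agents=50, n_iter=1000, k=5.0, nu=0.3, seed=0):
    """Condensed C-QDDS sketch: one agent is drawn per iteration (Section 6)."""
    rng = np.random.default_rng(seed)
    expr = lambda r: np.exp(2*k*r) - 5*np.exp(-2*k*r) + 4*k*r + 4   # Equation (19)
    solve_r = lambda d: brentq(lambda x: d - expr(x), -10.0, 10.0)  # Equation (25)
    r = rng.uniform(0.1, 1.0, (n_agents, dim))       # illustrative initialization
    costs = np.apply_along_axis(cost, 1, r)
    gbest, gcost = r[costs.argmin()].copy(), costs.min()
    delta = expr(r[costs.argmin()])
    delta_old = delta.copy()
    rho = 0.7                                        # Chebyshev seed (illustrative)
    for t in range(1, n_iter + 1):
        alpha = (1 - nu) * (n_iter - t) / n_iter + nu          # LTV rate, Equation (24)
        i = rng.integers(n_agents)                             # one agent per iteration
        step = 1.0 + alpha * np.sign(delta - delta_old)        # simplified signed correction
        delta_new = np.clip(delta * step, 0.5 * delta, 2.0 * delta)  # Equation (23)
        r_t = np.array([solve_r(d) for d in delta_new])        # per-dimension solve
        rho = np.cos(t * np.arccos(np.clip(rho, -1.0, 1.0)))   # Equation (27)
        w = abs(rho)
        r[i] = w * r_t + (1.0 - w) * gbest                     # Equation (28)
        c = cost(r[i])
        if c < gcost:                                          # retain swarm-wide best
            gbest, gcost = r[i].copy(), c
        delta, delta_old = delta_new, delta
    return gbest, gcost
```

As a usage example under the same assumptions, `gbest, c = c_qdds(lambda x: float(np.sum(x**2)), dim=30)` runs the sketch on the sphere function F1.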

5. Experimental Setup

5.1. Benchmark Functions

The following suite of 23 optimization benchmark functions (F1–F23) is popularly used to inspect the performance of evolutionary optimization paradigms and has been utilized in this work to characterize the behavior of C-QDDS across unimodal and multimodal function landscapes of fixed and varying dimensionality.

5.2. Parameter Settings

We chose the constant k to be 5 and θ to be the product of a random number drawn from a zero-mean Gaussian distribution with a standard deviation of 0.5 and a factor of the order of $10^{-3}$, after a sufficient number of trials. The learning rate α decreases linearly with iterations from 1 to 0.3 according to Equation (24) as an LTV weight [8]. $\rho^{\text{Chebyshev}} \in [0, 1]$ is a random number generated using the Chebyshev chaotic map of Equation (27). All experiments were carried out on two Intel(R) Core(TM) i7-5500U CPUs @ 2.40 GHz with 8 GB RAM and one Intel(R) Core(TM) i7-2600 CPU @ 3.40 GHz with 16 GB RAM using MATLAB R2017a. All experiments were independently repeated 30 times in order to account for variability in the reported data due to the underlying stochasticity of the metaheuristics used. Clusters from the MATLAB Parallel Computing Cloud were utilized to speed up the benchmarking.

6. Experimental Results

Table 1 introduces some general terms used in the context of the algorithms and experimentation; Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 provide a detailed listing of the benchmark functions under consideration, and Table 9 provides the 3D plots of these functions, whereas Table 10, Table 11 and Table 12 report the performance of the C-QDDS algorithm on the test problems stacked against solution qualities obtained using eight other commonly used, recent nature-inspired approaches: (i) Sine Cosine Algorithm (SCA) [22], (ii) Dragonfly Algorithm (DFA) [23], (iii) Ant Lion Optimization (ALO) [24], (iv) Whale Optimization Algorithm (WOA) [25], (v) Firefly Algorithm (FA) [26], (vi) Quantum-behaved Particle Swarm Optimization (QPSO) [2,3], (vii) Particle Swarm Optimization with Damped Inertia (** PSO-I), and (viii) the canonical Particle Swarm Optimizer (PSO-II) [6]. Each algorithm was executed for 1000 iterations with 30 independent trials, following which the mean, standard deviation, and minimum values were noted. The testing procedures carried out on the 23 functions adhere to the dimensionalities and range constraints specified in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, and the 3D plots of these functions are shown in Table 9. A total of 50 agents were introduced into the particle pool, out of which only one agent is picked in each iteration. The rationale for choosing one agent instead of many or all from the pool is to investigate the incremental effect of a single agent's propagation under different nature-inspired dynamical perturbations. The ripple effect that would otherwise be caused by many sensor reading exchanges among many or all particles is thereby delayed, since a single particle affects the global pool of particles in any one iteration.
** PSO-I utilizes an exponentially decaying inertia weight for exploration–exploitation trade–off.

Test Results on Optimization Problems

7. Analysis of Experimental Results

Table 10, Table 11 and Table 12 report the solution qualities obtained on the suite of test functions F1–F23, followed by Table 13, Table 14, Table 15, Table 16 and Table 17, in which the win/tie/loss counts, average ranks, and results of statistical significance tests, namely a two-tailed t-test together with Cohen's d and Hedges' g values, are reported. From Table 10, Table 11 and Table 12, one can observe that C-QDDS has a distinct advantage over the other algorithms in terms of the quality of the optima found, outperforming competitors on unimodal functions such as F3–F5 and F7, and on multimodal ones such as F9–F13. However, solution quality drops for the multimodal functions F14–F23, with the agents getting stuck in local minima. One interpretation is that since communication between particles is limited when only one agent is drawn in an iteration, it takes a considerably large number of iterations for promising regions to be found. Alternatively, because the QDDS mechanism is based on gradient descent, saddle points and valleys introduce stagnation which is difficult to break out of. A two-tailed Student's t-test with significance level Θ = 0.05 in Table 15 is used to accept or reject the hypothesis that the performance of the C-QDDS algorithm is significant when compared to any of the other approaches. It is observed that, in general, C-QDDS provides superior solution quality when applied to the problems in Table 10 and Table 11 and that the difference is statistically significant at Θ = 0.05. A measure of the effect sizes is provided in Table 16 through the computation of Cohen's d values; however, to account for the small-sample correction, Hedges' g values have also been reported in Table 17.
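The statistical machinery used here is standard. A minimal sketch (Python with SciPy; the inputs are assumed to be NumPy arrays holding the 30 final costs per algorithm) computes the two-tailed t-test together with both effect sizes:

```python
import numpy as np
from scipy import stats

def significance_and_effect(a, b):
    """Two-tailed Student's t-test plus Cohen's d and Hedges' g for two
    independent samples of final costs (1-D NumPy arrays)."""
    t, p = stats.ttest_ind(a, b)                     # pooled-variance, two-tailed
    na, nb = len(a), len(b)
    s = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    d = (a.mean() - b.mean()) / s                    # Cohen's d (pooled std)
    g = d * (1.0 - 3.0 / (4.0 * (na + nb) - 9.0))    # Hedges' small-sample correction
    return p, d, g
```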
In Table 18, the number of successful executions against the obtained cost range for any algorithm is demonstrated for all test functions. The horizontal axis represents a value equivalent to the sum of the lowest cost obtained during the 30 runs of an algorithm and a fraction of the cost range i.e., (maximum cost) − (minimum cost), ranging from 0.1 through 1 at intervals of 0.1. The vertical axis is the cumulative number of trials that resulted in solutions with lower cost than the corresponding horizontal axis value. For example, the vertical axis value at the horizontal tick of 0.1 is the number of trials having cost values less than [ ( minimum   cost ) + 0.1 × { ( maximum   cost ) ( minimum   cost ) } ] . These curves are a measure of the variability of the algorithmic solutions within their reported cost ranges and an indicator of how top-heavy or bottom-heavy they are. It is important to note that the cost range for each algorithm is different on every test function execution and as such the curves are merely meant for an intuitive understanding of the variability of the solutions and not intended to provide any basis for comparison among the algorithms. Algorithms having the least standard deviation among the cohort are expected to have a uniform density of solutions in the cost range and as such should follow a roughly linear relationship between the variables in the horizontal and vertical axes. It may be noted that C-QDDS, which roughly follows this relationship, indeed has the least standard deviation in many cases, specifically for 14 of the 23 functions as illustrated in Table 13. This is in congruence with the convergence profiles of QDDS in Figures 1–12 of [1] which point out that QDDS is fairly consistent in its ability to converge to local optima of acceptable quality in certain problems.
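The construction of these curves reduces to a simple counting rule; a sketch follows (Python with NumPy, where `costs` is assumed to hold the 30 final costs of one algorithm on one function):

```python
import numpy as np

def success_counts(costs, fractions=np.arange(0.1, 1.01, 0.1)):
    """Cumulative success counts: for each fraction f, the number of trials
    with cost below (minimum cost) + f * ((maximum cost) - (minimum cost))."""
    costs = np.asarray(costs)
    lo, span = costs.min(), costs.max() - costs.min()
    return [int((costs < lo + f * span).sum()) for f in fractions]
```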
Table 19 shows the trajectory evolution of the global best position across the functional iterations for each test case using C-QDDS. For ease of visualization, the contours of the 30-dimensional functions as well as the obtained gbest, i.e., global best solutions, are plotted using only the first two dimensions. $P_0$ represents the initial gbest position and $P_1$ the gbest position upon convergence, given the convergence criteria. The interim gbest position transitions are shown by dotted lines. The solutions to the 23 test problems outlined in the paper are local minima; however, the quality of the solutions that the C-QDDS and QDDS algorithms provide on some of these problems is markedly better than that reported in some studies in the literature [4,27,28,29]. A logical next step to improve the optima-seeking capability of the QDDS/C-QDDS approach is to introduce a problem-independent random walk in the δ recomputation step of the algorithm instead of using gradient descent.

8. Notes on Convergence of the Algorithm

In this section, we discuss the convergence characteristics of the QDDS algorithm by formulating the algorithmic objective as an optimization problem and proving adherence to the requisite hypotheses under certain weak assumptions. We start by considering the following problem C.
C: Given a function f from $\mathbb{R}^n$ to $\mathbb{R}$ and a subset S of $\mathbb{R}^n$, a solution x in S is sought such that x minimizes f on S, or an acceptable approximation of the minimum of f on S is found.
A conditioned approach to solving C was proposed by Solis and Wets [5], which we describe below in Algorithm 2. The rest of the proof follows logically from [5], as has also been shown by Van den Bergh in [30] and Sun et al. in [31].
Algorithm 2. A conditioned approach to solving C [5]
1: Initialize $x_0$ in S and set e = 0.
2: Generate $\xi_e$ from the sample space $(\mathbb{R}^n, B, T_e)$.
3: Update $x_{e+1} = £(x_e, \xi_e)$, choose $T_{e+1}$, set e = e + 1, and return to Step 2.
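Stated as code, Algorithm 2 is a short loop. The sketch below (Python; `sample(e, history)` stands in for drawing $\xi_e$ from the conditional measure $T_e$, which is algorithm-specific) makes the greedy acceptance underlying Hypothesis 1 explicit:

```python
def conditioned_random_search(f, sample, x0, n_steps):
    """Skeleton of the conditioned random search of Algorithm 2."""
    x, history = x0, [x0]
    for e in range(n_steps):
        xi = sample(e, history)   # Step 2: draw xi_e from (R^n, B, T_e)
        if f(xi) <= f(x):         # Step 3: the map keeps the better point,
            x = xi                # so f(x_e) is monotone non-increasing (H1)
        history.append(x)
    return x
```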
The mapping £ is the optimization algorithm, and it must satisfy the following two hypotheses (H1 and H2) in order to be theoretically globally convergent.
Hypothesis 1 (H1).
$$f(£(x, \xi)) \leq f(x), \quad \text{and if } \xi \in S \text{ then } f(£(x, \xi)) \leq f(\xi)$$
The sequence $\{f(x_e)\}_{e=1}^{\infty}$ generated by £ must monotonically reach a stable value, i.e., the infimum, for the mapping to be globally convergent.
Hypothesis 2 (H2).
For any Borel subset A of S with $\vartheta(A) > 0$, it can be proved that
$$\prod_{e=0}^{\infty}\left\{1 - T_e(A)\right\} = 0$$
This means that if there exists a subset A of S with positive volume, then the probability that the generated random samples $\xi_e$ repeatedly miss A is zero. Guided random search methods are conditioned, which implies that $T_e$ depends on the $x_0, x_1, \ldots, x_{e-1}$ generated in the preceding iterations. Therefore, $T_e(A)$ is a conditional probability measure.
Definition 1 (D1).
Values close to the essential infimum σ are generated by a set of points having non-zero ϑ measure.
$$\sigma = \inf\left\{t : \vartheta\left[x \in S \mid f(x) < t\right] > 0\right\}$$
Definition 2 (D2).
The acceptable solution range $N_{\varepsilon, S}$ for C is constructed around the essential infimum σ with step size ε and bounded support S.
$$N_{\varepsilon, S} = \begin{cases} \{x \in S \mid f(x) < \sigma + \varepsilon\}, & \sigma \in (-\infty, \infty) \\ \{x \in S \mid f(x) < -S\}, & \sigma \text{ is infinite} \end{cases}$$
Theorem 1 (T1).
The Global Convergence Theorem for Random Search Algorithms states that when H1 and H2 are satisfied on a measurable subset of $\mathbb{R}^n$ for a measurable function f, the probability that the conditioned sequence $\{x_e\}_{e=1}^{\infty}$ generated by the algorithm lies within the acceptable solution range $N_{\varepsilon, S}$ for C is one.
$$\lim_{e \to \infty} P(x_e \in N_{\varepsilon, S}) = 1$$

Notes on Theoretical Convergence of the QDDS Algorithm

Proposition 1 (P1).
The QDDS algorithm satisfies Hypothesis 1.
Let us consider the solution update stage of the QDDS algorithm. If a new solution is generated such that its fitness is better than the ones recorded so far (global best), it replaces the best solution and is stored in memory.
$$x_{i, e+1} = £(x_{i, e})$$
$$\text{update}(\text{gbest}, x_e) = \begin{cases} x_e, & \text{fit}(x_e) < \text{fit}(\text{gbest}) \\ \text{gbest}, & \text{otherwise} \end{cases}$$
This implies that the sequence $\{\text{fit}(\text{gbest}_e)\}_{e=1}^{\infty}$ is monotonically non-increasing and that $\text{fit}(£(x_e, \text{gbest}_e)) \leq \text{fit}(x_e)$. So H1 is satisfied.
Proposition 2 (P2).
The QDDS algorithm satisfies Hypothesis 2.
Recall that in Equation (14) the even solutions to the double delta potential well setup take on the form given below.
$$\psi_{\text{even}}(r) = \begin{cases} \eta_2\left(1 + e^{2ka}\right)e^{-kr}, & r > a \\ \eta_2\left(e^{kr} + e^{-kr}\right), & -a < r < a \\ \eta_2\left(1 + e^{2ka}\right)e^{kr}, & r < -a \end{cases}$$
$$\lambda(r_{i,j,t}) = \psi^2_{\text{even},i,j,t}(r) = \begin{cases} \eta_2^2\left(e^{-2kr} + e^{2k(2a - r)} + 2e^{2k(a - r)}\right), & r > a \\ \eta_2^2\left(e^{2kr} + e^{-2kr} + 2\right), & -a < r < a \\ \eta_2^2\left(e^{2kr} + e^{2k(2a + r)} + 2e^{2k(a + r)}\right), & r < -a \end{cases}$$
$\psi^2_{\text{even},i,j,t}(r)$ is a measure of the probability density function of a particle in a particular dimension, and integrating it across all dimensions yields the corresponding cumulative distribution function $\Lambda_{i,t}(\text{Set})$:
$$\Lambda_{i,t}(\text{Set}) = \int_{\text{Set}} \left\{\prod_{j=1}^{d} \lambda(r_{i,j,t})\right\} dr_{i,1,t}\, dr_{i,2,t} \cdots dr_{i,d,t}$$
Observe that as $r \to \pm\infty$, the probability measure $\psi^2_{\text{even}}(r)$ goes to zero for $r \in (-\infty, -a) \cup (a, \infty)$ and is bounded in the region $-a < r < a$.
$$\lim_{r \to \pm\infty} \lambda(r_{i,j,t}) = 0$$
$$0 < \Lambda(\text{Set}) < 1$$
$$\Lambda_t(\text{Set}) = \prod_{i=1}^{n} \Lambda_{i,t}$$
$$\prod_{t=0}^{\infty}\left\{1 - \Lambda_t(\text{Set})\right\} = 0$$
Thus, H2 is also satisfied. This in turn implies that Theorem 1, the global convergence theorem for random search algorithms, holds and that £ is globally convergent.

9. Concluding Remarks

The Chaotic Quantum Double Delta Swarm (C-QDDS) Algorithm is an extension of QDDS in a double Dirac delta potential well setup and uses a Chebyshev map driven solution update. The evolutionary behavior of QDDS is simple to follow from an intuitive point of view and guides the particle set towards lower energy configurations under the influence of a spatially co-located attractive double delta potential. The current gradient-dependent formulation is susceptible to getting trapped in suboptimal results because of the use of a gradient descent scheme in the $\delta_t$ computation phase. In addition, the algorithm is expensive in terms of time complexity because of the numerical approximation of $r_t$ from $\delta_t$ in the transcendental Equation (25), as also outlined in Algorithm 1. As outlined in [1], the impact of cognition and social attractors, initial tessellation configurations, multiscale topological communication schemes and correction (update) processes needs to be studied to provide more insightful comments on the optimization of the workflow itself, specifically the stagnation issue and the high time complexity. In summary, the use of additional chaotic sequences in the heuristic evolution of QDDS based on this commonly used approximation abstraction from quantum physics remains to be further explored in light of the promising results obtained on some problems, as highlighted in this study. Further, the snowball effect on the dynamics due to the selection of a varying number of agents and selective communication among them over a user-defined number of generations is a thrust area gaining prominence, as demonstrated in recent studies [32,33]. As we continue to further our understanding of how emergent properties arise out of simple, local-level interactions at the lowest hierarchical levels, we may expect the evolutionary computation community to increasingly consider scale-free interactions among atomic agents on top of the existing, already rich body of research on biomimicry. The proposed paradigm is well-suited for application to single-objective unimodal/multimodal optimization problems such as those discussed in [8,13,14,34], along the lines of digital filtering, fuzzy clustering, scheduling, routing, etc. The QDDS and, subsequently, C-QDDS approaches build on a growing corpus of algorithms hybridizing quantum swarm intelligence and global optimization and add to the existing collection of nature-inspired optimization techniques.

Author Contributions

S.S. put forward the structure and organization of the article and created the content in all sections. S.S. ran the experiments and performed testing and convergence analyses and commented on the chaotic behavior of the algorithm. S.B. carried out precision testing and trajectory tracking analyses. Both S.S. and S.B. contributed to the final version of the article. R.A.P.II commented on the mathematical nature of the chaotic processes and provided critical analyses of assumptions in an advisory capacity. All authors have approved the final version of the article.

Funding

This research received no external funding.

Acknowledgments

This work was made possible by the financial and computing support by the Vanderbilt University Department of EECS.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Sengupta, S.; Basak, S.; Peters, R.A. QDDS: A Novel Quantum Swarm Algorithm Inspired by a Double Dirac Delta Potential. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence; arXiv:1807.02870, in press.
2. Sun, J.; Feng, B.; Xu, W.B. Particle swarm optimization with particles having quantum behavior. In Proceedings of the IEEE Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; pp. 325–331.
3. Sun, J.; Xu, W.B.; Feng, B. A global search strategy of quantum-behaved particle swarm optimization. In Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 1–3 December 2004; pp. 111–116.
4. Xi, M.; Sun, J.; Xu, W. An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Appl. Math. Comput. 2008, 205, 751–759.
5. Solis, F.J.; Wets, R.J.-B. Minimization by random search techniques. Math. Oper. Res. 1981, 6, 19–30.
6. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995.
7. Clerc, M.; Kennedy, J. The particle swarm: Explosion, stability, and convergence in a multi-dimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73.
8. Sengupta, S.; Basak, S.; Peters, R.A., II. Particle Swarm Optimization: A Survey of Historical and Recent Developments with Hybridization Perspectives. Mach. Learn. Knowl. Extr. 2018, 1, 157–191.
9. Khare, A.; Rangnekar, S. A review of particle swarm optimization and its applications in Solar Photovoltaic system. Appl. Soft Comput. 2013, 13, 2997–3006.
10. Zhang, Y.; Wang, S.; Ji, G. A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications. Math. Probl. Eng. 2015, 2015, 931256.
11. Ab Wahab, M.N.; Nefti-Meziani, S.; Atyabi, A. A Comprehensive Review of Swarm Optimization Algorithms. PLoS ONE 2015, 10, e0122827.
12. Xi, M.; Wu, X.; Sheng, X.; Sun, J.; Xu, W. Improved quantum-behaved particle swarm optimization with local search strategy. J. Algorithms Comput. Technol. 2017, 11, 3–12.
13. Sengupta, S.; Basak, S. Computationally efficient low-pass FIR filter design using Cuckoo Search with adaptive Levy step size. In Proceedings of the 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), Jalgaon, India, 22–24 December 2016; pp. 324–329.
14. Dhabal, S.; Sengupta, S. Efficient design of high pass FIR filter using quantum-behaved particle swarm optimization with weighted mean best position. In Proceedings of the 2015 Third International Conference on Computer, Communication, Control and Information Technology (C3IT), Hooghly, India, 7–8 February 2015; pp. 1–6.
15. Griffiths, D.J. Introduction to Quantum Mechanics, 2nd ed.; Problem 2.27; Pearson Education: London, UK, 2005.
16. Basak, S. Lecture Notes, P303 (PE03) Quantum Mechanics I, National Institute of Science Education and Research, India. Available online: http://www.niser.ac.in/~sbasak/p303_2010/06.09.pdf (accessed on 10 March 2018).
17. Tatsumi, K.; Tetsuzo, T. A perturbation based chaotic system exploiting the quasi-Newton method for global optimization. Int. J. Bifurc. Chaos 2017, 27, 1750047.
18. He, D.; He, C.; Jiang, L.; Zhu, H.; Hu, G. Chaotic characteristics of a one-dimensional iterative map with infinite collapses. IEEE Trans. Circuits Syst. 2001, 48, 900–906.
19. Coelho, L.; Mariani, V.C. Use of chaotic sequences in a biologically inspired algorithm for engineering design optimization. Expert Syst. Appl. 2008, 34, 1905–1913.
20. Gandomi, A.; Yang, X.-S.; Talatahari, S.; Alavi, A. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 89–98.
21. Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34.
22. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
23. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 24, 1053–1073.
24. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
25. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
26. Yang, X.-S. Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications, SAGA 2009; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792, pp. 169–178.
27. Xie, Z.; Liu, Q.; Xu, L. A New Quantum-Behaved PSO: Based on Double δ-Potential Wells Model. In Proceedings of the 2016 Chinese Intelligent Systems Conference, CISC 2016, Xiamen, China, 22–23 October 2016; Lecture Notes in Electrical Engineering; Springer: Singapore, 2016; Volume 404, pp. 211–219.
28. Han, P.; Yuan, S.; Wang, D. Thermal System Identification Based on Double Quantum Particle Swarm Optimization. In Proceedings of the Intelligent Computing in Smart Grid and Electrical Vehicles, ICSEE 2014, LSMS 2014, Communications in Computer and Information Science, Shanghai, China, 20–23 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; Volume 463, pp. 125–137.
29. Jia, P.; Duan, S.; Yan, J. An enhanced quantum-behaved particle swarm optimization based on a novel computing way of local attractor. Information 2015, 6, 633–649.
30. Van den Bergh, F.; Engelbrecht, A. A convergence proof for the particle swarm optimiser. Fundam. Inform. 2010, 105, 341–374.
31. Fang, W.; Sun, J.; Chen, H.; Wu, X. A decentralized quantum-inspired particle swarm optimization algorithm with cellular structured population. Inf. Sci. 2016, 330, 19–48.
32. Gao, Y.; Du, W.; Yan, G. Selectively-informed particle swarm optimization. Sci. Rep. 2015, 5, 9295.
33. Liu, C.; Du, W.B.; Wang, W.X. Particle Swarm Optimization with Scale-Free Interactions. PLoS ONE 2014, 9, e97822.
34. Sengupta, S.; Basak, S.; Peters, R.A. Data Clustering using a Hybrid of Fuzzy C-Means and Quantum-behaved Particle Swarm Optimization. In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 137–142.
Figure 1. The double well potential setup.
Figure 2. Generated weights ($\rho^{\text{Chebyshev}}$) from a Chebyshev chaotic map over 1000 iterations.
Figure 3. Histogram of generated weights ($\rho^{\text{Chebyshev}}$) from the Chebyshev map over 1000 iterations.
Figure 4. Schematic of the Chaotic Quantum Double Delta Swarm (C-QDDS) workflow.
Table 1. General terms used in the context of the algorithms and experimentation.
Some General Terms
Population (X): The collection or 'swarm' of agents employed in the search space
Fitness Function (f): A measure of convergence efficiency
Current Iteration: The ongoing iteration among a batch of dependent/independent runs
Maximum Iteration Count: The maximum number of times runs are to be performed
Particle Swarm Optimization (PSO)
Position (X): Position value of an individual swarm member in multidimensional space
Velocity (v): Velocity values of individual swarm members
Cognitive Accl. Coefficient (C1): Empirically found scale factor of the pBest attractor
Social Accl. Coefficient (C2): Empirically found scale factor of the gBest attractor
Personal Best (pBest): Position corresponding to the historically best fitness for a swarm member
Global Best (gBest): Position corresponding to the best fitness over history for swarm members
Inertia Weight Coefficient (ω): Facilitates and modulates exploration in the search space
Cognitive Random Perturbation (r1): Random noise injector in the Personal Best attractor
Social Random Perturbation (r2): Random noise injector in the Global Best attractor
Quantum-behaved Particle Swarm Optimization (QPSO)
Local Attractor: Set of local attractors in all dimensions
Characteristic Length: Measure of the scales on which significant variations occur
Contraction–Expansion Parameter (β): Scale factor influencing the convergence speed of QPSO
Mean Best: Mean of personal bests across all particles, akin to leader election in species
Quantum Double Delta Swarm Optimization (QDDS)
ρ: Component towards the global best position gbest
ψ(r): Wave function in the Schrödinger equation
ψ_even(r): Even solutions to the Schrödinger equation for the double delta potential well
V(r): Potential function
Λ: Limiter
δ_iter: Characteristic constraint
ν: A small fraction between 0 and 1 chosen at will
R1: r ∈ (−∞, −a): Region 1
R2: r ∈ (−a, a): Region 2
R3: r ∈ (a, ∞): Region 3
α: Learning rate
ρ_iter^Chebyshev: Component towards the global best gbest drawn from the Chebyshev map
μ: Depth of the wells
a: Coordinate of the wells
Table 2. Unimodal test functions considered for testing.
F1, Sphere: $f(x) = \sum_{i=1}^{n} x_i^2$; range [−100, 100]; f(x*) = 0
F2, Schwefel's Problem 2.22: $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$; range [−10, 10]; f(x*) = 0
F3, Schwefel's Problem 1.2: $f(x) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$; range [−100, 100]; f(x*) = 0
F4, Schwefel's Problem 2.21: $f(x) = \max_i \{|x_i|,\ 1 \leq i \leq n\}$; range [−100, 100]; f(x*) = 0
F5, Generalized Rosenbrock's Function: $f(x) = \sum_{i=1}^{n-1} \left[100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$; range [−n, n]; f(x*) = 0
F6, Step Function: $f(x) = \sum_{i=1}^{n} \left(\lfloor x_i + 0.5 \rfloor\right)^2$; range [−100, 100]; f(x*) = 0
F7, Quartic Function, i.e., Noise: $f(x) = \sum_{i=1}^{n} i\,x_i^4 + \text{random}[0, 1)$; range [−1.28, 1.28]; f(x*) = 0
Note: x* denotes the globally optimum argument.
Table 3. Multimodal test functions considered for testing.
F8, Generalized Schwefel's Problem 2.26: $f(x) = -\sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)$; range [−500, 500]; f(x*) = −12,569.5
F9, Generalized Rastrigin's Function: $f(x) = An + \sum_{i=1}^{n} \left[x_i^2 - A\cos(2\pi x_i)\right]$, A = 10; range [−5.12, 5.12]; f(x*) = 0
F10, Ackley's Function: $f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_i)\right) + 20 + \exp(1)$; range [−32.768, 32.768]; f(x*) = 0
F11, Generalized Griewank Function: $f(x) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)$; range [−600, 600]; f(x*) = 0
F12, Generalized Penalized Function 1: $f(x) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$; range [−50, 50]; f(x*) = 0
F13, Generalized Penalized Function 2: $f(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$; range [−50, 50]; f(x*) = 0
where $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \leq x_i \leq a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$ and $y_i = 1 + \frac{1}{4}(x_i + 1)$.
Note: x* denotes the globally optimum argument.
Table 4. Multimodal test functions with fixed dimensions considered for testing.
F14 (n = 2), Shekel's Foxholes Function: $f(x) = \left[\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right]^{-1}$, where the columns of $(a_{ij})$ run over the 25 grid points of $\{-32, -16, 0, 16, 32\}^2$; range [−65.536, 65.536]; f(x*) ≈ 1
F15 (n = 4), Kowalik's Function: $f(x) = \sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$, with coefficients defined in Table 5; range [−5, 5]; f(x*) ≈ 0.0003075
F16 (n = 2), Six-Hump Camel-Back Function: $f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$; range [−5, 5]; f(x*) = −1.0316285
F17 (n = 2), Branin Function: $f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$; range −5 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 15; f(x*) = 0.398
F18 (n = 2), Goldstein-Price Function: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right]\left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$; range [−2, 2]; f(x*) = 3
F19 (n = 3), Hartman's Family Function 1: $f(x) = -\sum_{i=1}^{4} c_i \exp\left[-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right]$, with coefficients defined in Table 6; range 0 ≤ x_j ≤ 1; f(x*) = −3.86
F20 (n = 6), Hartman's Family Function 2: $f(x) = -\sum_{i=1}^{4} c_i \exp\left[-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right]$, with coefficients defined in Table 7; range 0 ≤ x_j ≤ 1; f(x*) = −3.32
F21 (n = 4), Shekel's Family Function 1: $f(x) = -\sum_{i=1}^{5}\left[(x - a_i)(x - a_i)^T + c_i\right]^{-1}$, with coefficients defined in Table 8; range 0 ≤ x_j ≤ 10; $f(x^*_{\text{local}}) \approx -1/c_i$, 1 ≤ i ≤ m
F22 (n = 4), Shekel's Family Function 2: $f(x) = -\sum_{i=1}^{7}\left[(x - a_i)(x - a_i)^T + c_i\right]^{-1}$, with coefficients defined in Table 8; range 0 ≤ x_j ≤ 10; $f(x^*_{\text{local}}) \approx -1/c_i$, 1 ≤ i ≤ m
F23 (n = 4), Shekel's Family Function 3: $f(x) = -\sum_{i=1}^{10}\left[(x - a_i)(x - a_i)^T + c_i\right]^{-1}$, with coefficients defined in Table 8; range 0 ≤ x_j ≤ 10; $f(x^*_{\text{local}}) \approx -1/c_i$, 1 ≤ i ≤ m
Note: x* denotes the globally optimum argument; x*_local denotes a locally optimum argument.
Table 5. Coefficients of Kowalik's Function (F15).
i = 1: a_i = 0.1957, b_i^−1 = 0.25
i = 2: a_i = 0.1947, b_i^−1 = 0.5
i = 3: a_i = 0.1735, b_i^−1 = 1
i = 4: a_i = 0.1600, b_i^−1 = 2
i = 5: a_i = 0.0844, b_i^−1 = 4
i = 6: a_i = 0.0627, b_i^−1 = 6
i = 7: a_i = 0.0456, b_i^−1 = 8
i = 8: a_i = 0.0342, b_i^−1 = 10
i = 9: a_i = 0.0323, b_i^−1 = 12
i = 10: a_i = 0.0235, b_i^−1 = 14
i = 11: a_i = 0.0246, b_i^−1 = 16
Table 6. Coefficients of Hartman's Function (F19).
i = 1: a_ij = (3, 10, 30); c_i = 1; p_ij = (0.3689, 0.1170, 0.2673)
i = 2: a_ij = (0.1, 10, 35); c_i = 1.2; p_ij = (0.4699, 0.4387, 0.7470)
i = 3: a_ij = (3, 10, 30); c_i = 3; p_ij = (0.1091, 0.8732, 0.5547)
i = 4: a_ij = (0.1, 10, 35); c_i = 3.2; p_ij = (0.038150, 0.5743, 0.8828)
Table 7. Coefficients of Hartman's Function (F20).
i = 1: a_ij = (10, 3, 17, 3.5, 1.7, 8); c_i = 1; p_ij = (0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886)
i = 2: a_ij = (0.5, 10, 17, 0.1, 8, 14); c_i = 1.2; p_ij = (0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991)
i = 3: a_ij = (3, 3.5, 1.7, 10, 17, 8); c_i = 3; p_ij = (0.2348, 0.1415, 0.3522, 0.2883, 0.3047, 0.6650)
i = 4: a_ij = (17, 8, 0.05, 10, 0.1, 14); c_i = 3.2; p_ij = (0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381)
Table 8. Coefficients of Shekel's Functions (F21–F23).
i = 1: a_ij = (4, 4, 4, 4); c_i = 0.1
i = 2: a_ij = (1, 1, 1, 1); c_i = 0.2
i = 3: a_ij = (8, 8, 8, 8); c_i = 0.4
i = 4: a_ij = (6, 6, 6, 6); c_i = 0.4
i = 5: a_ij = (3, 7, 3, 7); c_i = 0.4
i = 6: a_ij = (2, 9, 2, 9); c_i = 0.6
i = 7: a_ij = (5, 5, 3, 3); c_i = 0.3
i = 8: a_ij = (8, 1, 8, 1); c_i = 0.7
i = 9: a_ij = (6, 2, 6, 2); c_i = 0.5
i = 10: a_ij = (7, 3.6, 7, 3.6); c_i = 0.5
Table 9. 3D Surface Plots of the Benchmark Functions F1–F23.
Table 10. Solution quality in unimodal functions in Table 2 (30D, 1000 iterations, 30 independent trials).
Fn | Stat | C-QDDS (Chebyshev Map) | SCA | DFA | ALO | WOA | FA | QPSO | PSO-I (damped inertia) | PSO-II (no damping)
F1 | Mean | 1.1956 × 10^−6 | 0.0055 | 469.8818 | 7.8722 × 10^−7 | 17.3824 | 3.5794 × 10^4 | 3.0365 × 10^3 | 109.5486 | 110.3989
F1 | Min | 5.1834 × 10^−7 | 1.0207 × 10^−7 | 23.9914 | 8.9065 × 10^−8 | 0.6731 | 3.0236 × 10^4 | 1.3286 × 10^3 | 39.3329 | 42.8825
F1 | Std | 2.8711 × 10^−7 | 0.0161 | 474.0822 | 1.0286 × 10^−6 | 19.6687 | 3.3373 × 10^3 | 920.4817 | 43.3127 | 54.7791
F2 | Mean | 0.0051 | 3.6862 × 10^−6 | 9.2230 | 27.8542 | 0.7846 | 3.4566 × 10^4 | 36.4162 | 4.2299 | 4.4102
F2 | Min | 0.0025 | 2.7521 × 10^−9 | 0 | 0.0029 | 0.0745 | 84.8978 | 21.7082 | 2.0290 | 2.1627
F2 | Std | 9.7281 × 10^−4 | 8.9681 × 10^−6 | 5.7226 | 42.2856 | 0.5303 | 1.3595 × 10^5 | 12.5312 | 1.1111 | 1.3804
F3 | Mean | 1.0265 × 10^−4 | 3.4383 × 10^3 | 6.3065 × 10^3 | 302.3783 | 1.0734 × 10^5 | 4.4017 × 10^4 | 3.0781 × 10^4 | 4.0409 × 10^3 | 3.4218 × 10^3
F3 | Min | 1.0184 × 10^−5 | 27.3442 | 310.7558 | 102.7732 | 5.0661 × 10^4 | 3.0021 × 10^4 | 1.8940 × 10^4 | 2.2416 × 10^3 | 1.9223 × 10^3
F3 | Std | 6.5905 × 10^−5 | 3.1641 × 10^3 | 4.7838 × 10^3 | 167.7687 | 4.0661 × 10^4 | 6.6498 × 10^3 | 5.9848 × 10^3 | 994.2550 | 997.4284
F4 | Mean | 3.6945 × 10^−4 | 12.8867 | 13.8222 | 8.8157 | 66.4261 | 68.4102 | 56.5926 | 12.8272 | 11.9252
F4 | Min | 1.4162 × 10^−4 | 1.4477 | 4.1775 | 2.0212 | 17.8904 | 62.9296 | 32.6744 | 10.2302 | 9.0857
F4 | Std | 9.8034 × 10^−5 | 8.1625 | 5.5197 | 3.0808 | 21.5187 | 2.6497 | 8.2985 | 1.5793 | 2.0508
F5 | Mean | 28.7211 | 60.7787 | 2.0123 × 10^4 | 143.9657 | 1.5976 × 10^3 | 7.4584 × 10^7 | 2.1204 × 10^6 | 6.0590 × 10^3 | 5.3377 × 10^3
F5 | Min | 28.7074 | 28.0932 | 44.0682 | 20.7989 | 39.9132 | 3.8917 × 10^7 | 5.0759 × 10^5 | 655.5618 | 1.2610 × 10^3
F5 | Std | 0.0077 | 55.2793 | 3.6793 × 10^4 | 288.1879 | 3.0458 × 10^3 | 2.0606 × 10^7 | 9.1390 × 10^5 | 4.2558 × 10^3 | 2.6303 × 10^3
F6 | Mean | 7.2332 | 4.2963 | 488.3942 | 6.0117 × 10^−7 | 30.0158 | 3.6216 × 10^4 | 3.6028 × 10^3 | 107.5196 | 116.9431
F6 | Min | 6.4389 | 3.3201 | 17.4978 | 8.9390 × 10^−8 | 0.8531 | 2.8838 × 10^4 | 1.8380 × 10^3 | 45.9374 | 28.9258
F6 | Std | 0.5612 | 0.4007 | 309.2795 | 6.2634 × 10^−7 | 44.1595 | 2.8434 × 10^3 | 986.7972 | 47.5633 | 49.5767
F7 | Mean | 0.0037 | 0.0289 | 0.1491 | 0.0541 | 0.1265 | 36.0335 | 1.4761 | 0.1737 | 0.1749
F7 | Min | 4.9685 × 10^−4 | 0.0010 | 0.0157 | 0.0210 | 0.0177 | 21.1334 | 0.3837 | 0.0697 | 0.0734
F7 | Std | 0.0023 | 0.0472 | 0.0918 | 0.0229 | 0.0993 | 7.5632 | 0.7718 | 0.0561 | 0.0690
Table 11. Solution quality in multimodal functions in Table 3 (30D, 1000 iterations, 30 independent trials).
Fn | Stat | C-QDDS (Chebyshev Map) | SCA | DFA | ALO | WOA | FA | QPSO | PSO-I (damped inertia) | PSO-II (no damping)
F8 | Mean | −602.2041 | −4.0397 × 10^3 | −6.001 × 10^3 | −5.5942 × 10^3 | −8.5061 × 10^3 | −3.8714 × 10^3 | −3.3658 × 10^3 | −5.1487 × 10^3 | −4.8821 × 10^3
F8 | Best | −975.5422 | −4.4739 × 10^3 | −8.9104 × 10^3 | −8.2843 × 10^3 | −1.0768 × 10^4 | −4.2603 × 10^3 | −5.0298 × 10^3 | −7.4208 × 10^3 | −6.6643 × 10^3
F8 | Std | 160.8409 | 214.0523 | 783.7255 | 515.1599 | 895.4642 | 204.0029 | 486.0400 | 766.3330 | 750.3092
F9 | Mean | 2.4873 × 10^−4 | 8.8907 | 124.0432 | 79.9945 | 116.4796 | 328.4011 | 248.0831 | 57.8114 | 57.1125
F9 | Best | 8.2194 × 10^−5 | 1.0581 × 10^−6 | 32.1699 | 45.7681 | 0.4305 | 308.3590 | 177.8681 | 19.1318 | 27.4985
F9 | Std | 6.3770 × 10^−5 | 16.2284 | 40.4730 | 22.2932 | 88.0344 | 10.1050 | 31.9501 | 15.1644 | 15.0292
F10 | Mean | 8.1297 × 10^−4 | 10.7873 | 6.0693 | 1.6480 | 1.1419 | 19.3393 | 12.3433 | 4.9951 | 4.9271
F10 | Best | 5.6777 × 10^−4 | 3.4267 × 10^−5 | 8.8818 × 10^−16 | 1.7296 × 10^−4 | 0.0265 | 18.4515 | 9.7835 | 3.9874 | 2.9208
F10 | Std | 8.8526 × 10^−5 | 9.6938 | 1.9141 | 0.9544 | 0.9926 | 0.2797 | 1.8413 | 0.6230 | 0.7957
F11 | Mean | 8.7473 × 10^−8 | 0.1770 | 5.0784 | 0.0082 | 1.1735 | 316.5026 | 33.5446 | 2.0669 | 2.0604
F11 | Best | 3.5705 × 10^−8 | 2.2966 × 10^−5 | 1.1727 | 2.5498 × 10^−5 | 0.9839 | 226.5205 | 11.9701 | 1.3636 | 1.3744
F11 | Std | 2.6504 × 10^−8 | 0.2195 | 4.5098 | 0.0093 | 0.2340 | 33.3806 | 12.5605 | 0.5989 | 0.5366
F12 | Mean | 0.0995 | 991.4301 | 12.2571 | 9.4380 | 642.0404 | 1.2629 × 10^8 | 5.6147 × 10^5 | 6.4329 | 6.5610
F12 | Best | 0 | 0.2878 | 1.6755 | 3.4007 | 0.0442 | 5.6104 × 10^7 | 4.0841 × 10^4 | 1.0266 | 2.8742
F12 | Std | 0.2621 | 5.4201 × 10^3 | 13.5218 | 3.9121 | 3.5039 × 10^3 | 4.4034 × 10^7 | 6.9761 × 10^5 | 2.7882 | 3.0003
F13 | Mean | 0.0105 | 3.1940 | 1.5156 × 10^4 | 0.0133 | 2.3405 × 10^3 | 2.8867 × 10^8 | 3.5568 × 10^6 | 38.1945 | 39.0369
F13 | Best | 0 | 1.8776 | 5.6609 | 2.7212 × 10^−7 | 0.3813 | 1.3101 × 10^8 | 6.8216 × 10^5 | 12.7653 | 15.4619
F13 | Std | 0.0576 | 2.2922 | 6.0811 × 10^4 | 0.0163 | 1.2373 × 10^4 | 8.1766 × 10^7 | 2.4393 × 10^6 | 15.2922 | 27.6751
Table 12. Solution quality in multimodal functions in Table 4 (fixed dimensions, 1000 iterations, 30 trials).

| Fn | Stat | C-QDDS (Chebyshev Map) | Sine Cosine Algorithm | Dragon Fly Algorithm | Ant Lion Optimizer | Whale Optimization | Firefly Algorithm | QPSO | PSO (w = 0.95·w) | PSO (no damping) |
|---|---|---|---|---|---|---|---|---|---|---|
| F14, n = 2 | Mean | 3.6771 | 1.3949 | 1.0311 | 1.2299 | 4.2524 | 1.0519 | 2.3561 | 2.7786 | 3.7082 |
| | Best | 1.0056 | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9980 | 0.9981 | 0.9980 | 0.9980 |
| | Std | 2.2295 | 0.8072 | 0.1815 | 0.4276 | 3.7335 | 0.1889 | 1.7188 | 2.2246 | 2.7536 |
| F15, n = 4 | Mean | 3.7361 × 10^−4 | 9.1075 × 10^−4 | 0.0016 | 0.0027 | 0.0051 | 0.0024 | 0.0030 | 0.0036 | 0.0034 |
| | Best | 3.1068 × 10^−4 | 3.1549 × 10^−4 | 4.7829 × 10^−4 | 4.0518 × 10^−4 | 3.4820 × 10^−4 | 0.0011 | 7.2169 × 10^−4 | 3.6642 × 10^−4 | 3.0858 × 10^−4 |
| | Std | 5.0123 × 10^−5 | 4.2242 × 10^−4 | 0.0014 | 0.0060 | 0.0076 | 0.0012 | 0.0059 | 0.0063 | 0.0068 |
| F16, n = 2 | Mean | −0.5487 | −1.0316 | −1.0316 | −1.0316 | −1.0315 | −1.0295 | −1.0316 | −1.0316 | −1.0316 |
| | Best | −1.0315 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| | Std | 0.4275 | 1.1863 × 10^−5 | 1.4229 × 10^−6 | 3.6950 × 10^−14 | 3.3613 × 10^−4 | 0.0030 | 1.1009 × 10^−4 | 8.2108 | 2.7251 × 10^−13 |
| F17, n = 2 | Mean | 0.4721 | 0.3983 | 0.3979 | 0.3979 | 0.4069 | 0.4002 | 0.4000 | 0.3979 | 0.3979 |
| | Best | 0.3989 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 | 0.3979 |
| | Std | 0.0920 | 4.8435 × 10^−4 | 4.9327 × 10^−8 | 2.3588 × 10^−14 | 0.0179 | 0.0020 | 0.0043 | 5.0770 × 10^−10 | 2.1067 × 10^−8 |
| F18, n = 2 | Mean | 3.8438 | 3 | 3 | 3 | 3.9278 | 3.0402 | 3.0007 | 3.0000 | 3.0000 |
| | Best | 3.0080 | 3 | 3 | 3 | 3.0000 | 3.0002 | 3.0000 | 3.0000 | 3.0000 |
| | Std | 0.9128 | 5.7657 × 10^−6 | 8.7817 × 10^−7 | 1.2869 × 10^−13 | 5.0752 | 0.0397 | 0.0017 | 1.0155 × 10^−11 | 5.8511 × 10^−11 |
| F19, n = 3 | Mean | −3.6805 | −3.8547 | −3.8625 | −3.8628 | −3.8246 | −3.8542 | −3.8628 | −3.8628 | −3.8628 |
| | Best | −3.8587 | −3.8626 | −3.8628 | −3.8628 | −3.8628 | −3.8625 | −3.8628 | −3.8628 | −3.8628 |
| | Std | 0.1942 | 0.0016 | 8.8455 × 10^−4 | 7.5193 × 10^−15 | 0.0657 | 0.0066 | 1.5043 × 10^−5 | 5.2841 × 10^−11 | 9.2140 × 10^−11 |
| F20, n = 6 | Mean | −2.2207 | −2.9961 | −3.2421 | −3.2705 | −3.0966 | −3.0645 | −3.2646 | −3.2625 | −3.2546 |
| | Best | −2.7562 | −3.2911 | −3.3220 | −3.3220 | −3.2610 | −3.2436 | −3.3219 | −3.3220 | −3.3220 |
| | Std | 0.29884 | 0.2060 | 0.0670 | 0.0599 | 0.1535 | 0.0911 | 0.0605 | 0.0605 | 0.0599 |
| F21, n = 4 | Mean | −3.1126 | −4.0962 | −9.0360 | −6.7752 | −6.5291 | −4.3198 | −5.8537 | −5.3955 | −5.4045 |
| | Best | −4.5610 | −5.3343 | −10.1532 | −10.1532 | −9.8465 | −7.5958 | −10.1474 | −10.1532 | −10.1532 |
| | Std | 0.7090 | 1.5519 | 1.9130 | 2.6824 | 1.9988 | 1.4599 | 3.5651 | 3.3029 | 3.4897 |
| F22, n = 4 | Mean | −3.2009 | −3.9949 | −10.0455 | −7.2979 | −6.3611 | −4.2776 | −6.7830 | −5.3236 | −6.3098 |
| | Best | −4.5933 | −7.9241 | −10.4029 | −10.4029 | −10.2432 | −9.2741 | −10.3974 | −10.4029 | −10.4029 |
| | Std | 0.7098 | 2.1774 | 1.3422 | 3.0440 | 2.3852 | 1.6527 | 3.5783 | 3.2000 | 3.4602 |
| F23, n = 4 | Mean | −2.3595 | −4.6650 | −9.9928 | −7.1691 | −5.2592 | −4.6959 | −7.5372 | −7.3175 | −5.1501 |
| | Best | −4.2043 | −7.7259 | −10.5364 | −10.5364 | −10.0617 | −8.5734 | −10.5344 | −10.5364 | −10.5364 |
| | Std | 0.8183 | 1.5038 | 1.6439 | 3.2926 | 2.5389 | 1.4647 | 3.6778 | 3.7753 | 3.4033 |
Table 13. Win/tie/loss count among competitors w.r.t. reported global best.

| Performance | Metric | C-QDDS (Chebyshev Map) | Sine Cosine Algorithm | Dragon Fly Algorithm | Ant Lion Optimizer | Whale Optimization | Firefly Algorithm | QPSO | PSO (w = 0.95·w) | PSO (no damping) |
|---|---|---|---|---|---|---|---|---|---|---|
| Win | Mean | 10 | 1 | 3 | 3 | 1 | 0 | 0 | 0 | 0 |
| | Best | 6 | 1 | 2 | 3 | 1 | 0 | 0 | 0 | 1 |
| | Std | 14 | 1 | 1 | 7 | 0 | 0 | 0 | 0 | 0 |
| Tie | Mean | 0 | 2 | 3 | 4 | 0 | 0 | 2 | 4 | 5 |
| | Best | 0 | 4 | 9 | 9 | 5 | 3 | 4 | 9 | 9 |
| | Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Lose | Mean | 13 | 20 | 17 | 17 | 22 | 23 | 21 | 19 | 18 |
| | Best | 17 | 18 | 12 | 13 | 18 | 20 | 19 | 14 | 13 |
| | Std | 9 | 22 | 22 | 17 | 23 | 23 | 23 | 23 | 23 |
Table 14. Average ranks based on win/tie/loss count among competitors w.r.t. reported global best.

| Performance | Metric | C-QDDS (Chebyshev Map) | Sine Cosine Algorithm | Dragon Fly Algorithm | Ant Lion Optimizer | Whale Optimization | Firefly Algorithm | QPSO | PSO (w = 0.95·w) | PSO (no damping) |
|---|---|---|---|---|---|---|---|---|---|---|
| Win | Mean | 1 | 3 | 2 | 2 | 3 | 4 | 4 | 4 | 4 |
| | Best | 1 | 4 | 3 | 2 | 4 | 5 | 5 | 5 | 4 |
| | Std | 1 | 3 | 3 | 2 | 4 | 4 | 4 | 4 | 4 |
| Tie | Mean | 5 | 4 | 3 | 2 | 5 | 5 | 4 | 2 | 1 |
| | Best | 5 | 3 | 2 | 2 | 3 | 4 | 3 | 2 | 1 |
| | Std | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Lose | Mean | 1 | 5 | 2 | 2 | 7 | 8 | 6 | 4 | 3 |
| | Best | 4 | 5 | 1 | 2 | 5 | 7 | 6 | 3 | 2 |
| | Std | 1 | 3 | 3 | 2 | 4 | 4 | 4 | 4 | 3 |
| Average Rank | Mean | 2.333 | 4 | 2.333 | 2 | 5 | 5.666 | 4.666 | 3.333 | 2.666 |
| | Best | 3.333 | 4 | 2 | 2 | 4 | 5.333 | 4.666 | 3.333 | 2.333 |
| | Std | 1 | 2.333 | 2.333 | 1.666 | 3 | 3 | 3 | 3 | 2.666 |
Table 15. Results of the two-tailed t-test for C-QDDS vs. competitors.

t values are shown; t_critical = 2.001717. Null hypothesis: (µ_C-QDDS − µ_Competitor) > 0. Entries with |t| > t_critical are statistically significant at the 0.05 level.

| Function | C-QDDS vs. SCA | C-QDDS vs. DFA | C-QDDS vs. ALO | C-QDDS vs. WOA | C-QDDS vs. FA | C-QDDS vs. QPSO | C-QDDS vs. PSO-II | C-QDDS vs. PSO-I |
|---|---|---|---|---|---|---|---|---|
| F1 | −1.8707 | −5.4287 | 2.094532 | −4.84055 | −58.7456 | −18.0684 | −13.8533 | −11.0385 |
| F2 | 28.69263 | −8.82265 | −3.60728 | −8.05108 | −1.39261 | −15.9148 | −20.8264 | −17.4788 |
| F3 | −5.95188 | −7.22065 | −9.87189 | −14.4592 | −36.2554 | −28.1704 | −22.2608 | −18.7903 |
| F4 | −8.64702 | −13.7155 | −15.6724 | −16.9076 | −141.411 | −37.3523 | −44.4852 | −31.8485 |
| F5 | −3.17636 | −2.99135 | −2.19031 | −2.8213 | −19.825 | −12.7079 | −7.76098 | −11.0552 |
| F6 | 23.32769 | −8.52117 | 70.59491 | −2.82556 | −69.7487 | −19.9572 | −11.5478 | −12.12 |
| F7 | −2.92082 | −8.67254 | −11.9943 | −6.77163 | −26.0926 | −10.4491 | −16.5837 | −13.5823 |
| F8 | 70.32003 | 36.96027 | 50.66345 | 47.58374 | 68.92735 | 29.56635 | 31.80233 | 30.54904 |
| F9 | −3.0006 | −16.7868 | −19.6538 | −7.24698 | −178.004 | −42.529 | −20.8808 | −20.8139 |
| F10 | −6.09462 | −17.3651 | −9.45308 | −6.29659 | −378.696 | −36.7146 | −43.9082 | −33.9102 |
| F11 | −4.41671 | −6.1678 | −4.82933 | −27.4681 | −51.933 | −14.6277 | −18.9028 | −21.0311 |
| F12 | −1.00178 | −4.92371 | −13.0453 | −1.00347 | −15.7087 | −4.40833 | −12.3869 | −11.7511 |
| F13 | −7.60459 | −1.36509 | −0.25619 | −1.03608 | −19.337 | −7.98647 | −13.6763 | −7.72376 |
| F14 | 5.271808 | 6.47901 | 5.904436 | −0.72462 | 6.426319 | 2.570191 | 1.562548 | −0.04808 |
| F15 | −6.9162 | −4.79494 | −2.12362 | −3.40618 | −9.2411 | −2.4381 | −2.80494 | −2.43761 |
| F16 | 6.187023 | 6.187023 | 6.187023 | 6.18574 | 6.159965 | 6.187023 | 6.187023 | 6.187023 |
| F17 | 4.393627 | 4.417501 | 4.417501 | 3.810236 | 4.27956 | 4.287797 | 4.417501 | 4.417501 |
| F18 | 5.063193 | 5.063193 | 5.063193 | −0.08922 | 4.81742 | 5.058984 | 5.063193 | 5.063193 |
| F19 | 4.912978 | 5.133083 | 5.141597 | 3.849854 | 4.896216 | 5.141597 | 5.141597 | 5.141597 |
| F20 | 11.70106 | 18.26704 | 18.86578 | 14.28008 | 14.7933 | 18.75247 | 18.71474 | 18.58005 |
| F21 | 3.157566 | 15.90258 | 7.230404 | 8.823441 | 4.074111 | 4.13039 | 3.701433 | 3.525209 |
| F22 | 1.898948 | 24.69127 | 7.179345 | 6.955444 | 3.278706 | 5.378252 | 3.547072 | 4.820763 |
| F23 | 7.375911 | 22.76815 | 7.76455 | 5.953976 | 7.627314 | 7.526917 | 7.029854 | 4.366702 |
| Significantly better | 9 | 12 | 10 | 11 | 12 | 13 | 13 | 13 |
| Significantly worse | 11 | 10 | 12 | 8 | 10 | 10 | 9 | 9 |
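These values are consistent with the standard two-sample t statistic computed from the summary statistics of Tables 10–12 with n = 30 trials per algorithm; the quoted t_critical = 2.001717 matches the two-tailed 0.05 critical value at 58 degrees of freedom. A minimal Python sketch (ours), with the F1 C-QDDS vs. SCA entry as a worked check:

```python
import math

def two_sample_t(mean1, std1, mean2, std2, n1=30, n2=30):
    """Two-sample t statistic from summary statistics. With n1 == n2 the
    pooled (equal-variance) form reduces to this same expression."""
    se = math.sqrt(std1**2 / n1 + std2**2 / n2)
    return (mean1 - mean2) / se

# F1, C-QDDS vs. SCA (means/stds from Table 10):
t = two_sample_t(1.1956e-6, 2.8711e-7, 0.0055, 0.0161)
print(round(t, 4))  # -> -1.8707, matching the Table 15 entry
```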
Table 16. Cohen's d values for C-QDDS vs. competitors.

Cohen's d values, where $d = \dfrac{\mu_{CQDDS} - \mu_{Competitor}}{\sqrt{\left( S_{\mu_{CQDDS}}^{2} + S_{\mu_{Competitor}}^{2} \right)/2}}$.

| Function | C-QDDS vs. SCA | C-QDDS vs. DFA | C-QDDS vs. ALO | C-QDDS vs. WOA | C-QDDS vs. FA | C-QDDS vs. QPSO | C-QDDS vs. PSO-II | C-QDDS vs. PSO-I |
|---|---|---|---|---|---|---|---|---|
| F1 | −0.483 | −1.4017 | 0.5408 | −1.2498 | −15.1681 | −4.6652 | −3.5769 | −2.8501 |
| F2 | 7.4084 | −2.278 | −0.9314 | −2.0788 | −0.3596 | −4.1092 | −5.3773 | −4.513 |
| F3 | −1.5368 | −1.8644 | −2.5489 | −3.7333 | −9.3611 | −7.2736 | −5.7477 | −4.8516 |
| F4 | −2.2327 | −3.5413 | −4.0466 | −4.3655 | −36.5121 | −9.6443 | −11.486 | −8.2233 |
| F5 | −0.8201 | −0.7724 | −0.5655 | −0.7285 | −5.1188 | −3.2812 | −2.0039 | −2.8544 |
| F6 | 6.0232 | −2.2002 | 18.2275 | −0.7296 | −18.009 | −5.1529 | −2.9816 | −3.1294 |
| F7 | −0.7542 | −2.2392 | −3.0969 | −1.7484 | −6.7371 | −2.698 | −4.2819 | −3.5069 |
| F8 | 18.1566 | 9.5431 | 13.0812 | 12.2861 | 17.797 | 7.634 | 8.2113 | 7.8877 |
| F9 | −0.7748 | −4.3343 | −5.0746 | −1.8712 | −45.9603 | −10.9809 | −5.3914 | −5.3741 |
| F10 | −1.5736 | −4.4836 | −2.4408 | −1.6258 | −97.7789 | −9.4797 | −11.3371 | −8.7556 |
| F11 | −1.1404 | −1.5925 | −1.2469 | −7.0922 | −13.4091 | −3.7769 | −4.8807 | −5.4302 |
| F12 | −0.2587 | −1.2713 | −3.3683 | −0.2591 | −4.056 | −1.1382 | −3.1983 | −3.0341 |
| F13 | −1.9635 | −0.3525 | −0.0661 | −0.2675 | −4.9928 | −2.0621 | −3.5312 | −1.9943 |
| F14 | 1.3612 | 1.6729 | 1.5245 | −0.1871 | 1.6593 | 0.6636 | 0.4034 | −0.0124 |
| F15 | −1.7858 | −1.238 | −0.5483 | −0.8795 | −2.386 | −0.6295 | −0.7242 | −0.6294 |
| F16 | 1.5975 | 1.5975 | 1.5975 | 1.5972 | 1.5905 | 1.5975 | 1.5975 | 1.5975 |
| F17 | 1.1344 | 1.1406 | 1.1406 | 0.9838 | 1.105 | 1.1071 | 1.1406 | 1.1406 |
| F18 | 1.3073 | 1.3073 | 1.3073 | −0.023 | 1.2439 | 1.3062 | 1.3073 | 1.3073 |
| F19 | 1.2685 | 1.3254 | 1.3276 | 0.994 | 1.2642 | 1.3276 | 1.3276 | 1.3276 |
| F20 | 3.0212 | 4.7165 | 4.8711 | 3.6871 | 3.8196 | 4.8419 | 4.8321 | 4.7973 |
| F21 | 0.8153 | 4.106 | 1.8669 | 2.2782 | 1.0519 | 1.0665 | 0.9557 | 0.9102 |
| F22 | 0.4903 | 6.3753 | 1.8537 | 1.7959 | 0.8466 | 1.3887 | 0.9158 | 1.2447 |
| F23 | 1.9045 | 5.8787 | 2.0048 | 1.5373 | 1.9694 | 1.9434 | 1.8151 | 1.1275 |
Table 17. Hedges' g values for C-QDDS vs. competitors.

Hedges' g values, where $g = \dfrac{\mu_{CQDDS} - \mu_{Competitor}}{\sqrt{\dfrac{(n_{1} - 1)\, S_{\mu_{CQDDS}}^{2} + (n_{2} - 1)\, S_{\mu_{Competitor}}^{2}}{n_{1} + n_{2} - 2}}}$.

| Function | C-QDDS vs. SCA | C-QDDS vs. DFA | C-QDDS vs. ALO | C-QDDS vs. WOA | C-QDDS vs. FA | C-QDDS vs. QPSO | C-QDDS vs. PSO-II | C-QDDS vs. PSO-I |
|---|---|---|---|---|---|---|---|---|
| F1 | −0.6716 | −1.949 | 0.752 | −1.7378 | −21.0904 | −6.4867 | −4.9735 | −3.9629 |
| F2 | 10.301 | −3.1674 | −1.2951 | −2.8905 | −0.5 | −5.7136 | −7.4768 | −6.2751 |
| F3 | −2.1368 | −2.5923 | −3.5441 | −5.1909 | −13.0161 | −10.1135 | −7.9919 | −6.7459 |
| F4 | −3.1044 | −4.924 | −5.6266 | −6.07 | −50.768 | −13.4099 | −15.9706 | −11.434 |
| F5 | −1.1403 | −1.074 | −0.7863 | −1.0129 | −7.1174 | −4.5623 | −2.7863 | −3.9689 |
| F6 | 8.3749 | −3.0593 | 25.3443 | −1.0145 | −25.0405 | −7.1648 | −4.1457 | −4.3513 |
| F7 | −1.0487 | −3.1135 | −4.3061 | −2.4311 | −9.3676 | −3.7514 | −5.9537 | −4.8761 |
| F8 | 25.2457 | 13.2691 | 18.1887 | 17.0831 | 24.7457 | 10.6146 | 11.4173 | 10.9674 |
| F9 | −1.0773 | −6.0266 | −7.0559 | −2.6018 | −63.9052 | −15.2683 | −7.4964 | −7.4724 |
| F10 | −2.188 | −6.2342 | −3.3938 | −2.2606 | −135.956 | −13.181 | −15.7636 | −12.1742 |
| F11 | −1.5857 | −2.2143 | −1.7337 | −9.8613 | −18.6446 | −5.2516 | −6.7863 | −7.5504 |
| F12 | −0.3597 | −1.7677 | −4.6834 | −0.3603 | −5.6396 | −1.5826 | −4.4471 | −4.2187 |
| F13 | −2.7301 | −0.4901 | −0.0919 | −0.3719 | −6.9422 | −2.8672 | −4.9099 | −2.773 |
| F14 | 1.8927 | 2.3261 | 2.1197 | −0.2602 | 2.3072 | 0.9227 | 0.5609 | −0.0172 |
| F15 | −2.4831 | −1.7214 | −0.7624 | −1.2229 | −3.3176 | −0.8753 | −1.007 | −0.8751 |
| F16 | 2.2212 | 2.2212 | 2.2212 | 2.2208 | 2.2115 | 2.2212 | 2.2212 | 2.2212 |
| F17 | 1.5773 | 1.5859 | 1.5859 | 1.3679 | 1.5364 | 1.5394 | 1.5859 | 1.5859 |
| F18 | 1.8177 | 1.8177 | 1.8177 | −0.032 | 1.7296 | 1.8162 | 1.8177 | 1.8177 |
| F19 | 1.7638 | 1.8429 | 1.846 | 1.3821 | 1.7578 | 1.846 | 1.846 | 1.846 |
| F20 | 4.2008 | 6.558 | 6.773 | 5.1267 | 5.3109 | 6.7324 | 6.7188 | 6.6704 |
| F21 | 1.1336 | 5.7092 | 2.5958 | 3.1677 | 1.4626 | 1.4829 | 1.3288 | 1.2656 |
| F22 | 0.6817 | 8.8645 | 2.5775 | 2.4971 | 1.1771 | 1.9309 | 1.2734 | 1.7307 |
| F23 | 2.6481 | 8.174 | 2.7876 | 2.1375 | 2.7383 | 2.7022 | 2.5238 | 1.5677 |
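The following is a minimal Python sketch of the two effect-size definitions exactly as printed above, again computed from the reported per-algorithm means and standard deviations with n1 = n2 = 30; the function names are ours. Only the Cohen's d entry for F1, C-QDDS vs. SCA is cross-checked against Table 16 here:

```python
import math

def cohens_d(mean1, std1, mean2, std2):
    """Cohen's d with the averaged-variance denominator given for Table 16."""
    pooled = math.sqrt((std1**2 + std2**2) / 2.0)
    return (mean1 - mean2) / pooled

def hedges_g(mean1, std1, mean2, std2, n1=30, n2=30):
    """Hedges' g with the (n-1)-weighted pooled standard deviation given for
    Table 17, transcribed as printed."""
    pooled = math.sqrt(((n1 - 1) * std1**2 + (n2 - 1) * std2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# F1, C-QDDS vs. SCA (means/stds from Table 10):
print(round(cohens_d(1.1956e-6, 2.8711e-7, 0.0055, 0.0161), 3))  # -> -0.483
```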
Table 18. Precision plots (fraction of successful runs vs. cost range) for the 23 benchmark functions.
[The precision plots appear as a single image panel in the original article and are not reproduced here.]
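The precision plots report, for each benchmark, the fraction of the 30 independent runs whose final cost falls below each point in a swept cost range. A minimal, illustrative Python sketch of that bookkeeping (the synthetic costs and threshold grid are our assumptions, not the paper's data):

```python
import numpy as np

def success_fraction(final_costs, thresholds):
    """For each threshold c, the fraction of runs whose final cost is <= c."""
    final_costs = np.asarray(final_costs, dtype=float)
    return [(final_costs <= c).mean() for c in thresholds]

# Illustration: 30 synthetic final costs swept over a log-spaced cost range
rng = np.random.default_rng(0)
costs = rng.lognormal(mean=-3.0, sigma=1.0, size=30)
grid = np.logspace(-4, 1, 20)
for c, frac in zip(grid, success_fraction(costs, grid)):
    print(f"{c:.4e}  {frac:.2f}")
```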
Table 19. Trajectory of the best solutions for the 23 benchmark functions.
[The trajectory plots appear as a single image panel in the original article and are not reproduced here.]
