Article

Adaptive-Uniform-Experimental-Design-Based Fractional-Order Particle Swarm Optimizer with Non-Linear Time-Varying Evolution

Po-Yuan Yang, Fu-I Chou, Jinn-Tsong Tsai and Jyh-Horng Chou
1 Department of Information Engineering and Computer Science, Feng Chia University, 100, Wen-Hwa Road, Taichung 407, Taiwan
2 Department of Electrical Engineering, National Kaohsiung University of Science and Technology, 415 Chien-Kung Road, Kaohsiung 807, Taiwan
3 Department of Automation Engineering, National Formosa University, 64, Wun-Hua Road, Yunlin 632, Taiwan
4 Department of Computer Science, National Pingtung University, 4-18 Min-Sheng Road, Pingtung 900, Taiwan
5 Department of Mechanical Engineering, National Chung-Hsing University, 145 Xing-Da Road, Taichung 402, Taiwan
6 Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, 100 Shi-Quan 1st Road, Kaohsiung 807, Taiwan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2019, 9(24), 5537; https://doi.org/10.3390/app9245537
Submission received: 22 November 2019 / Revised: 9 December 2019 / Accepted: 12 December 2019 / Published: 16 December 2019
(This article belongs to the Special Issue Selected Papers from IMETI 2018)

Abstract
An adaptive-uniform-experimental-design-based fractional particle swarm optimizer (AUFPSO) with non-linear time-varying evolution (NTE) is proposed. The particle swarm optimizer (PSO) is an excellent evolutionary algorithm due to its simple structure and rapid convergence, but it has notable drawbacks: despite the many methods and strategies proposed to enhance its effectiveness and performance, PSO tends to fall into local optima and to return different solutions in each search (i.e., it has weak robustness). Introducing fractional-order calculus in PSO (FPSO) allows the order of the velocity derivative of each particle to be adjusted, which enhances diversity and algorithmic effectiveness. This study applied NTE to the order of the velocity derivative, the inertia weight, the cognitive parameter, and the social parameter of an FPSO used to search for a global optimal solution. To obtain the best combination of FPSO and NTE, an adaptive uniform experimental design (AUED) method was used to deal with this essential issue. The AUED method integrates a ten-level uniform layout with a best-combination phase and a stepwise ratio to assist in selecting the best combination for FPSO-NTE. Experiments on 15 global numerical optimization problems confirmed that the AUFPSO-NTE had better performance and robustness than existing PSO-related algorithms.

1. Introduction

Particle swarm optimization (PSO), first proposed in 1995 [1], is a swarm intelligence computational technique inspired by animal behavior such as the flocking of birds. Because of its many advantages, including fast convergence, a simple structure, and high accuracy, PSO is used to solve optimization problems such as the travelling salesman problem [2] and problems in many industrial and engineering domains [3], e.g., image processing [4], cloud computing [5], power systems [6,7], the chemical composition of a steel bar [8], and aircraft landing systems [9]. Although PSO is an excellent optimization algorithm, it still has drawbacks that must be addressed. Shi and Eberhart proposed an inertia weight, a linear time-varying inertia weight, and a random inertia weight to enhance the search ability [10,11,12]. Ratnaweera et al. [13] used time-varying acceleration coefficients, mutation, and a self-organizing hierarchy to prevent premature convergence in PSO. Chatterjee and Siarry [14] used a non-linear time-varying inertia weight to enhance convergence. Yang et al. [15] proposed an improved velocity update and assigned a different inertia weight to each particle. Ko et al. [16] applied the concept of a non-linear time-varying inertia weight, cognitive coefficient, and social coefficient. Ali and Kaelo [17] and Chen and Li [18] proposed strategies for adjusting the self-learning of each particle in a PSO. Huang [19] and Li [20] focused on improving the topology to enhance PSO performance. Tsai et al. [21] divided particles into several groups and allowed particles to be shared among the groups. Chen et al. [22] proposed aging leaders and challengers in PSO to overcome prematurity. Pehlivanoglu [23] proposed a periodic mutation strategy to increase diversity and to avoid falling into local optima. Li et al. [24] employed a new weighted particle to improve PSO. Wang et al. [25] developed a dual-factor strategy for increasing diversity. Cheng and Jin [26] improved the learning model of particles to increase search effectiveness. Lynn and Suganthan [27] divided the swarm population into two subpopulations that explore or exploit exemplars by using the personal best experiences of the particles or of the whole swarm, respectively. Tsai et al. [28] used a sliding-level Taguchi method to enhance PSO performance. Many other optimization algorithms have also been combined with PSO to increase its effectiveness and performance [29,30,31,32]. Although the above improved algorithms do raise PSO performance, their common limitations are a lack of robustness and a tendency to fall into local optima.
Moreover, all the above PSO-based algorithms rely on an integer-order derivative, which cannot fully describe natural phenomena, because fractional-order derivatives strongly influence many processes in nature that are observed, touched, and controlled by humans [33]. Therefore, some researchers have proposed using fractional-order derivatives to improve PSO performance. A fractional-order particle swarm optimizer (FPSO) was first proposed by Solteiro Pires et al. [34]. Solteiro Pires and coworkers [34,35] introduced the Grünwald–Letnikov definition of the fractional-order derivative in PSO and modified the velocity update; in FPSO, memories of past particle movements influence the next flight. Gao et al. [36], Couceiro and Ghamisi [37], Guo et al. [38], and Hosseini et al. [39] also applied a fractional order (FO) in PSO and Darwinian PSO (DPSO). The most recent FPSO studies have focused on applications such as direction-of-arrival estimation for electromagnetic plane waves [40], fractional-order filter design [41], fractional fixed-structure H∞ controller design [42], image segmentation [43,44,45,46], image border detection [47], the design of a complementary metal-oxide-semiconductor (CMOS) power amplifier [39], the design of an electric power transmission system [48], the optimization of a pressurized water reactor (PWR) core loading pattern [49], and the optimization of extreme learning machine assignments [50]. This paper proposes an FPSO algorithm with non-linear time-varying evolution (NTE) based on Ko et al. [16] and Solteiro Pires et al. [34]. In the proposed FPSO-NTE algorithm, NTE is used to set the inertia weight, cognitive coefficient, and social coefficient, and four constant coefficients influence the order of the velocity derivative, the inertia weight, the cognitive parameter, and the social parameter.
An important issue, therefore, is how to determine the best combination of the four constant coefficients. Proposed methods for determining the best combination include the trial-and-error method, one-factor-at-a-time experimental design, full-factorial experimental design, the Taguchi method [51], and uniform experimental design [52,53]. Although most of these methods and experimental designs are systematic, they all have limitations. To solve this problem, Tsai et al. [54] proposed a data-driven approach to uniform experimental design (DAUED) for optimizing the parameters of an auto-alignment machine. The DAUED method optimizes parameters by integrating a ten-level uniform layout with a best-combination stage and a stepwise ratio. The FPSO-NTE algorithm proposed in this study applies the underlying concept of DAUED to select the best combination of the four constant coefficients. Because no data-driven model is used here, the method is renamed the adaptive uniform experimental design (AUED) method in this study. The resulting AUFPSO-NTE algorithm automatically obtains the best combination of the four constant coefficients because AUED is integrated in FPSO-NTE.
This paper is organized as follows. FPSO and the proposed FPSO-NTE are briefly described in Section 2. Section 3 describes how AUED was used in FPSO-NTE to search for the best combination. Section 4 presents and discusses the experimental and simulation results. Finally, Section 5 concludes the study.

2. FPSO and FPSO-NTE

Solteiro Pires and coworkers [34,35] first proposed FPSO, which introduces a fractional-order derivative in PSO to rearrange and modify the order of the velocity derivative. Common definitions of the fractional-order derivative include the Riemann–Liouville, Caputo, and Grünwald–Letnikov definitions [55,56,57]. In the FPSO proposed by Solteiro Pires and coworkers [34,35], the order of the velocity derivative is derived from the Grünwald–Letnikov definition, as shown below:
$$D^{\lambda}[x(t)] = \lim_{h \to 0}\left[\frac{1}{h^{\lambda}}\sum_{k=0}^{+\infty}\frac{(-1)^{k}\,\Gamma(\lambda+1)\,x(t-kh)}{\Gamma(k+1)\,\Gamma(\lambda-k+1)}\right], \tag{1}$$
where D is the derivative operator, λ is the fractional order of the derivative, h is the time increment, Γ is the Gamma function, and x(t) is the function being differentiated.
For an application in discrete time, Expression (1) can be approximated using:
$$D^{\lambda}[x(t)] = \frac{1}{T^{\lambda}}\sum_{k=0}^{r}\frac{(-1)^{k}\,\Gamma(\lambda+1)\,x(t-kT)}{\Gamma(k+1)\,\Gamma(\lambda-k+1)}, \tag{2}$$
where T and r are the sampling period and the truncation order, respectively.
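To make Expression (2) concrete, the short Python sketch below (illustrative only; the function name and the test signal are our own) evaluates the truncated Grünwald–Letnikov sum for x(t) = t with λ = 0.5. The binomial-type coefficients are built with the recurrence c(k+1) = c(k)·(k − λ)/(k + 1), which is algebraically equivalent to the Gamma-function ratio in Expression (2) but avoids overflow for large k. The exact half-derivative of t is 2√(t/π) ≈ 1.1284.

import math

def gl_derivative(x, t, lam, T, r):
    # Truncated Grunwald-Letnikov approximation of D^lam[x](t), Expression (2).
    total, c = 0.0, 1.0  # c holds (-1)^k * binomial(lam, k), starting at k = 0
    for k in range(r + 1):
        total += c * x(t - k * T)
        c *= (k - lam) / (k + 1)
    return total / T ** lam

# Half-derivative of x(t) = t at t = 1 with a fine sampling period:
print(gl_derivative(lambda s: s, t=1.0, lam=0.5, T=0.001, r=1000))
print(2 * math.sqrt(1.0 / math.pi))  # exact value, ~1.1284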
The update expression of the velocity for PSO is:
$$V_{i}(t+1) = V_{i}(t) + c_{1}r_{1}\left(P_{i}^{l}(t) - p_{i}(t)\right) + c_{2}r_{2}\left(P_{g}(t) - p_{i}(t)\right). \tag{3}$$
The update expression of the position for PSO is:
$$p_{i}(t+1) = p_{i}(t) + V_{i}(t+1), \tag{4}$$
where i = 1, 2, …, S; S is the number of particles; t is the current iteration; Vi(t) is the velocity; c1 and c2 are the cognitive and social coefficients, respectively; pi(t) is the current position; Pil(t) and Pg(t) are the positions of the self-best solution and of the global-best solution in the current iteration, respectively; and r1 and r2 are random numbers between 0 and 1.
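As a minimal illustration of Expressions (3) and (4), the following NumPy sketch (the helper name and the per-dimension random numbers are our own choices) advances one particle by one iteration:

import numpy as np

def pso_step(v, p, p_best, g_best, c1=2.0, c2=2.0, rng=np.random.default_rng(0)):
    # One PSO iteration: velocity update (3) followed by position update (4).
    r1, r2 = rng.random(p.shape), rng.random(p.shape)
    v_new = v + c1 * r1 * (p_best - p) + c2 * r2 * (g_best - p)
    return v_new, p + v_new

# Example: one 2-dimensional particle moving toward the global best at the origin.
v, p = np.zeros(2), np.array([1.0, -1.0])
v, p = pso_step(v, p, p_best=np.array([0.5, 0.0]), g_best=np.zeros(2))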
In Expression (3), the position term pi(t) in (Pil(t) − pi(t)) is rewritten as follows using the fractional-order derivative:
$$D^{\lambda}[p_{i}(t+1)] = \sum_{k=0}^{r}\frac{(-1)^{k}\,\Gamma(\lambda+1)\,p_{i}(t-k)}{\Gamma(k+1)\,\Gamma(\lambda-k+1)} = \lambda p_{i}(t) + \frac{\lambda}{2}(1-\lambda)p_{i}(t-1) + \frac{\lambda}{6}(1-\lambda)(2-\lambda)p_{i}(t-2) + \frac{\lambda}{24}(1-\lambda)(2-\lambda)(3-\lambda)p_{i}(t-3). \tag{5}$$
According to Solteiro Pires et al. [34] and Gao et al. [36], r is set to 4 to obtain the best balance between convergence rate and accuracy; tests show that r larger than 4 obtains essentially the same results.
Therefore, the updated expression of the velocity of the FPSO is:
$$V_{i}(t+1) = V_{i}(t) + c_{1}r_{1}\left[P_{i}^{l}(t) - \lambda p_{i}(t) - \frac{\lambda}{2}(1-\lambda)p_{i}(t-1) - \frac{\lambda}{6}(1-\lambda)(2-\lambda)p_{i}(t-2) - \frac{\lambda}{24}(1-\lambda)(2-\lambda)(3-\lambda)p_{i}(t-3)\right] + c_{2}r_{2}\left(P_{g}(t) - p_{i}(t)\right). \tag{6}$$
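A sketch of the fractional velocity update in Expression (6), assuming r = 4 and a stored history of the last four positions (the helper names are ours):

import numpy as np

def fpso_velocity(v, p_hist, p_best, g_best, lam, c1=2.0, c2=2.0,
                  rng=np.random.default_rng(0)):
    # p_hist = [p(t), p(t-1), p(t-2), p(t-3)], most recent first.
    # Memory weights from Expression (6): lam, (lam/2)(1-lam), ...
    w = [lam,
         lam * (1 - lam) / 2,
         lam * (1 - lam) * (2 - lam) / 6,
         lam * (1 - lam) * (2 - lam) * (3 - lam) / 24]
    frac_p = sum(wk * pk for wk, pk in zip(w, p_hist))
    r1, r2 = rng.random(p_hist[0].shape), rng.random(p_hist[0].shape)
    return v + c1 * r1 * (p_best - frac_p) + c2 * r2 * (g_best - p_hist[0])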
To improve PSO performance and effectiveness, Ko et al. [16] and Cui and Zeng [58] introduced non-linear time-varying evolution in PSO, and Gao et al. [36] introduced a non-linear time-varying inertia weight in FO-DPSO. In the current study, NTE is integrated in FPSO to enhance performance and effectiveness. The resulting FPSO-NTE is expressed as follows:
$$V_{i}(t+1) = \omega(t)V_{i}(t) + c_{1}(t)r_{1}\left[P_{i}^{l}(t) - \lambda p_{i}(t) - \frac{\lambda}{2}(1-\lambda)p_{i}(t-1) - \frac{\lambda}{6}(1-\lambda)(2-\lambda)p_{i}(t-2) - \frac{\lambda}{24}(1-\lambda)(2-\lambda)(3-\lambda)p_{i}(t-3)\right] + c_{2}(t)r_{2}\left(P_{g}(t) - p_{i}(t)\right), \tag{7}$$
$$p_{i}(t+1) = p_{i}(t) + V_{i}(t+1), \tag{8}$$
where ω(t) is the time-varying inertia weight, ωmin ≤ ω(t) ≤ ωmax; c1(t) is the time-varying cognitive coefficient, c1min ≤ c1(t) ≤ c1max; and c2(t) is the time-varying social coefficient, c2min ≤ c2(t) ≤ c2max. These are given by:
$$\omega(t) = \omega_{\min} + \left(\frac{t_{\max}-t}{t_{\max}}\right)^{\alpha}(\omega_{\max}-\omega_{\min}), \tag{9}$$
$$c_{1}(t) = c_{1\min} + \left(\frac{t_{\max}-t}{t_{\max}}\right)^{\beta}(c_{1\max}-c_{1\min}), \tag{10}$$
$$c_{2}(t) = c_{2\max} + \left(\frac{t_{\max}-t}{t_{\max}}\right)^{\gamma}(c_{2\min}-c_{2\max}), \tag{11}$$
where t and tmax are the current iteration and the maximum number of iterations, respectively.
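Expressions (9)–(11) translate directly into code. In the sketch below (the helper names are ours; the default bounds follow the settings used later in Example (1)), ω(t) and c1(t) decay from their maxima to their minima over the run, while c2(t) rises from c2min to c2max:

def omega(t, t_max, alpha, w_min=0.4, w_max=0.9):
    # Expression (9): inertia weight decays from w_max to w_min.
    return w_min + ((t_max - t) / t_max) ** alpha * (w_max - w_min)

def c1_coeff(t, t_max, beta, c1_min=0.0, c1_max=2.0):
    # Expression (10): cognitive coefficient decays from c1_max to c1_min.
    return c1_min + ((t_max - t) / t_max) ** beta * (c1_max - c1_min)

def c2_coeff(t, t_max, gamma, c2_min=0.0, c2_max=2.0):
    # Expression (11): social coefficient rises from c2_min to c2_max.
    return c2_max + ((t_max - t) / t_max) ** gamma * (c2_min - c2_max)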
The constant coefficient λ influences the algorithmic performance, and the coefficients α, β, and γ shape ω(t), c1(t), and c2(t), respectively. The problem is how to obtain the best values of the constant coefficients λ, α, β, and γ; the solution proposed in this study is to use AUED to obtain the best combination of these values.

3. AUED-Based FPSO (AUFPSO) with NTE

This study used the AUED method in the proposed FPSO-NTE algorithm to assist in the search for the best combination of the four constant coefficients. The three main steps of the AUED method are initialization, performing the ten-level uniform layout experiments, and calculating the parameter ranges for the next ten-level uniform layout. The initialization step includes selecting the parameters to be optimized, their ranges, and the solution accuracy; a suitable ten-level uniform layout, the output, a stepwise ratio, and the stop condition are also selected at this time. The second main step can then be executed: ten levels are defined over each parameter range at the chosen solution accuracy and are assigned to the ten-level uniform layout, the layout experiments are performed, and the results are recorded. After the ten-level uniform layout experiments in this stage are completed, the best combination according to the output is an optimal or near-optimal value. The range for each parameter is then calculated according to the best combination and the stepwise ratio. The second and third main steps are repeated until the stop condition is met.
For a clear understanding of how the AUED method is applied in the proposed FPSO-NTE algorithm, the detailed steps of the method are given below.
  • Initialization of the AUED method in the proposed FPSO-NTE algorithm
    Step 1.
    Define the experimental parameters as the four constant coefficients λ, α, β, and γ. For each parameter, set the range from 0 to 2, and set the solution accuracy to 0.0001.
    Step 2.
    Set the experimental output as the fitness value.
    Step 3.
    Set the stepwise ratio to 0.8.
    Step 4.
    Select a suitable ten-level uniform layout of U10(10^4), as shown in Table 1.
    Step 5.
    Set the stop condition: the procedure ends when the objective value is reached or when the fitness value fails to approach the objective value in two consecutive ten-level uniform layout experiment stages.
  • Perform the ten-level uniform layout experiments
    Step 1.
    Divide the range of each parameter into ten discrete values according to the chosen ten-level uniform layout of U10(10^4) and the solution accuracy.
    Step 2.
    Assign the ten discrete values of each parameter to the chosen ten-level uniform layout of U10(10^4), as shown in Table 2.
    Step 3.
    Run each ten-level uniform layout experiment 15 times and record the average as the output.
  • Update the search range for next ten-level uniform experiments
    Step 1.
    For each parameter, calculate the new search range according to the best combination in this stage and the stepwise ratio (0.8). The update procedure is given as Algorithm 1 below.
    Step 2.
    Return to the second main step and execute the experimental steps until the stop condition is met.
Algorithm 1
Start
  For K = 1 to PARA_NO
    LT ← LB(K);
    UT ← UB(K);
    LB(K) ← BEST(K) − (UT − LT) × SWR ÷ 2;
    UB(K) ← BEST(K) + (UT − LT) × SWR ÷ 2;
    If LB(K) < LT
      LB(K) ← LT;
    End
    If UB(K) > UT
      UB(K) ← UT;
    End
    For I = 1 to EXP_NO
      LEVEL(I, K) ← LB(K) + ((UB(K) − LB(K)) ÷ (EXP_NO − 1)) × (I − 1);
    End
  End
End
where PARA_NO is the total number of parameters; LB and UB are the lower and upper bounds of each experimental parameter, respectively; LT and UT are temporary values of LB and UB, respectively; LEVEL is the level value; EXP_NO is the total number of experiments in the uniform layout; BEST is the best parameter value in the best combination obtained by the uniform layout experiments in this stage; and SWR is the stepwise ratio.
The following example demonstrates the update algorithm when the upper and lower bounds of each parameter are initially set to 2 and 0, respectively, the number of parameters is 4, and the number of experiments is 10. The first ten-level uniform layout experiments indicated that the best combination [P1 P2 P3 P4] was [0.2553 0.4514 0.5556 1.2455], and the stepwise ratio was set to 0.8. The first parameter is used here to explain how the new range for the next ten-level uniform layout experiments stage is calculated.
When the first ten-level uniform layout experiments stage was completed, LB and UB were 0 and 2, respectively, so LT and UT were also 0 and 2. The best parameter value was 0.2553, and the stepwise ratio was 0.8. Therefore, LB = 0.2553 − (2 − 0) × 0.8 ÷ 2 = −0.5447 and UB = 0.2553 + (2 − 0) × 0.8 ÷ 2 = 1.0553. However, since LB was lower than LT, LB had to be corrected to LT because the original range was set to 0 to 2: LB for the next uniform layout experiments must be greater than or equal to LT and, in the same way, UB must be less than or equal to UT. The new range for the first parameter was therefore 0 to 1.0553. Next, this range was divided into ten levels, and a discrete value was calculated for each level: 0 + ((1.0553 − 0) ÷ (10 − 1)) × (1 − 1) = 0 for the first level; 0 + ((1.0553 − 0) ÷ (10 − 1)) × (2 − 1) = 0.1173 for the second level; and so on, up to 0 + ((1.0553 − 0) ÷ (10 − 1)) × (10 − 1) = 1.0553 for the tenth level. Table 3 shows the levels obtained after the update algorithm was executed in this instance.
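For readers who prefer executable code, the following Python rendering of Algorithm 1 (a sketch; the names mirror the pseudocode) reproduces the worked example above:

def update_ranges(lb, ub, best, swr=0.8, exp_no=10):
    # Algorithm 1: shrink each parameter range around the best combination
    # by the stepwise ratio SWR, clip to the old bounds, and rebuild the levels.
    levels = []
    for k in range(len(lb)):
        lt, ut = lb[k], ub[k]  # temporary copies of the old bounds
        lb[k] = max(best[k] - (ut - lt) * swr / 2, lt)
        ub[k] = min(best[k] + (ut - lt) * swr / 2, ut)
        step = (ub[k] - lb[k]) / (exp_no - 1)
        levels.append([lb[k] + step * i for i in range(exp_no)])
    return levels

# Worked example: best combination [0.2553, 0.4514, 0.5556, 1.2455], range [0, 2].
levels = update_ranges([0.0] * 4, [2.0] * 4, [0.2553, 0.4514, 0.5556, 1.2455])
print([round(x, 4) for x in levels[0]])  # 0.0, 0.1173, 0.2345, ..., 1.0553; cf. Table 3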

4. Simulation Results and Comparisons

This section presents the results for the 15 global numerical optimization problems in Table 4, which were used to evaluate the performance of the proposed AUFPSO-NTE algorithm. In Example (1), functions f1–f7 were used to compare the proposed AUFPSO-NTE with the PSO-FOV (PSO with fractional-order velocity) proposed by Solteiro Pires et al. [34], the FPSO that Solteiro Pires et al. [35] improved from PSO-FOV, the modified PSO (MPSO) proposed by Shi and Eberhart [11], and the PSO proposed by Kennedy and Eberhart [1]. In Example (2), functions f5 and f8–f11 were used to compare the proposed AUFPSO-NTE with the FPSO-based algorithms and the PSO reported by Gao et al. [36]. In Example (3), functions f5, f8, f9, f11, and f12 were used to compare the AUFPSO-NTE with the adaptive fractional-order Darwinian PSO (AFO-DPSO) proposed by Guo et al. [38] and the PSO-based algorithms described in Guo et al. [38]. In Example (4), functions f5, f8, f10, f11, and f13–f15 were used to compare the AUFPSO-NTE with the hunter-attack fractional-order PSO (HAFPSO) proposed by Hosseini et al. [39] and the PSO-based algorithms described in Hosseini et al. [39]. The simulations were run on a Windows 10 personal computer with a Core i7-6700M 3.4 GHz CPU and 8 GB of RAM.

4.1. Example (1): Proposed AUFPSO-NTE in Comparison with FPSO, PSO-FOV, MPSO, and PSO

The PSO-FOV (PSO with fractional-order velocity) was developed first by Solteiro Pires and coworkers [34], who later improved it into FPSO [35]; the MPSO introduced a time-varying inertia weight [11]; and the PSO was first proposed by Kennedy and Eberhart [1]. All four algorithms were compared with the proposed AUFPSO-NTE.
Table 5 shows that the numbers of dimensions (Dn) for functions f1 to f7 were set to 2, 4, 2, 2, 30, 2, and 4, respectively; the number of particles (S) was set to 10; and the number of iterations (I) was set to 200. Table 6 shows the parameter settings for the proposed AUFPSO-NTE and for FPSO, PSO-FOV, MPSO, and PSO. In the AUFPSO-NTE, the minimum weight (ωmin) and maximum weight (ωmax) were set to 0.4 and 0.9, respectively; the minimum cognitive coefficient (c1min) and maximum cognitive coefficient (c1max) were set to 0 and 2, respectively; and the minimum social coefficient (c2min) and maximum social coefficient (c2max) were set to 0 and 2, respectively. In MPSO, ωmin and ωmax were set to 0.4 and 0.9, respectively, and c1 and c2 were both set to 2. For FPSO, PSO-FOV, and PSO, c1 and c2 were both set to 2. The maximum velocity (Vmax) was defined as the maximum position minus the minimum position, and the minimum velocity (Vmin) was defined as negative Vmax. Table 7 shows the best combinations of the constant coefficients λ, α, β, and γ for each benchmark function of the proposed AUFPSO-NTE. For comparison, the constant coefficient λ and the results for the algorithms developed by Solteiro Pires and coworkers [34,35] were taken from the original papers. Table 8 and Table 9 show the constant coefficient λ for each benchmark function of FPSO [35] and PSO-FOV [34], respectively. In Table 7, Table 8 and Table 9, λ is the fractional order of the derivative; in Table 7, α, β, and γ are the coefficients that influence ω, c1, and c2, respectively.
Table 10 shows the performance comparison results for the proposed AUFPSO-NTE and for FPSO [35], PSO-FOV [34], MPSO [11], and PSO [1]. The table lists the best solution, mean, and standard deviation (S.D.) that each algorithm obtained for f1–f7 in 30 independent trials. In Example (1), all algorithms except PSO obtained the best solutions for f1, f3, f4, and f6. Only the proposed AUFPSO-NTE and FPSO [35] obtained the best solution for f5, and only the proposed AUFPSO-NTE obtained the best solutions for f2 and f7. In terms of the mean and S.D., the proposed AUFPSO-NTE outperformed FPSO [35], PSO-FOV [34], MPSO [11], and PSO [1].

4.2. Example (2): Proposed AUFPSO-NTE in Comparison with FVFP-PSO, FP-PSO, FV-PSO, and PSO

Example (2) compared the performance of the proposed AUFPSO-NTE with the standard PSO and with three modifications of PSO proposed by Gao et al. [36]: particle swarm optimization with the fractional-order velocity and the fractional-order position (FVFP-PSO), particle swarm optimization with the fractional-order position (FP-PSO), and particle swarm optimization with the fractional-order velocity (FV-PSO).
Table 11 shows that the number of dimensions for functions f5 and f8–f11 was set to Dn = 10, the number of particles was set to S = 30, and the number of iterations was set to I = 300. Table 12 shows the parameter settings for the proposed AUFPSO-NTE, FVFP-PSO, FP-PSO, FV-PSO, and PSO. Parameters ωmin, ωmax, c1min, c1max, c2min, and c2max of the proposed AUFPSO-NTE were set as in Example (1). In FVFP-PSO, FP-PSO, and FV-PSO, the fractional-order velocities and positions were affected by factors ε and ζ, and factors ω, ε, and ζ underwent time-varying evolution. Table 12 also shows that c1 and c2 were set to 1 in these algorithms. Table 13 shows the best combinations of the constant coefficients λ, α, β, and γ for each benchmark function of the proposed AUFPSO-NTE; λ is the fractional order of the derivative, and α, β, and γ are the coefficients that influence ω, c1, and c2, respectively.
Table 14 compares the mean values obtained using the proposed AUFPSO-NTE with those of the FVFP-PSO, FP-PSO, FV-PSO, and PSO developed by Gao et al. [36] for f5 and f8–f11 in 100 independent trials. The means obtained by the proposed AUFPSO-NTE were better than those obtained using FVFP-PSO, FP-PSO, FV-PSO, and PSO.

4.3. Example (3): Comparison of the Proposed AUFPSO-NTE with AFO-DPSO, NCPSO, FO-DPSO, FPSO, APSO, DPSO, HPSO, and PSO

Example (3) compares the performance of the proposed AUFPSO-NTE with the AFO-DPSO developed by Guo et al. [38] and with the NCPSO (new chaos PSO), FO-DPSO (fractional-order Darwinian PSO), FPSO, APSO (adaptive PSO), DPSO (Darwinian PSO), HPSO (hybrid PSO), and PSO described in Guo et al. [38]. The AFO-DPSO introduces a fractional-order velocity into Darwinian PSO and includes a mutation mechanism to overcome premature convergence; NCPSO improves the chaos-PSO algorithm; FO-DPSO is a fractional-order Darwinian PSO; APSO adaptively controls its parameters; DPSO is Darwinian particle swarm optimization; and HPSO combines the concept of evolutionary computation with PSO.
In Table 15, Dn for functions f5, f8, f9, f11, and f12 was set to 30, the number of particles was set to S = 30, and the number of iterations was set to I = 1000. Table 16 and Table 17 show the parameter settings for the proposed AUFPSO-NTE and for AFO-DPSO, NCPSO, FO-DPSO, FPSO, APSO, DPSO, HPSO, and PSO. Parameters ωmin, ωmax, c1min, c1max, c2min, and c2max of the proposed AUFPSO-NTE were set as in Example (1). For AFO-DPSO, ω was 1, c1 and c2 ranged from 1.5 to 2.5, and δ ranged from 0.05 to 0.1. For NCPSO, ω was 0.7298, and c1 and c2 were both 1.4962. For FO-DPSO and FPSO, ω was 0.9, λ was 0.632, and c1 and c2 were both 1.5. The APSO parameters were set automatically. In DPSO, ω was set to 0.9. In HPSO, ωmin and ωmax were set to 0.2 and 0.8, respectively, and c1 and c2 were both set to 2.5. In PSO, ωmin and ωmax were set to 0.4 and 0.9, respectively, and c1 and c2 were both set to 2. Table 18 shows the best combinations of the constant coefficients λ, α, β, and γ for each benchmark function of the proposed AUFPSO-NTE; λ is the fractional order of the derivative, and α, β, and γ are the coefficients that influence ω, c1, and c2, respectively.
Table 19 and Table 20 compare the mean values obtained by the AUFPSO-NTE with those obtained using AFO-DPSO, NCPSO, FO-DPSO, FPSO, APSO, DPSO, HPSO, and PSO as given by Guo et al. [38]; the mean value of each PSO-based algorithm was recorded for f5, f8, f9, f11, and f12 in 30 independent trials. The tables show that the means obtained using the proposed AUFPSO-NTE were better than those of all the other algorithms. In Guo et al. [38], performance was also evaluated in terms of the variance in the optimum, defined in Expression (12):
$$\text{variance in optimum} = \sum_{i=1}^{30}\left|\frac{f_{i}-f_{\text{avg}}}{f_{\max}}\right|^{2}, \tag{12}$$
where fi is the ith fitness value and favg is the mean fitness value over the 30 independent trials. fmax is a normalization factor: when the maximum |fi − favg| exceeds 1, fmax is that maximum; otherwise, fmax = 1.
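A small sketch of this metric (the function name is ours), following the normalization rule just described:

def variance_in_optimum(fitness_values):
    # Expression (12): normalized spread of the fitness values over the trials.
    n = len(fitness_values)
    f_avg = sum(fitness_values) / n
    deviations = [abs(f - f_avg) for f in fitness_values]
    f_max = max(max(deviations), 1.0)  # normalization factor
    return sum((d / f_max) ** 2 for d in deviations)

# Example: 30 final fitness values clustered near the optimum.
print(variance_in_optimum([0.001 * i for i in range(30)]))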
Table 21 and Table 22 compare the variance in the optimum obtained by the proposed AUFPSO-NTE with those obtained by AFO-DPSO, NCPSO, FO-DPSO, FPSO, APSO, DPSO, HPSO, and PSO; a variance in the optimum closer to 0 indicates better performance. The tables indicate that the proposed AUFPSO-NTE had a better variance in the optimum than all the compared algorithms.

4.4. Example (4): Comparison of the Proposed AUFPSO-NTE with HAFPSO, GAPSO, HFPSO, FPSO, and PSO

Example (4) compares the performance of the proposed AUFPSO-NTE with the HAFPSO (hunter-attack fractional-order PSO) developed by Hosseini et al. [39], GAPSO (genetic algorithm PSO) [59], HFPSO (hybrid firefly algorithm and PSO) [60], and the FPSO and PSO described in Hosseini et al. [39]. The HAFPSO introduces the concept of hunter attack into FO-DPSO; GAPSO is a compound optimizer that introduces the crossover and mutation strategies of the genetic algorithm into PSO; and HFPSO combines the firefly optimization algorithm with PSO.
Table 23 shows that the number of dimensions for functions f5, f8, f10, f11, and f13–f15 was set to Dn = 50, the number of particles was set to S = 30, and the number of iterations was set to I = 1000. Parameters ωmin, ωmax, c1min, c1max, c2min, and c2max of the proposed AUFPSO-NTE were set as in Example (1). Except for the fractional-order value in HAFPSO (λ = 0.6), the parameter settings for HAFPSO, GAPSO, HFPSO, FPSO, and PSO were not reported. Table 24 shows the best combinations of the constant coefficients λ, α, β, and γ for each benchmark function of the proposed AUFPSO-NTE; λ is the fractional order of the derivative, and α, β, and γ are the coefficients that influence ω, c1, and c2, respectively.
Table 25 shows the performance comparison results obtained using the proposed AUFPSO-NTE and using HAFPSO [39], GAPSO [59], HFPSO [60], FPSO, and PSO as reported by Hosseini et al. [39] for f5, f8, f10, f11, and f13–f15 in 100 independent trials. The table shows that the means and S.D. obtained using the proposed AUFPSO-NTE were better than those of the other algorithms; overall, the proposed AUFPSO-NTE performed best in Example (4).

5. Conclusions

This study applied the AUED method to enhance the performance and effectiveness of the FPSO-NTE algorithm. Using the AUED method in the proposed FPSO-NTE algorithm enabled a rapid, automatic search for the best combination of the four constant coefficients λ, α, β, and γ. The major contribution of this paper is the use of AUED to improve the performance of the algorithm and to obtain a robust output. The experimental and simulation results indicate that the proposed AUFPSO-NTE algorithm achieved a higher solution accuracy than the FPSO and PSO-FOV proposed by Solteiro Pires and coworkers [34,35], the FPSO-based algorithms proposed by Gao et al. [36], the AFO-DPSO proposed by Guo et al. [38], the HAFPSO proposed by Hosseini et al. [39], and the PSO-based algorithms described in those studies [34,35,36,38,39]. The examples also demonstrated that the solutions obtained using the proposed AUFPSO-NTE algorithm were more consistent, i.e., more robust. We therefore conclude that the proposed AUFPSO-NTE algorithm has superior effectiveness and performance.

Author Contributions

Formal analysis, P.-Y.Y. and F.-I.C.; funding acquisition, J.-T.T. and J.-H.C.; methodology, J.-T.T. and J.-H.C.; software, P.-Y.Y. and F.-I.C.; supervision, J.-T.T. and J.-H.C.; validation, P.-Y.Y. and F.-I.C.; writing—original draft, P.-Y.Y. and F.-I.C.; writing—review and editing, J.-T.T. and J.-H.C.

Funding

This research was funded in part by the Ministry of Science and Technology, Taiwan, R.O.C., under grant numbers MOST 105-2221-E-992-304-MY3, MOST 107-2221-E-992-086-MY3, and MOST 107-2221-E-153-005-MY2, and in part by the “Intelligent Manufacturing Research Center” (iMRC) from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, R.O.C. The APC was funded by MOST 107-2221-E-992-086-MY3.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995. [Google Scholar]
  2. Wang, Y.; Feng, X.Y.; Huang, Y.X.; Pu, D.B.; Zhou, W.G.; Liang, Y.C. A Novel Quantum Swarm Evolutionary Algorithm and Its Applications. Neurocomputing 2007, 70, 633–640. [Google Scholar] [CrossRef]
  3. Rezaee Jordehi, A.; Jasni, J. Parameter Selection in Particle Swarm Optimisation: A Survey. J. Exp. Theor. Artif. Intell. 2013, 25, 527–542. [Google Scholar] [CrossRef]
  4. Chen, Q.; Yang, J.G.; Gou, J. Image Compression Method Using Improved PSO Vector Quantization. In Proceedings of the Advances in Natural Computation: First International Conference, ICNC, Changsha, China, 27–29 August 2005. [Google Scholar]
  5. Navimipour, N.J.; Eslamic, F. Service Allocation in the Cloud Environments Using Multi-Objective Particle Swarm Optimization Algorithm based on Crowding Distance. Swarm Evol. Comput. 2017, 35, 56–64. [Google Scholar]
  6. Kerdphol, T.; Fuji, K.; Mitani, Y.; Watanabe, M.; Qudaih, Y. Optimization of a Battery Energy Storage System Using Particle Swarm Optimization for Stand-Alone Microgrids. Electr. Power Energy Syst. 2016, 81, 32–39. [Google Scholar] [CrossRef]
  7. Naderi, E.; Narimani, H.; Fathi, M.; Narimani, M.R. A Novel Fuzzy Adaptive Configuration of Particle Swarm Optimization to Solve Large-Scale Optimal Reactive Power Dispatch. Appl. Soft Comput. 2017, 53, 441–456. [Google Scholar] [CrossRef]
  8. Chou, P.Y.; Tsai, J.T.; Chou, J.H. Modeling and Optimizing Tensile Strength and Yield Point on a Steel Bar Using an Artificial Neural Network with Taguchi Particle Swarm Optimizer. IEEE Access. 2016, 4, 585–593. [Google Scholar] [CrossRef]
  9. Girish, B.S. An Efficient Hybrid Particle Swarm Optimization Algorithm in a Rolling Horizon Framework for the Aircraft Landing Problem. Appl. Soft Comput. 2016, 44, 200–221. [Google Scholar]
  10. Shi, Y.; Eberhart, R.C. A Modified Particle Swarm Optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation World Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998. [Google Scholar]
  11. Shi, Y.; Eberhart, R.C. Empirical Study of Particle Swarm Optimization. In Proceedings of the Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999. [Google Scholar]
  12. Eberhart, R.C.; Shi, Y. Tracking and Optimizing Dynamic Systems with Particle Swarm. In Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001. [Google Scholar]
  13. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-Organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
  14. Chatterjee, A.; Siarry, P. Nonlinear Inertia Weight Variation for Dynamic Adaptation in Particle Swarm Optimization. Comput. Oper. Res. 2006, 33, 859–871. [Google Scholar] [CrossRef]
  15. Yang, X.; Yuan, J.; Yuan, J. A Modified Particle Swarm Optimizer with Dynamic Adaptation. Appl. Math. Comput. 2007, 189, 1205–1213. [Google Scholar] [CrossRef]
  16. Ko, C.N.; Chang, Y.P.; Wu, C.J. An Orthogonal-Array-based Particle Swarm Optimizer with Nonlinear Time-Varying Evolution. Appl. Math. Comput. 2007, 191, 272–279. [Google Scholar] [CrossRef]
  17. Ali, M.M.; Kaelo, P. Improved Particle Swarm Algorithms for Global Optimization. Appl. Math. Comput. 2008, 196, 578–593. [Google Scholar] [CrossRef]
  18. Chen, X.; Li, Y. On Convergence and Parameter Selection of an Improved Particle Swarm Optimization. Int. J. Control Autom. Syst. 2008, 6, 559–570. [Google Scholar]
  19. Huang, S.R. Survey of Particle Swarm Optimization Algorithm. Comput. Eng. Des. 2009, 30, 1977–1980. [Google Scholar]
  20. Li, X. Niching without Niching Parameters: Particle Swarm Optimization Using a Ring Topology. IEEE Trans. Evol. Comput. 2010, 14, 150–169. [Google Scholar] [CrossRef]
  21. Tsai, H.C.; Tyan, Y.Y.; Wu, Y.W.; Lin, Y.H. Isolated Particle Swarm Optimization with Particle Migration and Global Best Adoption. Eng. Optim. 2012, 44, 1405–1424. [Google Scholar] [CrossRef]
  22. Chen, W.N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.H.; Chung, H.S.H.; Li, Y.; Shi, Y.H. Particle Swarm Optimization with an Aging Leader and Challengers. IEEE Trans. Evol. Comput. 2013, 17, 241–258. [Google Scholar] [CrossRef]
  23. Pehlivanoglu, Y.V. A New Particle Swarm Optimization Method Enhanced with a Periodic Mutation Strategy and Neural Networks. IEEE Trans. Evol. Comput. 2013, 17, 436–452. [Google Scholar] [CrossRef]
  24. Li, N.J.; Wang, W.J.; Hsu, C.C.; Chang, J.W.; Chang, J.W. Enhanced Particle Swarm Optimizer Incorporating a Weighted Particle. Neurocomputing 2014, 124, 218–227. [Google Scholar] [CrossRef]
  25. Wang, L.; Yang, B.; Li, Y.; Zhang, N. A Novel Improvement of Particle Swarm Optimization Using Dual Factors Strategy. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014. [Google Scholar]
  26. Cheng, R.; Jin, Y. A Social Learning Particle Swarm Optimization Algorithm for Scalable Optimization. Inf. Sci. 2015, 291, 43–60. [Google Scholar] [CrossRef]
  27. Lynn, N.; Suganthan, P.N. Heterogeneous Comprehensive Learning Particle Swarm Optimization with Enhanced Exploration and Exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  28. Tsai, J.T.; Chou, P.Y.; Chou, J.H. Color Filter Polishing Optimization Using ANFIS with Sliding-Level Particle Swarm Optimizer. IEEE Trans. Syst. Man Cybern. Syst. 2017. [Google Scholar] [CrossRef]
  29. Kao, Y.T.; Zahara, E. A Hybrid Genetic Algorithm and Particle Swarm Optimization for Multimodal Functions. Appl. Soft Comput. 2008, 8, 849–857. [Google Scholar] [CrossRef]
  30. Xin, B.; Chen, J. A Survey and Taxonomy on Hybrid Algorithms based on Particle Swarm Optimization and Differential Evolution. J. Syst. Sci. Math. Sci. 2011, 31, 1130–1150. [Google Scholar]
  31. Noel, M.M. A New Gradient Based Particle Swarm Optimization Algorithm for Accurate Computation of Global Minimum. Appl. Soft Comput. 2012, 12, 353–359. [Google Scholar] [CrossRef]
  32. Sun, Y.; Zhang, L.; Gu, X. A Hybrid Co-Evolutionary Cultural Algorithm based on Particle Swarm Optimization for Solving Global Optimization Problems. Neurocomputing 2012, 98, 76–89. [Google Scholar] [CrossRef]
  33. Zhao, C.N.; Li, Y.S.; Lu, T. Analysis and Design of Fractional Order System; National Defense Industry Press: Beijing, China, 2011. [Google Scholar]
  34. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; Moura Oliveira, P.B.; Boaventura Cunha, J.; Mendes, L. Particle Swarm Optimization with Fractional-Order Velocity. Nonlinear Dyn. 2010, 61, 295–301. [Google Scholar] [CrossRef] [Green Version]
  35. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; Moura Oliveira, P.B. Fractional Particle Swarm Optimization. In Mathematical Methods in Engineering; Fonseca, N.M., Tenreiro Machado, J.A., Eds.; Springer: London, UK, 2014; pp. 47–56. [Google Scholar]
  36. Gao, Z.; Wei, J.; Liang, C.; Yan, M. Fractional-Order Particle Swarm Optimization. In Proceedings of the 26th Chinese Control and Decision Conference (CCDC), Changsha, China, 31 May–2 June 2014. [Google Scholar]
  37. Couceiro, M.; Ghamisi, P. Fractional Order Darwinian Particle Swarm Optimization: Applications and Evaluation of an Evolutionary Algorithm; Springer: Cham, Switzerland, 2016; pp. 11–20. [Google Scholar]
  38. Guo, T.; Lan, J.L.; Li, Y.F.; Chen, S.W. Adaptive Fractional-Order Darwinian Particle Swarm Optimization Algorithm. J. Commun. 2014, 35, 130–140. [Google Scholar]
  39. Akbar, S.; Zaman, F.; Asif, M.; Rehman, A.U.; Raja, M.A.Z. Novel application of FO-DPSO for 2-D parameter estimation of electromagnetic plane waves. Neural Comput. Appl. 2019, 31, 3681–3690. [Google Scholar] [CrossRef]
  40. Ates, A.; Alagoz, B.B.; Kavuran, G.; Yeroglu, C. Implementation of fractional order filters discretized by modified fractional order darwinian particle swarm optimization. Measurement 2017, 107, 153–164. [Google Scholar] [CrossRef]
  41. Shahri, E.S.A.; Alfi, A.; Machado, J.T. Fractional fixed-structure H∞ controller design using augmented Lagrangian particle swarm optimization with fractional order velocity. Appl. Soft Comput. 2019, 77, 688–695. [Google Scholar] [CrossRef]
  42. Wei, J.R.; Ma, Y.; Xia, R.; Jiang, H.B.; Zhou, T.T. Image segmentation algorithm based on Otsu optimized by fractional-order particle swarm optimization. Comput. Eng. Des. 2017, 38, 3284–3290. [Google Scholar]
  43. Guo, F.; Peng, H.; Zou, B.; Zhao, R.; Liu, X. Localisation and segmentation of optic disc with the fractional-order Darwinian particle swarm optimization algorithm. IET Image Process. 2018, 12, 1303–1312. [Google Scholar] [CrossRef]
  44. Ahilan, A.; Manogaran, G.; Raja, C.; Kadry, S.; Kumar, S.N.; Agees Kumar, C.; Jarin, T.; Krishnamoorthy, S.; Kumar, P.M.; Babu, G.C.; et al. Segmentation by Fractional Order Darwinian Particle Swarm Optimization Based Multilevel Thresholding and Improved Lossless Prediction Based Compression Algorithm for Medical Images. IEEE Access. 2019, 7, 89570–89580. [Google Scholar] [CrossRef]
  45. Tang, Q.; Gao, S.; Liu, Y.; Yu, F. Infrared image segmentation algorithm for defect detection based on FODPSO. Infrared Phys. Technol. 2019, 102, 103051. [Google Scholar] [CrossRef]
  46. Wang, Y.Y.; Peng, W.X.; Qiu, C.H.; Jiang, J.; Xia, S.R. Fractional-order Darwinian PSO-based feature selection for media-adventitia border detection in intravascular ultrasound images. Ultrasonics 2019, 92, 1–7. [Google Scholar] [CrossRef]
  47. Hosseini, S.A.; Hajipour, A.; Tavakoli, H. Design and optimization of a CMOS power amplifier using innovative fractional-order particle swarm optimization. Appl. Soft Comput. 2019, 85, 105831. [Google Scholar] [CrossRef]
  48. Akdağ, O.; Okumuş, F.; Kocamaz, A.F.; Yeroglu, C. Fractional Order Darwinian PSO with Constraint Threshold for Load Flow Optimization of Energy Transmission System. Gazi Univ. J. Sci. 2018, 31, 831–844. [Google Scholar]
  49. Zameer, A.; Muneeb, M.; Mirza, S.M.; Raja, M.A.Z. Fractional-order particle swarm based multi-objective PWR core loading pattern optimization. Ann. Nucl. Energy 2020, 135, 106982. [Google Scholar] [CrossRef]
  50. Wang, Y.Y.; Zhang, H.; Qiu, C.H.; Xia, S.R. A Novel Feature Selection Method Based on Extreme Learning Machine and Fractional-Order Darwinian PSO. Comput. Intell. Neurosci. 2018, 2018, 5078268. [Google Scholar] [CrossRef]
  51. Taguchi, G.; Chowdhury, S.; Taguchi, S. Robust Engineering; McGraw-Hill: New York, NY, USA, 2000. [Google Scholar]
  52. Wang, Y.; Fang, K.T. A Note on Uniform Distribution and Experimental Design. Chin. Sci. Bull. 1981, 26, 485–489. [Google Scholar]
  53. Tsao, H.; Lee, L. Uniform Layout Implement on Matlab. Stat. Decis. 2008, 6, 144–146. [Google Scholar]
  54. Tsai, J.T.; Yang, P.Y.; Chou, J.H. Data-Driven Approach to Using Uniform Experimental Design to Optimize System Compensation Parameters for an Auto-Alignment Machine. IEEE Access. 2018, 6, 40365–40378. [Google Scholar] [CrossRef]
  55. Diethelm, K. The Analysis of Fractional Differential Equations—An Application-Oriented Exposition Using Differential Operators of Caputo Type; Springer: London, UK, 2010. [Google Scholar]
  56. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  57. Guo, B.; Pu, X.; Huang, F. Fractional Partial Differential Equations and Their Numerical Solutions; World Scientific: Singapore, 2015. [Google Scholar]
  58. Cui, Z.H.; Zeng, J.C. Particle Swarm Optimization; Science Press: Beijing, China, 2011. [Google Scholar]
  59. Liu, H.; Zhai, R.; Fu, J.; Wang, Y.; Yang, Y. Optimization study of thermal-storage PV-CSP integrated system based on GA-PSO algorithm. Sol. Energy 2019, 184, 391–409. [Google Scholar] [CrossRef]
  60. Aydilek, İ.B. A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl. Soft Comput. 2018, 66, 232–249. [Google Scholar] [CrossRef]
Table 1. A ten-level uniform layout of U10(10^4).
Experiment Number | Column 1 | Column 2 | Column 5 | Column 7
1 | 1 | 2 | 5 | 7
2 | 2 | 4 | 10 | 3
3 | 3 | 6 | 4 | 10
4 | 4 | 8 | 9 | 6
5 | 5 | 10 | 3 | 2
6 | 6 | 1 | 8 | 9
7 | 7 | 3 | 2 | 5
8 | 8 | 5 | 7 | 1
9 | 9 | 7 | 1 | 8
10 | 10 | 9 | 6 | 4
Table 2. Ten-level uniform layout of U10(10^4) for the proposed adaptive-uniform-experimental-design-based fractional particle swarm optimizer with non-linear time-varying evolution (AUFPSO-NTE).
Experiment Number | λ | α | β | γ
1 | 0 | 0.2222 | 0.8889 | 1.3333
2 | 0.2222 | 0.6667 | 2 | 0.4444
3 | 0.4444 | 1.1111 | 0.6667 | 2
4 | 0.6667 | 1.5556 | 1.7778 | 1.1111
5 | 0.8889 | 2 | 0.4444 | 0.2222
6 | 1.1111 | 0 | 1.5556 | 1.7778
7 | 1.3333 | 0.4444 | 0.2222 | 0.8889
8 | 1.5556 | 0.8889 | 1.3333 | 0
9 | 1.7778 | 1.3333 | 0 | 1.5556
10 | 2 | 1.7778 | 1.1111 | 0.6667
Table 3. New levels obtained after the update algorithm was executed in this instance.
Level | P1 | P2 | P3 | P4
1 | 0 | 0 | 0 | 0.4455
2 | 0.1173 | 0.1390 | 0.1506 | 0.6182
3 | 0.2345 | 0.2781 | 0.3012 | 0.7909
4 | 0.3518 | 0.4171 | 0.4519 | 0.9637
5 | 0.4690 | 0.5562 | 0.6025 | 1.1364
6 | 0.5863 | 0.6952 | 0.7531 | 1.3091
7 | 0.7035 | 0.8343 | 0.9037 | 1.4818
8 | 0.8208 | 0.9733 | 1.0544 | 1.6546
9 | 0.9380 | 1.1124 | 1.2050 | 1.8273
10 | 1.0553 | 1.2514 | 1.3556 | 2
Table 4. Benchmark functions.
Name | Definition | Solution Space | Optimal Value
Bohachevsky 1 | $f_1 = v_1^2 + 2v_2^2 - 0.3\cos(3\pi v_1) - 0.4\cos(4\pi v_2) + 0.7$ | [−50, 50]^2 | 0
Colville | $f_2 = 100(v_2 - v_1^2)^2 + (1 - v_1)^2 + 90(v_4 - v_3^2)^2 + (1 - v_3)^2 + 10.1[(v_2 - 1)^2 + (v_4 - 1)^2] + 19.8(v_2 - 1)(v_4 - 1)$ | [−10, 10]^4 | 0
Drop wave | $f_3 = -\dfrac{1 + \cos(12\sqrt{v_1^2 + v_2^2})}{0.5(v_1^2 + v_2^2) + 2}$ | [−10, 10]^2 | −1
Easom | $f_4 = -\cos(v_1)\cos(v_2)\,e^{-(v_1 - \pi)^2 - (v_2 - \pi)^2}$ | [−100, 100]^2 | −1
Rastrigin | $f_5 = \sum_{i=1}^{G}(v_i^2 - 10\cos(2\pi v_i) + 10)$ | [−5.12, 5.12]^G | 0
Michalewicz | $f_6 = -\sum_{i=1}^{G}\sin(v_i)\left[\sin\left((i+1)v_i^2/\pi\right)\right]^{2\iota}$ | [0, π]^2 | −1.8409
Rosenbrock’s valley | $f_7 = \sum_{i=1}^{G-1}100(v_{i+1} - v_i^2)^2$ | [−2.048, 2.048]^G | 0
Sphere | $f_8 = \sum_{i=1}^{G}v_i^2$ | [−100, 100]^G | 0
Ackley | $f_9 = -20\exp\left(-0.2\sqrt{\tfrac{1}{G}\sum_{i=1}^{G}v_i^2}\right) - \exp\left(\tfrac{1}{G}\sum_{i=1}^{G}\cos 2\pi v_i\right) + 20 + e$ | [−32, 32]^G | 0
Rosenbrock | $f_{10} = \sum_{i=1}^{G-1}\left[100(v_{i+1} - v_i^2)^2 + (1 - v_i)^2\right]$ | [−30, 30]^G | 0
Griewank | $f_{11} = \tfrac{1}{4000}\sum_{i=1}^{G}v_i^2 - \prod_{i=1}^{G}\cos\left(v_i/\sqrt{i}\right) + 1$ | [−600, 600]^G | 0
DeJong F4 | $f_{12} = \sum_{i=1}^{G}v_i^4$ | [−20, 20]^G | 0
Schwefel’s P1.2 | $f_{13} = \sum_{i=1}^{G}\left(\sum_{j=1}^{i}v_j\right)^2$ | [−100, 100]^G | 0
Quartic | $f_{14} = \sum_{i=1}^{G}iv_i^4 + \mathrm{random}[0, 1)$ | [−1.28, 1.28]^G | 0
Salomon | $f_{15} = 1 - \cos\left(2\pi\sqrt{\sum_{i=1}^{G}v_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{G}v_i^2}$ | [−100, 100]^G | 0
Table 5. Settings for the number of dimensions, number of particles, and number of iterations in the proposed AUFPSO-NTE and for FPSO, PSO-FOV, MPSO, and PSO in Example (1).
Function | Number of Dimensions (Dn) | Number of Particles (S) | Number of Iterations (I)
f1 Bohachevsky 1 | 2 | 10 | 200
f2 Colville | 4 | 10 | 200
f3 Drop wave | 2 | 10 | 200
f4 Easom | 2 | 10 | 200
f5 Rastrigin | 30 | 10 | 200
f6 Michalewicz | 2 | 10 | 200
f7 Rosenbrock’s valley | 4 | 10 | 200
Table 6. Parameter settings for the proposed AUFPSO-NTE and for FPSO, PSO-FOV, MPSO, and PSO in Example (1).
Terms | AUFPSO-NTE | FPSO | PSO-FOV | MPSO | PSO
ωmin | 0.4 | N/A | N/A | 0.4 | N/A
ωmax | 0.9 | N/A | N/A | 0.9 | N/A
c1min | 0 | 2 | 2 | 2 | 2
c1max | 2 | 2 | 2 | 2 | 2
c2min | 0 | 2 | 2 | 2 | 2
c2max | 2 | 2 | 2 | 2 | 2
Table 7. Best combinations of the constant coefficients for each benchmark function of the proposed AUFPSO-NTE in Example (1).
Function | λ | α | β | γ
Bohachevsky 1 | 0.6667 | 1.5556 | 1.7778 | 1.1111
Colville | 0.8852 | 0.3010 | 1.2533 | 0.9829
Drop wave | 0.0320 | 0.3562 | 0.5070 | 1.5989
Easom | 1.0035 | 0.3911 | 1.5022 | 1.4322
Rastrigin | 1.3333 | 0.4444 | 0.2222 | 0.8889
Michalewicz | 0.9867 | 0.2489 | 1.9140 | 1.7678
Rosenbrock’s valley | 1.3333 | 0.4444 | 0.2222 | 0.8889
Table 8. The λ parameter values for each benchmark function of FPSO in Example (1).
Function | λ
Bohachevsky 1 | 1.65
Colville | 1.75
Drop wave | 1.43
Easom | 1.98
Rastrigin | 1.99
Michalewicz | 1.99
Rosenbrock’s valley | 1.66
Table 9. The λ parameter values for each benchmark function of PSO-FOV in Example (1).
Function | λ
Bohachevsky 1 | 0.35
Colville | 0.57
Drop wave | 0.57
Easom | 0.32
Rastrigin | 0.54
Michalewicz | 0.19
Rosenbrock’s valley | 0.41
Table 10. Superior results obtained by the proposed AUFPSO-NTE in comparison with FPSO, PSO-FOV, MPSO, and PSO in 30 independent trials in Example (1).
Function | Terms | AUFPSO-NTE | FPSO | PSO-FOV | MPSO | PSO
Bohachevsky 1 | Best | 0 | 0 | 0 | 0 | 0.1118
Bohachevsky 1 | Mean | 0 | 5.6621 × 10^−16 | 0.0138 | 0.0275 | 5.1798
Bohachevsky 1 | S.D. | 0 | 1.8519 × 10^−15 | 0.0754 | 0.1048 | 6.5905
Colville | Best | 7.7920 × 10^−5 | 1.3908 × 10^−3 | 4.0173 × 10^−3 | 0.2002 | 11.7514
Colville | Mean | 1.6906 | 2.4786 | 2.8318 | 5.9281 | 223.7255
Colville | S.D. | 1.8018 | 2.0702 | 2.8068 | 9.8226 | 462.9368
Drop wave | Best | −1 | −1 | −1 | −1 | −0.9803
Drop wave | Mean | −1 | −0.9763 | −0.9617 | −0.9660 | −0.7892
Drop wave | S.D. | 1.9387 × 10^−11 | 0.0310 | 0.0318 | 0.0324 | 0.1580
Easom | Best | −1 | −1 | −1 | −1 | −0.9103
Easom | Mean | −1 | −1 | −0.9998 | −0.9998 | −0.0721
Easom | S.D. | 7.6828 × 10^−11 | 6.8674 × 10^−6 | 9.0521 × 10^−4 | 1.0572 × 10^−3 | 0.1996
Rastrigin | Best | 0 | 0 | 94.1343 | 0.0151 | 24.6608
Rastrigin | Mean | 5.1159 × 10^−14 | 0.1044 | 153.9817 | 133.8380 | 223.4087
Rastrigin | S.D. | 6.5665 × 10^−14 | 0.3025 | 27.1046 | 77.8643 | 75.7597
Michalewicz | Best | −1.8409 | −1.8409 | −1.8409 | −1.8409 | −1.8388
Michalewicz | Mean | −1.8409 | −1.8409 | −1.8409 | −1.8409 | −1.8388
Michalewicz | S.D. | 2.9272 × 10^−10 | 9.4381 × 10^−8 | 1.2377 × 10^−7 | 3.9510 × 10^−7 | 4.2741 × 10^−7
Rosenbrock’s valley | Best | 4.0000 × 10^−34 | 1.2791 × 10^−21 | 4.8507 × 10^−16 | 3.9505 × 10^−14 | 0.0177
Rosenbrock’s valley | Mean | 3.9129 × 10^−22 | 1.3965 × 10^−4 | 2.5900 × 10^−3 | 2.8691 × 10^−3 | 4.2239
Rosenbrock’s valley | S.D. | 1.6158 × 10^−21 | 6.4890 × 10^−4 | 0.0089 | 0.0157 | 7.7636
Table 11. Settings for the number of dimensions, number of particles, and number of iterations in the proposed AUFPSO-NTE, FVFP-PSO, FP-PSO, FV-PSO, and PSO in Example (2).
Function | Number of Dimensions (Dn) | Number of Particles (S) | Number of Iterations (I)
f5 Rastrigin | 10 | 30 | 300
f8 Sphere | 10 | 30 | 300
f9 Ackley | 10 | 30 | 300
f10 Rosenbrock | 10 | 30 | 300
f11 Griewank | 10 | 30 | 300
Table 12. Parameter settings for the proposed AUFPSO-NTE, FVFP-PSO, FP-PSO, FV-PSO, and PSO in Example (2).
Terms | AUFPSO-NTE | FVFP-PSO | FP-PSO | FV-PSO | PSO
ωmin | 0.4 | 0.4 | 0.4 | 0.4 | N/A
ωmax | 0.9 | 0.9 | 0.9 | 0.9 | N/A
c1min | 0 | 1 | 1 | 1 | 1
c1max | 2 | 1 | 1 | 1 | 1
c2min | 0 | 1 | 1 | 1 | 1
c2max | 2 | 1 | 1 | 1 | 1
εmin | N/A | 0.1 | 1 | 0.1 | N/A
εmax | N/A | 1.2 | 1 | 1.2 | N/A
ζmin | N/A | 0.1 | 0.1 | 1 | N/A
ζmax | N/A | 1.2 | 1.2 | 1 | N/A
Table 13. Best combinations of the constant coefficients for each benchmark function of the proposed AUFPSO-NTE in Example (2).
Function | λ | α | β | γ
Rastrigin | 1.3333 | 0.4444 | 0.2222 | 0.8889
Sphere | 1.4689 | 0.5306 | 0.2595 | 0.9879
Ackley | 1.5538 | 0.3519 | 0.1248 | 0.7688
Rosenbrock | 1.2 | 0 | 1.7235 | 1.8864
Griewank | 1.5111 | 0.2765 | 0.1136 | 0.8
Table 14. Means obtained using the proposed AUFPSO-NTE, FVFP-PSO, FP-PSO, FV-PSO, and PSO for 5 benchmark functions in 100 independent trials in Example (2).
Function | AUFPSO-NTE | FVFP-PSO | FP-PSO | FV-PSO | PSO
Rastrigin | 0 | 0 | 3.4182 | 20.3351 | 18.3371
Sphere | 2.8280 × 10^−41 | 8.9588 × 10^−36 | 1.9469 × 10^−19 | 941.4338 | 8.2506 × 10^−12
Ackley | 8.8818 × 10^−16 | 8.4555 × 10^−15 | 0.0299 | 10.4455 | 0.0231
Rosenbrock | 7.7881 | 8.0633 | 8.8267 | 2.5590 × 10^5 | 56.5664
Griewank | 0 | 0.0013 | 0.3770 | 10.3937 | 0.1041
Table 15. Number of dimensions, number of particles, and number of iterations for the proposed AUFPSO-NTE and for AFO-DPSO, NCPSO, FO-DPSO, FPSO, APSO, DPSO, HPSO, and PSO in Example (3).
Function | Number of Dimensions (Dn) | Number of Particles (S) | Number of Iterations (I)
f5 Rastrigin | 30 | 30 | 1000
f8 Sphere | 30 | 30 | 1000
f9 Ackley | 30 | 30 | 1000
f11 Griewank | 30 | 30 | 1000
f12 DeJong F4 | 30 | 30 | 1000
Table 16. Parameter settings for the proposed AUFPSO-NTE and for AFO-DPSO, NCPSO, FO-DPSO, and FPSO in Example (3).
Terms | AUFPSO-NTE | AFO-DPSO | NCPSO | FO-DPSO | FPSO
ωmin | 0.4 | 1 | 0.7298 | 0.9 | 0.9
ωmax | 0.9 | 1 | 0.7298 | 0.9 | 0.9
c1min | 0 | 1.5 | 1.4962 | 1.5 | 1.5
c1max | 2 | 2.5 | 1.4962 | 1.5 | 1.5
c2min | 0 | 1.5 | 1.4962 | 1.5 | 1.5
c2max | 2 | 2.5 | 1.4962 | 1.5 | 1.5
Table 17. Parameter settings for the proposed AUFPSO-NTE and for APSO, DPSO, HPSO, and PSO in Example (3).
Terms | AUFPSO-NTE | APSO | DPSO | HPSO | PSO
ωmin | 0.4 | Auto-control | 0.9 | 0.2 | 0.4
ωmax | 0.9 | Auto-control | 0.9 | 0.8 | 0.9
c1min | 0 | Auto-control | N/A | 2.5 | 2
c1max | 2 | Auto-control | N/A | 2.5 | 2
c2min | 0 | Auto-control | N/A | 2.5 | 2
c2max | 2 | Auto-control | N/A | 2.5 | 2
Table 18. Best combinations of the constant coefficients for each benchmark function of the proposed AUFPSO-NTE in Example (3).
Function | λ | α | β | γ
Rastrigin | 1.1111 | 0 | 1.5556 | 1.7778
Sphere | 1.3333 | 0.4444 | 0.2222 | 0.8889
Ackley | 1.3813 | 0.9368 | 0.9227 | 1.7236
Griewank | 0.4444 | 1.1111 | 0.6667 | 2
DeJong F4 | 1.3333 | 0.4444 | 0.2222 | 0.8889
Table 19. Means obtained using the proposed AUFPSO-NTE and using AFO-DPSO, NCPSO, FO-DPSO, and FPSO for 5 benchmark functions in 30 independent trials.
Function | AUFPSO-NTE | AFO-DPSO | NCPSO | FO-DPSO | FPSO
Rastrigin | 0 | 1.8956 × 10^−10 | 4.3741 × 10^−4 | 4.2305 × 10^−5 | 3.5000 × 10^−3
Sphere | 1.5042 × 10^−124 | 2.3420 × 10^−14 | 2.0279 × 10^−9 | 3.4728 × 10^−7 | 1.5340 × 10^−5
Ackley | 8.8818 × 10^−16 | 3.6610 × 10^−11 | 9.0869 × 10^−7 | 1.3774 × 10^−6 | 1.4000 × 10^−6
Griewank | 0 | 0 | 9.9050 × 10^−11 | 8.1377 × 10^−9 | 1.4184 × 10^−7
DeJong F4 | 1.3852 × 10^−255 | 6.3364 × 10^−23 | 2.0809 × 10^−17 | 8.8098 × 10^−16 | 9.2521 × 10^−12
Table 20. Means obtained using the proposed AUFPSO-NTE and using APSO, DPSO, HPSO, and PSO for 5 benchmark functions in 30 independent trials.
Function | AUFPSO-NTE | APSO | DPSO | HPSO | PSO
Rastrigin | 0 | 1.0100 | 1.9899 | 4.8642 | 106.55
Sphere | 1.5042 × 10^−124 | 1.4500 × 10^−10 | 0.0328 | 0.3876 | 370.04
Ackley | 8.8818 × 10^−16 | 0.3550 | 2.4083 | 5.6972 | 11.4953
Griewank | 0 | 0.0167 | 7.400 × 10^−3 | 0.0237 | 2.6100 × 10^7
DeJong F4 | 1.3852 × 10^−255 | 2.1300 × 10^−10 | 1.3752 × 10^−5 | 0.0635 | 4.3467 × 10^3
Table 21. Variances in the optimum obtained using the proposed AUFPSO-NTE, AFO-DPSO, NCPSO, FO-DPSO, and FPSO for 5 benchmark functions in 30 independent trials.
Function | AUFPSO-NTE | AFO-DPSO | NCPSO | FO-DPSO | FPSO
Rastrigin | 0 | 0.0017 | 0.0043 | 0.0137 | 0.0232
Sphere | 1.6599 × 10^−245 | 0.0031 | 0.0597 | 0.4091 | 0.7505
Ackley | 2.9170 × 10^−61 | 1.4637 × 10^−5 | 3.5416 × 10^−4 | 5.9058 × 10^−4 | 0.0025
Griewank | 0 | 0.0116 | 0.1912 | 0.6574 | 0.8151
DeJong F4 | 0 | 0.0201 | 0.2765 | 0.7344 | 0.8836
Table 22. Variances in the optimum obtained using the proposed AUFPSO-NTE, APSO, DPSO, HPSO, and PSO for 5 benchmark functions in 30 independent trials.
Function | AUFPSO-NTE | APSO | DPSO | HPSO | PSO
Rastrigin | 0 | 0.0173 | 0.0774 | 0.2162 | 0.3488
Sphere | 1.6599 × 10^−245 | 0.5126 | 1.0068 | 1.6022 | 2.0978
Ackley | 2.9170 × 10^−61 | 0.0011 | 0.0162 | 0.0200 | 0.9074
Griewank | 0 | 0.6819 | 0.9371 | 1.3658 | 1.6408
DeJong F4 | 0 | 0.8381 | 0.9611 | 1.0130 | 1.7960
Table 23. Settings for the number of dimensions, number of particles, and number of iterations in the proposed AUFPSO-NTE and for HAFPSO, GAPSO, HFPSO, FPSO, and PSO in Example (4).
Function | Number of Dimensions (Dn) | Number of Particles (S) | Number of Iterations (I)
f5 Rastrigin | 50 | 30 | 1000
f8 Sphere | 50 | 30 | 1000
f10 Rosenbrock | 50 | 30 | 1000
f11 Griewank | 50 | 30 | 1000
f13 Schwefel P1.2 | 50 | 30 | 1000
f14 Quartic | 50 | 30 | 1000
f15 Salomon | 50 | 30 | 1000
Table 24. Best combinations of the constant coefficients for each benchmark function of the proposed AUFPSO-NTE in Example (4).
Function | λ | α | β | γ
Rastrigin | 1.2494 | 0 | 1.6361 | 1.8209
Sphere | 1.3333 | 0.4444 | 0.2222 | 0.8889
Rosenbrock | 1.7778 | 1.3333 | 0 | 1.5556
Griewank | 0.2765 | 1.2 | 0.4889 | 2
Schwefel P1.2 | 1.3333 | 0.4444 | 0.2222 | 0.8889
Quartic | 1.5111 | 0.2765 | 0.1136 | 0.8
Salomon | 1.3087 | 1.6889 | 0.8592 | 0.0889
Table 25. Performance results obtained using the proposed AUFPSO-NTE in comparison with HAFPSO, GAPSO, HFPSO, FPSO, and PSO for 7 benchmark functions in 100 independent trials in Example (4).
Function | Terms | AUFPSO-NTE | HAFPSO | GAPSO | HFPSO | FPSO | PSO
Rastrigin | Mean | 0 | 2.18 × 10^−2 | 6.76 × 10^1 | 8.83 × 10^1 | 7.40 × 10^1 | 7.90 × 10^1
Rastrigin | S.D. | 0 | 2.83 × 10^−2 | 1.84 × 10^1 | 3.08 × 10^1 | 2.04 × 10^1 | 1.86 × 10^1
Sphere | Mean | 1.89 × 10^−121 | 2.15 × 10^−9 | 3.67 × 10^−3 | 1.43 × 10^−5 | 7.51 × 10^−3 | 4.57 × 10^−5
Sphere | S.D. | 1.75 × 10^−120 | 3.54 × 10^−9 | 6.11 × 10^−3 | 8.05 × 10^−6 | 3.20 × 10^−2 | 1.61 × 10^−4
Rosenbrock | Mean | 4.88 × 10^1 | 1.00 × 10^2 | 8.14 × 10^1 | 1.53 × 10^2 | 1.16 × 10^2 | 1.06 × 10^2
Rosenbrock | S.D. | 1.06 × 10^−1 | 5.57 × 10^1 | 4.16 × 10^1 | 5.93 × 10^1 | 5.56 × 10^1 | 4.86 × 10^1
Griewank | Mean | 0 | 1.27 × 10^−2 | 1.60 × 10^−2 | 9.88 × 10^−1 | 3.64 × 10^−2 | 5.81 × 10^−2
Griewank | S.D. | 0 | 1.54 × 10^−2 | 2.06 × 10^−2 | 6.49 × 10^−2 | 6.61 × 10^−2 | 8.96 × 10^−2
Schwefel P1.2 | Mean | 6.98 × 10^−123 | 1.03 × 10^3 | 3.99 × 10^1 | 1.15 × 10^3 | 6.32 × 10^2 | 2.22 × 10^3
Schwefel P1.2 | S.D. | 6.49 × 10^−122 | 4.83 × 10^2 | 3.53 × 10^1 | 1.09 × 10^3 | 4.32 × 10^2 | 8.84 × 10^2
Quartic | Mean | 1.34 × 10^−4 | 1.34 × 10^−2 | 4.46 × 10^−2 | 3.70 × 10^−2 | 6.45 × 10^−2 | 5.23 × 10^−2
Quartic | S.D. | 9.80 × 10^−5 | 3.91 × 10^−3 | 1.17 × 10^−2 | 1.28 × 10^−2 | 2.12 × 10^−2 | 1.90 × 10^−2
Salomon | Mean | 7.99 × 10^−3 | 6.40 × 10^−1 | 7.85 × 10^−1 | 1.15 | 1.27 | 1.13
Salomon | S.D. | 2.72 × 10^−2 | 7.91 × 10^−2 | 1.38 × 10^−1 | 1.99 × 10^−1 | 3.60 × 10^−1 | 2.74 × 10^−1
