# Grand Tour Algorithm: Novel Swarm-Based Optimization for High-Dimensional Problems


## Abstract


## 1. Introduction

## 2. Materials and Methods

### 2.1. Iterative Optimization Processes: A General View

### 2.2. Particle Swarm Optimization (PSO)
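The canonical PSO update combines a particle's previous velocity with attractions toward its personal best and the swarm's global best. A minimal sketch, assuming the standard update rule and the coefficient value 1.49 listed in Table 1 (function and variable names are illustrative, not the paper's implementation):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w, c1=1.49, c2=1.49):
    """One standard PSO update: an inertia term plus cognitive (personal best)
    and social (global best) attractions scaled by random weights r1, r2."""
    for i in range(len(positions)):
        x, v = positions[i], velocities[i]
        new_v = []
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            new_v.append(w * v[d]
                         + c1 * r1 * (pbest[i][d] - x[d])
                         + c2 * r2 * (gbest[d] - x[d]))
        velocities[i] = new_v
        positions[i] = [x[d] + new_v[d] for d in range(len(x))]
    return positions, velocities
```

When a particle already sits at both its personal best and the global best, the random terms vanish and the update reduces to pure inertia, x ← x + w·v.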

### 2.3. Grand Tour Algorithm (GTA)

#### 2.3.1. GTA Fundamentals

#### 2.3.2. Calculation of Drag Force

The wind speed is considered to be null.
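The drag model rests on the standard aerodynamic drag law, $F_d = \frac{1}{2}\rho C_d A v^2$, the same form used in cycling aerodynamics studies such as [34]; with null wind speed, the air speed relative to the cyclist is simply the cyclist's own speed. A minimal sketch (variable names and the sample air density are illustrative assumptions, not values from the paper):

```python
def drag_force(velocity, drag_coefficient, frontal_area, air_density=1.225):
    """Standard aerodynamic drag: F_d = 0.5 * rho * C_d * A * v^2.
    With null wind speed, the relative air speed equals the cyclist's speed.
    The default air density (1.225 kg/m^3, sea level) is an assumption."""
    return 0.5 * air_density * drag_coefficient * frontal_area * velocity ** 2
```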

#### 2.3.3. Calculation of Gravitational Force

#### 2.3.4. Velocity and Position Updating

### 2.4. Test Conditions

## 3. Results

### 3.1. Performance

Table 3 summarizes, for each algorithm and benchmark function: (i) the best value found; (ii) the mean objective function ($OF$) value; (iii) the standard deviation; (iv) the success rate, considering a tolerance of 10^{−8} [34]; (v) the mean number of $OF$ evaluations; and (vi) the score. Figure 2a,b graphically present values (ii) and (v), respectively. The score is the parameter adopted in the CEC 2017 competition [44] to compare different optimization algorithms, and is expressed by Equations (12) and (13).

For the Rosenbrock and Dixon Price functions, none of the algorithms achieved the global minimum. The GTA, however, came closest, with a low standard deviation, which indicates convergence to a local minimum. These two functions are well-known, very challenging benchmark optimization functions that have posed difficulties for all metaheuristics; see, for example, [45,46,47], which use different algorithms and report the same difficulty in reaching the global minimum despite using fewer than 100 variables. Finally, for the Salomon function, the GTA presents the worst success rate, while the GA presents the best one. However, the mean value reached by the GTA is very close to the global minimum, and using more cyclists, or increasing the maximum number of iterations, could solve this problem. For the other algorithms, only for the Sum of Powers function could such a change of settings lead to a better success rate, since for the remaining functions the values found are too far from the global minimum. The reliability of the GTA is also confirmed by its score, which fails to be the best only for the Salomon function. To evaluate convergence speed, Figure 3 shows the $OF$ evolution for each benchmark function using the GTA. A minimum is rapidly reached, with fine-tuning in the following iterations. Comparing the number of $OF$ evaluations in Table 3 for the Salomon and Sum of Powers functions, where the results of all five algorithms are close, the GTA required the fewest evaluations and thus the least computational effort.
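The per-function statistics reported in Table 3 can be reproduced from the raw results of the repeated runs. A sketch of the bookkeeping, assuming the 10^{−8} success tolerance stated above (function and key names are illustrative):

```python
import statistics

def run_metrics(of_values, global_minimum=0.0, tol=1e-8):
    """Summarize repeated runs as in Table 3: best value, mean, standard
    deviation, and success rate (percentage of runs whose final OF value
    came within `tol` of the known global minimum)."""
    successes = sum(1 for v in of_values if abs(v - global_minimum) <= tol)
    return {
        "best": min(of_values),
        "mean": statistics.mean(of_values),
        "std": statistics.stdev(of_values),
        "success_rate": 100.0 * successes / len(of_values),
    }
```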

### 3.2. Sensitivity Analysis

## 4. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cyb. **2019**, 11, 1501–1529.
2. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Soft. **2016**, 95, 51–67.
3. Chatterjee, A.; Siarry, P. Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput. Oper. Res. **2006**, 33, 859–871.
4. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. Theor. Comput. Sci. **2005**, 344, 243–278.
5. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Global Optim. **2007**, 39, 459–471.
6. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. **2013**, 29, 17–35.
7. Kennedy, J.; Eberhart, R. Particle swarm optimization (PSO). In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
8. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science **1983**, 220, 671–680.
9. Gonzalez-Fernandez, Y.; Chen, S. Leaders and followers—A new metaheuristic to avoid the bias of accumulated information. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 776–783.
10. Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization method for constrained optimization problems. Intell. Tech. Theory Appl. New Trends Intell. Tech. **2002**, 76, 214–220.
11. Wu, Z.Y.; Simpson, A.R. A self-adaptive boundary search genetic algorithm and its application to water distribution systems. J. Hydr. Res. **2002**, 40, 191–203.
12. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process Lett. **2003**, 85, 317–325.
13. Brentan, B.; Meirelles, G.; Luvizotto, E., Jr.; Izquierdo, J. Joint operation of pressure-reducing valves and pumps for improving the efficiency of water distribution systems. J. Water Res. Plan. Manag. **2018**, 144, 04018055.
14. Freire, R.Z.; Oliveira, G.H.; Mendes, N. Predictive controllers for thermal comfort optimization and energy savings. Ener. Build. **2008**, 40, 1353–1365.
15. Banga, J.R.; Seider, W.D. Global optimization of chemical processes using stochastic algorithms. In State of the Art in Global Optimization; Springer: Boston, MA, USA, 1996; pp. 563–583.
16. Waziruddin, S.; Brogan, D.C.; Reynolds, P.F. The process for coercing simulations. In Proceedings of the 2003 Fall Simulation Interoperability Workshop, Orlando, FL, USA, 14–19 September 2003.
17. Carnahan, J.C.; Reynolds, P.F.; Brogan, D.C. Visualizing coercible simulations. In Proceedings of the 2004 Winter Simulation Conference, Washington, DC, USA, 5–8 December 2004; Volume 1.
18. Bollinger, A.; Evins, R. Facilitating model reuse and integration in an urban energy simulation platform. Proc. Comput. Sci. **2015**, 51, 2127–2136.
19. Yang, Y.; Chui, T.F.M. Developing a Flexible Simulation-Optimization Framework to Facilitate Sustainable Urban Drainage Systems Designs through Software Reuse. In Proceedings of the International Conference on Software and Systems Reuse, Cincinnati, OH, USA, 26–28 June 2019.
20. Yazdani, C.; Nasiri, B.; Azizi, R.; Sepas-Moghaddam, A.; Meybodi, M.R. Optimization in Dynamic Environments Utilizing a Novel Method Based on Particle Swarm Optimization. Int. J. Artif. Intel. **2013**, 11, A13.
21. Wang, Z.-J.; Zhan, Z.-H.; Du, K.-J.; Yu, Z.-W.; Zhang, J. Orthogonal learning particle swarm optimization with variable relocation for dynamic optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016.
22. Mavrovouniotis, M.; Li, C.; Yang, S. A survey of swarm intelligence for dynamic optimization: Algorithms and applications. Swarm Evol. Comput. **2017**, 33, 1–17.
23. Gore, R.; Reynolds, P.F.; Tang, L.; Brogan, D.C. Explanation exploration: Exploring emergent behavior. In Proceedings of the 21st International Workshop on Principles of Advanced and Distributed Simulation (PADS'07), San Diego, CA, USA, 12–15 June 2007.
24. Gore, R.; Reynolds, P.F. Applying causal inference to understand emergent behavior. In Proceedings of the 2008 Winter Simulation Conference, Miami, FL, USA, 7–10 December 2008.
25. Kim, V. A Design Space Exploration Method for Identifying Emergent Behavior in Complex Systems. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, December 2016.
26. Hybinette, M.; Fujimoto, R.M. Cloning parallel simulations. ACM Trans. Model. Comput. Simul. (TOMACS) **2001**, 11, 378–407.
27. Hybinette, M.; Fujimoto, R. Cloning: A novel method for interactive parallel simulation. In Proceedings of the WSC97: 29th Winter Simulation Conference, Atlanta, GA, USA, 7–10 December 1997; pp. 444–451.
28. Chen, D.; Turner, S.J.; Cai, W.; Gan, B.P.; Low, M.Y.H. Incremental HLA-Based Distributed Simulation Cloning. In Proceedings of the 2004 Winter Simulation Conference, Washington, DC, USA, 5–8 December 2004.
29. Li, Z.; Wang, W.; Yan, Y.; Li, Z. PS–ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. Exp. Syst. Appl. **2015**, 42, 8881–8895.
30. Montalvo, I.; Izquierdo, J.; Pérez-García, R.; Herrera, M. Water distribution system computer-aided design by agent swarm optimization. Comput. Aided Civil Infrastr. Eng. **2014**, 29, 433–448.
31. Maringer, D.G. Portfolio Management with Heuristic Optimization; Springer Science & Business Media: Boston, MA, USA, 2006.
32. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992.
33. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation **2001**, 76, 60–68.
34. Blocken, B.; van Druenen, T.; Toparlar, Y.; Malizia, F.; Mannion, P.; Andrianne, T.; Marchal, T.; Maas, G.J.; Diepens, J. Aerodynamic drag in cycling pelotons: New insights by CFD simulation and wind tunnel testing. J. Wind Eng. Ind. Aerod. **2018**, 179, 319–337.
35. MATLAB 2018; The MathWorks, Inc.: Natick, MA, USA, 2018.
36. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. **2002**, 6, 58–73.
37. Eberhart, R.C.; Shi, Y. Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the 2000 Congress on Evolutionary Computation—CEC00 (Cat. No.00TH8512), La Jolla, CA, USA, 16–19 July 2000; Volume 1, pp. 84–88.
38. GAMS World, GLOBAL Library. Available online: http://www.gamsworld.org/global/globallib.html (accessed on 29 April 2020).
39. Gould, N.I.M.; Orban, D.; Toint, P.L. CUTEr, A Constrained and Un-Constrained Testing Environment, Revisited. Available online: http://cuter.rl.ac.uk/cuter-www/problems.html (accessed on 29 April 2020).
40. GO Test Problems. Available online: http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm (accessed on 29 April 2020).
41. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Num. Optim. **2013**, 4, 150–194.
42. Sharma, G. The Human Genome Project and its promise. J. Indian College Cardiol. **2012**, 2, 1–3.
43. Li, W. On parameters of the human genome. J. Theor. Biol. **2011**, 288, 92–104.
44. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, November 2016.
45. Hughes, M.; Goerigk, M.; Wright, M. A largest empty hypersphere metaheuristic for robust optimisation with implementation uncertainty. Comput. Oper. Res. **2019**, 103, 64–80.
46. Zaeimi, M.; Ghoddosian, A. Color harmony algorithm: An art-inspired metaheuristic for mathematical function optimization. Soft Comput. **2020**, 24, 12027–12066.
47. Singh, G.P.; Singh, A. Comparative Study of Krill Herd, Firefly and Cuckoo Search Algorithms for Unimodal and Multimodal Optimization. J. Intel. Syst. App. **2014**, 2, 26–37.
48. Taheri, S.M.; Hesamian, G. A generalization of the Wilcoxon signed-rank test and its applications. Stat. Papers **2013**, 54, 457–470.

**Figure 2.** Results of 100 runs with 1000 variables: (**a**) OF value average; (**b**) OF evaluations average.

**Figure 4.** Results of 100 runs with 20,000 variables: (**a**) OF value average; (**b**) OF evaluations average.

**Figure 5.** Sensitivity analysis for the number of cyclists and the maximum number of iterations: (**a**) objective function variation; (**b**) number of objective function evaluations.

| Algorithm | Settings |
|---|---|
| GTA | Drag and gravitational coefficients = 0.5 to 1.0 |
| PSO | Cognitive and social coefficients = 1.49; inertia coefficient varying linearly from 0.1 to 1.1 |
| SA | Initial temperature = 100 °C; reannealing interval = 100 |
| GA | Crossover fraction = 0.8; elite count = 0.05; mutation rate = 0.01 |
| HS | Harmony memory considering rate = 0.8; pitch adjusting rate = 0.1 |
| General | Maximum iterations = 500; population size = 100; tolerance = 10^{−12}; maximum stall iterations = 20 |
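The PSO settings above call for an inertia coefficient that varies linearly from 0.1 to 1.1 over the run. A sketch of such a schedule, as stated in the table (the function name and signature are illustrative):

```python
def inertia(iteration, max_iterations, w_start=0.1, w_end=1.1):
    """Linear inertia-weight schedule between w_start and w_end,
    matching the PSO setting 'varying from 0.1 to 1.1, linearly'."""
    frac = iteration / max_iterations
    return w_start + (w_end - w_start) * frac
```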

| Function | Equation | Search Space $[x_{\min},x_{\max}]^n$ | Global Minimum |
|---|---|---|---|
| Sphere | $f(x)=\sum_{i=1}^{n}x_i^2$ | $[-100,100]^n$ | 0 |
| Rosenbrock | $f(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | $[-30,30]^n$ | 0 |
| Rastrigin | $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos\left(2\pi x_i\right)+10\right]$ | $[-5.12,5.12]^n$ | 0 |
| Griewank | $f(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | $[-600,600]^n$ | 0 |
| Alpine | $f(x)=\sum_{i=1}^{n}\left|x_i\sin\left(x_i\right)+0.1x_i\right|$ | $[-10,10]^n$ | 0 |
| Brown | $f(x)=\sum_{i=1}^{n-1}\left[\left(x_i^2\right)^{x_{i+1}^2+1}+\left(x_{i+1}^2\right)^{x_i^2+1}\right]$ | $[-1,1]^n$ | 0 |
| Chung Reynolds | $f(x)=\left(\sum_{i=1}^{n}x_i^2\right)^2$ | $[-100,100]^n$ | 0 |
| Dixon Price | $f(x)=\left(x_1-1\right)^2+\sum_{i=2}^{n}i\left(2x_i^2-x_{i-1}\right)^2$ | $[-10,10]^n$ | 0 |
| Exponential | $f(x)=-\exp\left(-0.5\sum_{i=1}^{n}x_i^2\right)+1$ | $[-1,1]^n$ | 0 |
| Salomon | $f(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{n}x_i^2}\right)+0.1\sqrt{\sum_{i=1}^{n}x_i^2}$ | $[-100,100]^n$ | 0 |
| Schumer Steiglitz | $f(x)=\sum_{i=1}^{n}x_i^4$ | $[-100,100]^n$ | 0 |
| Sum of Powers | $f(x)=\sum_{i=1}^{n}\left|x_i\right|^{i+1}$ | $[-1,1]^n$ | 0 |
| Sum of Squares | $f(x)=\sum_{i=1}^{n}ix_i^2$ | $[-1,1]^n$ | 0 |
| Zakharov | $f(x)=\sum_{i=1}^{n}x_i^2+\left(\sum_{i=1}^{n}0.5ix_i\right)^2+\left(\sum_{i=1}^{n}0.5ix_i\right)^4$ | $[-10,10]^n$ | 0 |
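A few of the benchmark functions in Table 2 translate directly into code, which is useful for sanity-checking an implementation against the known global minimum of 0 at the origin (function names are illustrative):

```python
import math

def sphere(x):
    """Sphere: sum of squared components."""
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    """Rastrigin: sum of x_i^2 - 10*cos(2*pi*x_i) + 10."""
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def zakharov(x):
    """Zakharov: sum x_i^2 + (sum 0.5*i*x_i)^2 + (sum 0.5*i*x_i)^4,
    with the index i running from 1 to n."""
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(0.5 * i * xi for i, xi in enumerate(x, start=1))
    return s1 + s2 ** 2 + s2 ** 4
```

All three return 0 at the origin in any dimension, matching the "Global Minimum" column.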

| Metric | Algorithm | Sphere | Rosenbrock | Rastrigin | Griewank | Alpine | Brown | Chung Reynolds | Dixon Price | Exponential | Salomon | Schumer Steiglitz | Sum of Powers | Sum of Squares | Zakharov |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | GTA | 4.4 × 10^{−18} | 9.9 × 10^{2} | 0.0 × 10^{0} | 0.0 × 10^{0} | 1.6 × 10^{−15} | 8.9 × 10^{−18} | 5.0 × 10^{−23} | 1.0 × 10^{0} | 0.0 × 10^{0} | 6.7 × 10^{−10} | 1.4 × 10^{−23} | 1.3 × 10^{−22} | 2.4 × 10^{−18} | 1.3 × 10^{−17} |
| | PSO | 6.4 × 10^{5} | 1.3 × 10^{9} | 8.4 × 10^{3} | 5.8 × 10^{3} | 9.0 × 10^{2} | 1.1 × 10^{2} | 3.9 × 10^{11} | 2.5 × 10^{8} | 1.0 × 10^{0} | 2.5 × 10^{−12} | 1.5 × 10^{9} | 1.1 × 10^{0} | 2.6 × 10^{4} | 2.4 × 10^{4} |
| | SA | 2.9 × 10^{6} | 1.1 × 10^{10} | 1.5 × 10^{4} | 2.7 × 10^{4} | 2.4 × 10^{3} | 2.1 × 10^{2} | 8.7 × 10^{12} | 2.2 × 10^{9} | 1.0 × 10^{0} | 1.0 × 10^{−10} | 1.6 × 10^{10} | 6.4 × 10^{−2} | 4.5 × 10^{4} | 3.1 × 10^{4} |
| | GA | 8.2 × 10^{2} | 2.6 × 10^{5} | 8.3 × 10^{3} | 1.1 × 10^{0} | 5.2 × 10^{2} | 2.9 × 10^{2} | 6.8 × 10^{5} | 3.8 × 10^{6} | 1.0 × 10^{0} | 0.0 × 10^{0} | 2.0 × 10^{3} | 5.3 × 10^{−3} | 7.7 × 10^{4} | 1.0 × 10^{3} |
| | HS | 2.1 × 10^{6} | 7.9 × 10^{9} | 1.4 × 10^{4} | 1.8 × 10^{4} | 2.0 × 10^{3} | 3.6 × 10^{2} | 4.2 × 10^{12} | 1.8 × 10^{9} | 1.0 × 10^{0} | 1.4 × 10^{−13} | 1.0 × 10^{10} | 2.4 × 10^{−3} | 9.5 × 10^{4} | 3.0 × 10^{4} |
| Mean $OF$ | GTA | 2.3 × 10^{−15} | 1.0 × 10^{3} | 0.0 × 10^{0} | 1.6 × 10^{−15} | 6.9 × 10^{−14} | 3.3 × 10^{−15} | 8.1 × 10^{−18} | 1.0 × 10^{0} | 0.0 × 10^{0} | 1.3 × 10^{−6} | 1.3 × 10^{−17} | 1.5 × 10^{−16} | 3.4 × 10^{−15} | 1.7 × 10^{−15} |
| | PSO | 7.5 × 10^{5} | 1.7 × 10^{9} | 9.1 × 10^{3} | 6.9 × 10^{3} | 1.0 × 10^{3} | 1.4 × 10^{2} | 5.9 × 10^{11} | 3.6 × 10^{8} | 1.0 × 10^{0} | 6.4 × 10^{−7} | 2.1 × 10^{9} | 2.4 × 10^{0} | 3.3 × 10^{4} | 1.9 × 10^{5} |
| | SA | 3.2 × 10^{6} | 1.3 × 10^{10} | 1.6 × 10^{4} | 3.0 × 10^{4} | 2.6 × 10^{3} | 2.6 × 10^{2} | 1.0 × 10^{13} | 2.6 × 10^{9} | 1.0 × 10^{0} | 4.5 × 10^{−7} | 1.8 × 10^{10} | 6.5 × 10^{−1} | 5.7 × 10^{4} | 3.4 × 10^{4} |
| | GA | 9.2 × 10^{2} | 3.1 × 10^{5} | 8.9 × 10^{3} | 1.2 × 10^{0} | 5.6 × 10^{2} | 3.2 × 10^{2} | 8.5 × 10^{5} | 5.0 × 10^{6} | 1.0 × 10^{0} | 7.7 × 10^{−13} | 2.5 × 10^{3} | 4.0 × 10^{−2} | 8.8 × 10^{4} | 3.5 × 10^{4} |
| | HS | 2.2 × 10^{6} | 8.6 × 10^{9} | 1.4 × 10^{4} | 1.9 × 10^{4} | 2.1 × 10^{3} | 3.7 × 10^{2} | 4.7 × 10^{12} | 2.0 × 10^{9} | 1.0 × 10^{0} | 1.9 × 10^{−7} | 1.1 × 10^{10} | 1.2 × 10^{−2} | 1.0 × 10^{5} | 1.1 × 10^{5} |
| Standard deviation | GTA | 5.3 × 10^{−15} | 9.1 × 10^{−1} | 0.0 × 10^{0} | 3.4 × 10^{−15} | 9.3 × 10^{−14} | 1.3 × 10^{−14} | 2.9 × 10^{−17} | 4.3 × 10^{−6} | 0.0 × 10^{0} | 3.2 × 10^{−6} | 6.5 × 10^{−17} | 5.3 × 10^{−16} | 8.7 × 10^{−15} | 4.4 × 10^{−15} |
| | PSO | 5.1 × 10^{4} | 1.9 × 10^{8} | 3.2 × 10^{2} | 5.1 × 10^{2} | 4.5 × 10^{1} | 9.7 × 10^{0} | 9.4 × 10^{10} | 4.7 × 10^{7} | 3.0 × 10^{−38} | 1.7 × 10^{−6} | 2.8 × 10^{8} | 4.6 × 10^{−1} | 2.7 × 10^{3} | 1.2 × 10^{6} |
| | SA | 8.9 × 10^{4} | 5.4 × 10^{8} | 4.4 × 10^{2} | 8.1 × 10^{2} | 8.6 × 10^{1} | 2.3 × 10^{1} | 5.8 × 10^{11} | 1.5 × 10^{8} | 1.4 × 10^{−36} | 2.6 × 10^{−6} | 7.7 × 10^{8} | 3.9 × 10^{−1} | 5.2 × 10^{3} | 1.3 × 10^{3} |
| | GA | 4.4 × 10^{1} | 2.6 × 10^{4} | 2.6 × 10^{2} | 3.8 × 10^{−2} | 1.7 × 10^{1} | 1.2 × 10^{1} | 7.4 × 10^{4} | 5.7 × 10^{5} | 2.1 × 10^{−39} | 5.3 × 10^{−12} | 2.9 × 10^{2} | 3.7 × 10^{−2} | 4.5 × 10^{3} | 3.3 × 10^{4} |
| | HS | 3.7 × 10^{4} | 2.4 × 10^{8} | 1.6 × 10^{2} | 4.1 × 10^{2} | 3.2 × 10^{1} | 7.2 × 10^{0} | 1.9 × 10^{11} | 6.2 × 10^{7} | 3.0 × 10^{−49} | 6.8 × 10^{−7} | 2.7 × 10^{8} | 6.2 × 10^{−3} | 2.5 × 10^{3} | 3.8 × 10^{5} |
| Success rate | GTA | 100 | 0 | 100 | 100 | 100 | 100 | 100 | 0 | 100 | 12 | 100 | 100 | 100 | 100 |
| | PSO | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 18 | 0 | 0 | 0 | 0 |
| | SA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20 | 0 | 0 | 0 | 0 |
| | GA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 | 0 | 0 | 0 | 0 |
| | HS | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 31 | 0 | 0 | 0 | 0 |
| $OF$ evaluations | GTA | 13,401 | 11,605 | 10,920 | 12,686 | 19,352 | 11,234 | 9983 | 14,314 | 10,764 | 1701 | 9273 | 6757 | 12,573 | 11,933 |
| | PSO | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 2200 | 3874 | 50,100 | 14,016 | 50,100 | 8064 |
| | SA | 5562 | 5132 | 4272 | 8002 | 5142 | 5632 | 5532 | 5002 | 8342 | 7622 | 5352 | 6972 | 4002 | 3032 |
| | GA | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 50,100 | 48,817 |
| | HS | 19,956 | 18,340 | 20,824 | 20,365 | 19,457 | 19,251 | 20,315 | 17,935 | 5010 | 8965 | 19,430 | 19,385 | 18,770 | 8976 |
| Score | GTA | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | PSO | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | SA | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | GA | 0.000 | 0.003 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 1.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| | HS | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |

| Comparison | GTA vs PSO | GTA vs SA | GTA vs GA | GTA vs HS |
|---|---|---|---|---|
| GTA [%] | 95.65 | 94.96 | 92.86 | 94.54 |
| Algorithm [%] | 4.12 | 4.74 | 6.29 | 5.06 |
| Equal [%] | 0.24 | 0.31 | 0.86 | 0.39 |
| $p$-value | 0 | 0 | 0 | 0 |
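The percentage rows above are a head-to-head tally over paired runs, while the $p$-values come from a Wilcoxon signed-rank test [48]. The tally itself can be sketched as follows (the function name and tolerance are illustrative assumptions):

```python
def pairwise_comparison(gta_results, other_results, tol=1e-12):
    """Head-to-head tally over paired runs: percentage of pairs where GTA
    found the lower OF value, where the other algorithm did, and where
    the two results were equal within `tol` (lower is better)."""
    wins = ties = losses = 0
    for a, b in zip(gta_results, other_results):
        if abs(a - b) <= tol:
            ties += 1
        elif a < b:
            wins += 1
        else:
            losses += 1
    n = len(gta_results)
    return 100.0 * wins / n, 100.0 * losses / n, 100.0 * ties / n
```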

| Function | Best | Mean $OF$ | Standard Deviation | Success Rate | $OF$ Evaluations |
|---|---|---|---|---|---|
| Sphere | 2.14 × 10^{−18} | 1.88 × 10^{−15} | 3.73 × 10^{−15} | 100 | 14,328 |
| Rosenbrock | 1.99 × 10^{4} | 2.00 × 10^{4} | 2.53 × 10^{1} | 0 | 11,195 |
| Rastrigin | 0.00 × 10^{0} | 0.00 × 10^{0} | 0.00 × 10^{0} | 100 | 10,488 |
| Griewank | 0.00 × 10^{0} | 1.34 × 10^{−15} | 2.51 × 10^{−15} | 100 | 13,080 |
| Alpine | 2.44 × 10^{−15} | 7.19 × 10^{−14} | 7.22 × 10^{−14} | 100 | 20,593 |
| Brown | 1.84 × 10^{−18} | 2.29 × 10^{−15} | 5.34 × 10^{−15} | 100 | 12,007 |
| Chung Reynolds | 1.69 × 10^{−24} | 2.52 × 10^{−17} | 1.32 × 10^{−16} | 100 | 10,947 |
| Dixon Price | 1.00 × 10^{0} | 1.00 × 10^{0} | 1.20 × 10^{−8} | 0 | 16,782 |
| Exponential | 0.00 × 10^{0} | 7.40 × 10^{−1} | 4.41 × 10^{−1} | 26 | 4132 |
| Salomon | 2.96 × 10^{−10} | 7.95 × 10^{−7} | 1.76 × 10^{−6} | 12 | 1518 |
| Schumer Steiglitz | 1.20 × 10^{−22} | 1.00 × 10^{−17} | 3.86 × 10^{−17} | 100 | 9657 |
| Sum of Powers | 3.09 × 10^{−21} | 1.16 × 10^{−3} | 1.02 × 10^{−2} | 97 | 8052 |
| Sum of Squares | 1.83 × 10^{−18} | 2.46 × 10^{−15} | 6.20 × 10^{−15} | 100 | 14,238 |
| Zakharov | 5.92 × 10^{−19} | 2.11 × 10^{−15} | 4.47 × 10^{−15} | 100 | 13,075 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Meirelles, G.; Brentan, B.; Izquierdo, J.; Luvizotto, E., Jr.
Grand Tour Algorithm: Novel Swarm-Based Optimization for High-Dimensional Problems. *Processes* **2020**, *8*, 980.
https://doi.org/10.3390/pr8080980
