# Parallel Improvements of the Jaya Optimization Algorithm


## Abstract


## 1. Introduction

## 2. The Jaya Algorithm

## 3. Parallel Approaches

| Algorithm 1 Skeleton of the Jaya algorithm | |
|---|---|
| 1: | Define function to minimize |
| 2: | Set $Runs$ parameter |
| 3: | Set $Iterations$ parameter |
| 4: | Set $PopulationSize$ parameter |
| 5: | for $l=1$ to $Runs$ do |
| 6: | Create New Population: |
| 7: | for $i=1$ to $PopulationSize$ do |
| 8: | for $j=1$ to $VARS$ do |
| 9: | Obtain 2 random numbers |
| 10: | Compute the design variable of the new member $Member_j^i$ {using Equation (1)} |
| 11: | if $Member_j^i < MinValue$ then |
| 12: | $Member_j^i = MinValue$ |
| 13: | end if |
| 14: | if $Member_j^i > MaxValue$ then |
| 15: | $Member_j^i = MaxValue$ |
| 16: | end if |
| 17: | end for |
| 18: | Compute and store $F(Member^i)$ {Function evaluation} |
| 19: | end for |
| 20: | for $i=1$ to $Iterations$ do |
| 21: | Update Population |
| 22: | end for |
| 23: | Store Solution |
| 24: | Delete Population |
| 25: | end for |
| 26: | Obtain Best Solution and Statistical Data |

The $Iterations$ population updates are distributed among the $c$ available processes ($r = 1, 2, \dots, c$). In Algorithm 3, $\sum_{r=1}^{c} Iterations_r = Iterations$ must be satisfied, where $Iterations_r$ is the number of population updates performed by process $r$; because a dynamic scheduling strategy is used, the number of population updates per thread is not fixed. Since this algorithm is designed for shared memory platforms, all solutions are stored in memory using OpenMP. Consequently, after the parallel computation, the sequential thread (or process) obtains the best global solution and computes statistical values over all solutions obtained. As mentioned above, the number of iterations performed by each thread is not fixed: it depends on the computational load assigned to each core in each particular execution, and automatic load balancing is achieved through the dynamic scheduling strategy of OpenMP parallel loops. It should be noted that the total number of function evaluations remains unchanged.

| Algorithm 2 Update Population function of the Jaya algorithm | |
|---|---|
| 1: | Update Population: |
| 2: | { |
| 3: | {Obtain the current best and worst solution} |
| 4: | $F^{worst} = F(Mem^1)$ |
| 5: | $Index^{worst} = 1$ |
| 6: | $F^{best} = F(Mem^1)$ |
| 7: | $Index^{best} = 1$ |
| 8: | for $k=2$ to $PopulationSize$ do |
| 9: | if $F^{best} > F(Mem^k)$ then |
| 10: | $Index^{best} = k$ |
| 11: | $F^{best} = F(Mem^k)$ |
| 12: | end if |
| 13: | if $F^{worst} < F(Mem^k)$ then |
| 14: | $Index^{worst} = k$ |
| 15: | $F^{worst} = F(Mem^k)$ |
| 16: | end if |
| 17: | end for |
| 18: | for $k=1$ to $PopulationSize$ do |
| 19: | $old = k$ |
| 20: | for $j=1$ to $VARS$ do |
| 21: | Obtain 2 random numbers |
| 22: | Compute the design variable of the new member $NewM_j$ {using Equation (1)} |
| 23: | Check the bounds of $NewM_j$ |
| 24: | end for |
| 25: | Compute $F(NewM)$ {Function evaluation} |
| 26: | if $F(NewM) < F(Mem^{old})$ then |
| 27: | {Replace solution} |
| 28: | for $j=1$ to $VARS$ do |
| 29: | $Mem_j^{old} = NewM_j$ |
| 30: | end for |
| 31: | end if |
| 32: | end for |
| 33: | {Search for current best and worst solution as in Lines 4–17} |
| 34: | } |
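Algorithms 1 and 2 can be condensed into a short sequential sketch. The Python below is illustrative, not the authors' C implementation; the function and parameter names are ours, and the update rule is the standard Jaya equation from Rao's original paper (Equation (1) here): $x' = x + r_1(x_{best} - |x|) - r_2(x_{worst} - |x|)$, with the result clamped to the variable bounds.

```python
import random

def jaya_minimize(f, bounds, pop_size=20, iterations=200, seed=0):
    """Minimal sequential Jaya sketch (illustrative names, not the paper's code).

    `f` maps a list of design variables to a scalar; `bounds` is a list of
    (MinValue, MaxValue) pairs, one per design variable.
    """
    rng = random.Random(seed)
    # Create New Population (Lines 7-19 of Algorithm 1).
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(m) for m in pop]
    for _ in range(iterations):
        # Obtain the current best and worst solutions (Lines 4-17 of Algorithm 2).
        best = list(pop[min(range(pop_size), key=fit.__getitem__)])
        worst = list(pop[max(range(pop_size), key=fit.__getitem__)])
        for k in range(pop_size):
            new_m = []
            for j, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()  # obtain 2 random numbers
                x = pop[k][j]
                # Standard Jaya update (Equation (1)), then bound check.
                v = x + r1 * (best[j] - abs(x)) - r2 * (worst[j] - abs(x))
                new_m.append(min(max(v, lo), hi))
            f_new = f(new_m)                         # function evaluation
            if f_new < fit[k]:                       # greedy replacement
                pop[k], fit[k] = new_m, f_new
    return min(fit)
```

Running it on the 2-variable Sphere function (F1 of Table 1, restricted to two variables for speed) drives the best fitness close to the optimum of 0.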

| Algorithm 3 Skeleton of the shared memory parallel Jaya algorithm | |
|---|---|
| 1: | for $l=1$ to $Runs$ do |
| 2: | Parallel region: |
| 3: | { |
| 4: | Create New Population {Lines 7–19 of Algorithm 1} |
| 5: | parallel for $i=1$ to $Iterations$ do |
| 6: | Update Population |
| 7: | end for |
| 8: | Store Solution |
| 9: | Delete Population |
| 10: | } |
| 11: | end for |
| 12: | Sequential thread: |
| 13: | Obtain Best Solution and Statistical Data |
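The dynamic scheduling behind the parallel loop of Algorithm 3 can be sketched as follows. This is a hedged model of the bookkeeping only (the paper uses OpenMP's dynamic scheduling in C, not Python threads): each thread repeatedly grabs a small chunk of the iteration space, so $Iterations_r$ varies from execution to execution while the invariant $\sum_{r=1}^{c} Iterations_r = Iterations$ always holds.

```python
import threading

def dynamic_schedule(total_iterations, n_threads, chunk=8):
    """Model of dynamic loop scheduling: returns Iterations_r per thread."""
    next_iter = 0
    lock = threading.Lock()
    done = [0] * n_threads              # Iterations_r, one counter per thread

    def worker(r):
        nonlocal next_iter
        while True:
            with lock:                  # emulate the runtime's chunk queue
                if next_iter >= total_iterations:
                    return
                start = next_iter
                end = min(start + chunk, total_iterations)
                next_iter = end
            # Each grabbed iteration would perform one Update Population call.
            done[r] += end - start

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

With 6 threads and 30,000 iterations the per-thread counts differ between runs, but they always sum to 30,000, so the total number of function evaluations is unchanged.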

The $Runs$ independent runs are distributed among the $p$ available processes, taking into account, however, that the work cannot be distributed statically. This high-level parallel algorithm is designed for distributed memory platforms using MPI: on the one hand, we must develop a load balancing procedure; on the other hand, a final data gathering process (collecting data from all processes) must be performed.

| Algorithm 4 Update Population function of the shared memory parallel Jaya algorithm | |
|---|---|
| 1: | Update Population: |
| 2: | { |
| 3: | Flush operation over population and best and worst indices |
| 4: | for $k=1$ to $PopulationSize$ do |
| 5: | $old = k$ |
| 6: | for $j=1$ to $VARS$ do |
| 7: | Obtain 2 random numbers |
| 8: | Compute the design variable of the new member $NewM_j$ |
| 9: | if $NewM_j < MinValue$ then |
| 10: | $NewM_j = MinValue$ |
| 11: | end if |
| 12: | if $NewM_j > MaxValue$ then |
| 13: | $NewM_j = MaxValue$ |
| 14: | end if |
| 15: | end for |
| 16: | Compute $F(NewM)$ {Function evaluation} |
| 17: | if $F(NewM) < F(Mem^{old})$ then |
| 18: | Critical section to: |
| 19: | { |
| 20: | {Replace solution} |
| 21: | for $j=1$ to $VARS$ do |
| 22: | $Mem_j^{old} = NewM_j$ |
| 23: | end for |
| 24: | } |
| 25: | if $F(NewM) < F(Mem^{best})$ then |
| 26: | Critical section to: |
| 27: | $Index^{best} = k$ {$Mem^{best} = NewM$} |
| 28: | end if |
| 29: | if $Index^{worst} == k$ then |
| 30: | $FlagUpdateWorst = 1$ |
| 31: | end if |
| 32: | end if |
| 33: | end for |
| 34: | if $FlagUpdateWorst == 1$ then |
| 35: | Flush operation over population |
| 36: | $FlagUpdateWorst = 0$ |
| 37: | $F(Temp^{worst}) = F(Mem^1)$ |
| 38: | $IndexTemp^{worst} = 1$ |
| 39: | for $k=2$ to $PopulationSize$ do |
| 40: | if $F(Temp^{worst}) < F(Mem^k)$ then |
| 41: | $IndexTemp^{worst} = k$ |
| 42: | $F(Temp^{worst}) = F(Mem^k)$ |
| 43: | end if |
| 44: | end for |
| 45: | Critical section to: |
| 46: | $Index^{worst} = IndexTemp^{worst}$ {$Mem^{worst} = Mem^{IndexTemp^{worst}}$} |
| 47: | end if |
| 48: | } |
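The critical sections of Algorithm 4 protect a read-compare-write on a shared best record: the comparison and the replacement must appear atomic to the other threads. A minimal sketch of that pattern, using Python's `threading.Lock` in place of OpenMP's critical construct (the function and variable names are ours):

```python
import threading

def parallel_best_tracking(values, n_threads=4):
    """Threads scan disjoint chunks; a lock guards the shared best record."""
    best = {"value": float("inf"), "index": None}
    lock = threading.Lock()

    def worker(offset, chunk):
        for i, v in enumerate(chunk, start=offset):
            # The (unsynchronised) function evaluation would happen here.
            with lock:          # critical section: atomic read-compare-write
                if v < best["value"]:
                    best["value"] = v
                    best["index"] = i

    step = -(-len(values) // n_threads)   # ceiling division
    threads = [threading.Thread(target=worker, args=(s, values[s:s + step]))
               for s in range(0, len(values), step)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best
```

Without the lock, two threads could both pass the comparison and then overwrite each other's index/value pair, leaving the record inconsistent; this is exactly what the critical sections in Lines 18, 26 and 45 of Algorithm 4 prevent.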

| Algorithm 5 Hybrid parallel Jaya algorithm for distributed shared memory platforms | |
|---|---|
| 1: | Define function to minimize |
| 2: | Set $Iterations$ parameter (input parameter) |
| 3: | Set population size (input parameter) |
| 4: | Obtain the number of distributed memory worker processes $p$ (input parameter) |
| 5: | if this is the work dispatcher process then |
| 6: | for $l=1$ to $Runs$ do |
| 7: | Receive idle worker process signal |
| 8: | Send independent execution signal |
| 9: | end for |
| 10: | for $l=1$ to $p$ do |
| 11: | Receive idle worker process signal |
| 12: | Send end of work signal |
| 13: | end for |
| 14: | else |
| 15: | while true do |
| 16: | Send idle worker process signal to dispatcher process |
| 17: | if received signal is equal to end of work signal then |
| 18: | Break while |
| 19: | else |
| 20: | Obtain the number of shared memory processes $c$ |
| 21: | Compute 1 run of the shared memory parallel Jaya algorithm |
| 22: | Store Solution |
| 23: | end if |
| 24: | end while |
| 25: | end if |
| 26: | Perform a gather operation to collect all the solutions |
| 27: | Sequential thread of the root process: |
| 28: | Obtain Best Solution and Statistical Data |
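The dispatcher/worker handshake of Algorithm 5 can be modelled without MPI: a blocking task queue plays the role of the idle-worker signal, and a `None` sentinel plays the role of the end-of-work signal. A sketch under those assumptions (the paper implements this with MPI point-to-point messages in C; `one_run` stands in for one run of the shared memory parallel Jaya algorithm):

```python
import queue
import threading

def run_dispatched(runs, n_workers, one_run):
    """Dispatch `runs` independent executions to `n_workers` workers."""
    tasks = queue.Queue()
    results = []
    res_lock = threading.Lock()

    for l in range(runs):
        tasks.put(l)             # one independent-execution token per run
    for _ in range(n_workers):
        tasks.put(None)          # end-of-work signal, one per worker

    def worker():
        while True:
            token = tasks.get()  # blocking get = "send idle signal, wait"
            if token is None:
                break            # end of work received
            sol = one_run(token)
            with res_lock:
                results.append(sol)   # stored for the final gather

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return min(results)          # the root obtains the best solution
```

Because workers pull runs on demand, a slow run does not stall the others, which is the load balancing property the dispatcher scheme is designed for.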

## 4. Results and Discussion

## 5. Conclusions

## Author Contributions

## Acknowledgments

## Conflicts of Interest

## References


**Figure 2.** Shared memory parallel Jaya algorithm. Iterations = 30,000, Population = 512. (**a**) Speed-up with respect to the sequential execution. (**b**) Efficiency of the parallel algorithm.

**Figure 4.** Efficiency of the shared memory parallel Jaya algorithm. OpenMP processes = 6. Iterations = 30,000.

**Figure 5.** Hybrid parallel Jaya algorithm. Iterations = 30,000. Population = 256. Runs = 30. (**a**) Efficiency of the parallel algorithm. (**b**) Speed-up with respect to the sequential execution.

ID. | Function | VAR |
---|---|---|
F1 | Sphere | 30 |
F2 | SumSquares | 30 |
F3 | Beale | 5 |
F4 | Easom | 2 |
F5 | Matyas | 2 |
F6 | Colville | 4 |
F7 | Trid 6 | 6 |
F8 | Trid 10 | 10 |
F9 | Zakharov | 10 |
F10 | Schwefel 1.2 | 30 |
F11 | Rosenbrock | 30 |
F12 | Dixon-Price | 30 |
F13 | Foxholes | 2 |
F14 | Branin | 2 |
F15 | Bohachevsky 1 | 2 |
F16 | Booth | 2 |
F17 | Michalewicz 2 | 2 |
F18 | Michalewicz 5 | 5 |
F19 | Bohachevsky 2 | 2 |
F20 | Bohachevsky 3 | 2 |
F21 | GoldStein-Price | 2 |
F22 | Perm | 4 |
F23 | Hartman 3 | 3 |
F24 | Ackley | 30 |
F25 | Penalized 2 | 30 |
F26 | Langerman 2 | 2 |
F27 | Langerman 5 | 5 |
F28 | Langerman 10 | 10 |
F29 | FletcherPowell 5 | 5 |
F30 | FletcherPowell 10 | 10 |

**Table 2.** Sequential and parallel Jaya results. Population = 256. Iterations = 30,000. 10 MPI processes. 6 OpenMP processes. Runs = 30.

Function | Sequential Time (s) | Parallel Time (s) | Speed-Up | Efficiency |
---|---|---|---|---|
F1 | 126.0 | 2.58 | 48.9 | 81% |
F2 | 129.9 | 2.67 | 48.6 | 81% |
F3 | 108.5 | 2.02 | 53.7 | 90% |
F4 | 26.7 | 0.50 | 53.7 | 90% |
F5 | 70.2 | 1.42 | 49.3 | 82% |
F6 | 18.4 | 0.41 | 44.6 | 74% |
F7 | 26.6 | 0.53 | 50.0 | 83% |
F8 | 44.7 | 0.87 | 51.6 | 86% |
F9 | 67.7 | 1.55 | 43.7 | 73% |
F10 | 254.9 | 4.85 | 52.6 | 88% |
F11 | 131.9 | 2.46 | 53.7 | 90% |
F12 | 132.4 | 2.44 | 54.2 | 90% |
F13 | 999.9 | 17.59 | 56.8 | 95% |
F14 | 16.3 | 0.33 | 49.9 | 83% |
F15 | 17.3 | 0.34 | 51.0 | 85% |
F16 | 9.4 | 0.23 | 41.4 | 69% |
F17 | 54.9 | 1.05 | 52.5 | 87% |
F18 | 171.8 | 3.15 | 54.5 | 91% |
F19 | 12.4 | 0.26 | 48.2 | 80% |
F20 | 16.2 | 0.32 | 51.1 | 85% |
F21 | 12.0 | 0.27 | 44.3 | 74% |
F22 | 330.3 | 5.74 | 57.6 | 96% |
F23 | 45.5 | 0.81 | 56.1 | 94% |
F24 | 465.6 | 8.21 | 56.7 | 95% |
F25 | 583.8 | 10.25 | 56.9 | 95% |
F26 | 82.9 | 1.54 | 53.7 | 90% |
F27 | 474.4 | 8.57 | 55.3 | 92% |
F28 | 1999.5 | 34.80 | 57.5 | 96% |
F29 | 362.1 | 6.44 | 56.2 | 94% |
F30 | 1471.8 | 25.81 | 57.0 | 95% |
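The last two columns of Table 2 follow directly from the timings: speed-up is $T_{seq}/T_{par}$, and efficiency divides speed-up by the 60 cores used (10 MPI processes × 6 OpenMP processes). Recomputing from the rounded times in the table reproduces the reported values to within rounding; for the F1 row:

```python
def speedup_efficiency(t_seq, t_par, n_cores):
    """Speed-up S = T_seq / T_par and parallel efficiency E = S / n_cores."""
    s = t_seq / t_par
    return s, s / n_cores

# F1 row of Table 2: 126.0 s sequential, 2.58 s parallel on 10 * 6 = 60 cores.
s, e = speedup_efficiency(126.0, 2.58, 60)
```

This yields a speed-up of about 48.8 and an efficiency of about 81%, matching the table entry (the table's 48.9 was presumably computed from unrounded timings).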

**Table 3.** Sequential and parallel Jaya results. Population = 256. Iterations = 30,000. 10 MPI processes. 2 OpenMP processes. Runs = 30.

Function | Sequential Time (s) | Parallel Time (s) | Speed-Up | Efficiency |
---|---|---|---|---|
F6 | 18.5 | 0.97 | 19.2 | 96% |
F7 | 24.4 | 1.46 | 16.7 | 84% |
F14 | 16.7 | 0.89 | 18.8 | 94% |
F15 | 17.2 | 0.88 | 19.6 | 98% |
F16 | 9.0 | 0.55 | 16.3 | 82% |
F19 | 12.0 | 0.64 | 18.8 | 94% |
F20 | 15.9 | 0.87 | 18.2 | 91% |
F21 | 11.9 | 0.61 | 19.5 | 98% |

**Table 4.** Sequential and parallel Jaya solutions. Population = 64. Iterations = 1000. 5 MPI processes. Runs = 30.

Function | Optimum | OpenMP Processes | Best Parallel | Best Sequential | OpenMP Processes | Best Parallel | Best Sequential |
---|---|---|---|---|---|---|---|
F1 | 0.00000 | 2 | 0.00030 | 0.00163 | 6 | 0.00096 | 0.00233 |
F2 | 0.00000 | 2 | 0.00006 | 0.00018 | 6 | 0.00019 | 0.00048 |
F3 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F4 | −1.00000 | 2 | −1.00000 | −1.00000 | 6 | −1.00000 | −1.00000 |
F5 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F6 | 0.00000 | 2 | 0.00008 | 0.00021 | 6 | 0.00022 | 0.00018 |
F7 | −50.00000 | 2 | −50.00000 | −50.00000 | 6 | −50.00000 | −50.00000 |
F8 | −210.00000 | 2 | −210.00000 | −210.00000 | 6 | −210.00000 | −210.00000 |
F9 | 0.00000 | 2 | 0.00017 | 0.00027 | 6 | 0.00003 | 0.00036 |
F10 | 0.00000 | 2 | 0.00002 | 0.00033 | 6 | 0.00026 | 0.00054 |
F11 | 0.00000 | 2 | 7.71880 | 24.31500 | 6 | 13.37600 | 33.43700 |
F12 | 0.00000 | 2 | 0.67569 | 0.69369 | 6 | 0.67506 | 0.72603 |
F13 | 0.99800 | 2 | 357.46000 | 498.07000 | 6 | 10.73000 | 200.70000 |
F14 | 0.39800 | 2 | 0.39789 | 0.39789 | 6 | 0.39789 | 0.39789 |
F15 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F16 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F17 | −1.80130 | 2 | −1.80130 | −1.80130 | 6 | −1.80130 | −1.80130 |
F18 | −4.68770 | 2 | −4.68770 | −4.68770 | 6 | −4.68770 | −4.68770 |
F19 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F20 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F21 | 3.00000 | 2 | 3.00000 | 3.00000 | 6 | 3.00000 | 3.00000 |
F22 | 0.00000 | 2 | 0.00343 | 0.00479 | 6 | 0.00175 | 0.00271 |
F23 | −3.86000 | 2 | −3.86280 | −3.86280 | 6 | −3.86280 | −3.86280 |
F24 | 0.00000 | 2 | 0.00683 | 0.02936 | 6 | 0.02452 | 0.04375 |
F25 | 0.00000 | 2 | 0.00018 | 0.00107 | 6 | 0.00025 | 0.00113 |
F26 | −4.15580 | 2 | −4.15580 | −4.15580 | 6 | −4.15580 | −4.15580 |
F27 | −3.34260 | 2 | −3.31040 | −3.30630 | 6 | −3.31760 | −3.30780 |
F28 | −3.15430 | 2 | −3.06370 | −3.06080 | 6 | −3.08460 | −3.04290 |
F29 | 0.00000 | 2 | 0.00001 | 0.00005 | 6 | 0.00002 | 0.00003 |
F30 | 0.00000 | 2 | 0.00003 | 0.00086 | 6 | 0.00025 | 0.00098 |

**Table 5.** Sequential and parallel Jaya solutions. Population = 64. Iterations = 3000. 5 MPI processes. Runs = 30.

Function | Optimum | OpenMP Processes | Best Parallel | Best Sequential | OpenMP Processes | Best Parallel | Best Sequential |
---|---|---|---|---|---|---|---|
F1 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F2 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F3 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F4 | −1.00000 | 2 | −1.00000 | −1.00000 | 6 | −1.00000 | −1.00000 |
F5 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F6 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F7 | −50.00000 | 2 | −50.00000 | −50.00000 | 6 | −50.00000 | −50.00000 |
F8 | −210.00000 | 2 | −210.00000 | −210.00000 | 6 | −210.00000 | −210.00000 |
F9 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F10 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F11 | 0.00000 | 2 | 0.00010 | 0.00751 | 6 | 0.04423 | 0.07421 |
F12 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F13 | 0.99800 | 2 | 12.67200 | 25.28900 | 6 | 1.03040 | 36.41000 |
F14 | 0.39800 | 2 | 0.39789 | 0.39789 | 6 | 0.39789 | 0.39789 |
F15 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F16 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F17 | −1.80130 | 2 | −1.80130 | −1.80130 | 6 | −1.80130 | −1.80130 |
F18 | −4.68770 | 2 | −4.68770 | −4.68770 | 6 | −4.68770 | −4.68770 |
F19 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F20 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F21 | 3.00000 | 2 | 3.00000 | 3.00000 | 6 | 3.00000 | 3.00000 |
F22 | 0.00000 | 2 | 0.00063 | 0.00206 | 6 | 0.00076 | 0.00141 |
F23 | −3.86000 | 2 | −3.86280 | −3.86280 | 6 | −3.86280 | −3.86280 |
F24 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F25 | 0.00000 | 2 | 0.00000 | 0.00000 | 6 | 0.00000 | 0.00000 |
F26 | −4.15580 | 2 | −4.15580 | −4.15580 | 6 | −4.15580 | −4.15580 |
F27 | −3.34260 | 2 | −3.34260 | −3.34260 | 6 | −3.34260 | −3.34260 |
F28 | −3.15430 | 2 | −3.15340 | −3.15340 | 6 | −3.15240 | −3.15280 |
F29 | 0.00000 | 2 | 0.00008 | 0.00014 | 6 | 0.00000 | 0.00001 |
F30 | 0.00000 | 2 | 0.00000 | 0.00023 | 6 | 0.00033 | 0.00050 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Migallón, H.; Jimeno-Morenilla, A.; Sanchez-Romero, J.-L.
Parallel Improvements of the Jaya Optimization Algorithm. *Appl. Sci.* **2018**, *8*, 819.
https://doi.org/10.3390/app8050819
