# Execution Time Decrease for Controllers Based on Adaptive Particle Swarm Optimization


## Abstract


## 1. Introduction

- RHC is a control structure that uses a process model (PM).

- The OCP’s solution is a closed-loop solution, not a sequence of control output values. The entire approach may be described by the 4-tuple RHC–Controller–Predictor–Metaheuristic.
- The closed-loop has an RHC structure, which includes a PM.
- An optimization module uses a metaheuristic algorithm to make predictions.
- The metaheuristics use a quasi-optimal control profile determined offline, before controlling the real system. The control structure employs this control profile to shrink the control ranges within which the control outputs take values.
- The controller adapts the control output ranges at each sampling period, and calls the prediction module (Predictor) so that the Predictor will be more efficient in finding the best prediction.
- The present work’s novelties beyond those of the paper [26] are mentioned hereafter.
- The controller sets the control ranges resulting from the control profile and adapts them to the prediction moment. In addition, the Predictor tunes the control ranges; i.e., it adjusts the length of the intervals within which the control outputs take values. The tuning progressively enlarges the control ranges so that the APSOA does not lose convergence when the control ranges are too small.

## 2. Control Horizon Discretization and Predictions

## 3. Predefined Control Profile

Each interval ${R}^{i}(k)$ together with the blue zone from Figure 1 forms a blue rectangle referred to as the control range.

**Remark 1:**

**Remark 2:**

${R}_{k}={R}^{1}(k)\times {R}^{2}(k)\times \cdots \times {R}^{m}(k).$

**Remark 3:**

## 4. Predictions Based on Adaptive Particle Swarm Optimization Algorithm

#### 4.1. Process Model, Constraints, and Performance Index

#### 4.1.1. Process Model

$q(t)$ — the incident light intensity (µmol·m^{−2}·s^{−1});

#### 4.1.2. Constraints

${t}_{0}$ and ${t}_{f}$ denote the initial and final moments of the control horizon: ${t}_{0}\le t\le {t}_{f}$, where ${t}_{0}=0$ and ${t}_{f}=120$ h.

${q}_{m}\le q(t)\le {q}_{M}$, with ${t}_{0}\le t\le {t}_{f}$.

${m}_{0}$ is the minimal newly produced biomass. Equivalently, it holds

#### 4.1.3. Performance Index

#### 4.2. Prediction-Based Controller Structure

## 5. Execution Time Decrease

#### 5.1. Implementation of the Predefined Control Profile

- mode = 1: the control ranges are not used;
- mode = 2: the closed-loop control adapts the control ranges;
- mode = 3: the closed-loop control adapts and tunes the control ranges.

${R}_{k}={R}^{1}(k)=[\mathrm{xm}(k)\ \mathrm{xM}(k)],\ k=0,\dots ,H-1.$

- It keeps all the parts described in [23] concerning the adaptive behavior and movement equations. These parts are recalled generically (marked with the character #) to simplify the presentation; their details can be found in [23].
- The algorithm’s new parts focus on the implementation of execution time decrease. Hence, the pseudo-code details the actions implementing the three modes of using control ranges.

```
Function APSOA(k, h, x0, xmh, xMh, mode)
/* Input parameters: k — the current discrete moment; x0 — the current state
   (biomass concentration); h — the predicted sequence length (a particle's
   position X_i has h elements); xmh — vector with h elements, the minimum
   value of each control range; xMh — vector with h elements, the maximum
   value of each control range. */
  # General initializations  /* space reservation for each particle */
  if (mode = 3)  /* CR and tuning */
    # initialize step0 and Δstep
    # initialize a  /* e.g., 1/3, 1/4, 1/5, … */
    Δm(i) ← a·(xmh(i) − qmin), i = 1,…,h   /* decrement step */
    ΔM(i) ← a·(qmax − xMh(i)), i = 1,…,h   /* increment step */
  end
  # Set the particles' initial velocities, v(i,d), and positions x(i,d), i = 1,…,N; d = 1,…,h.
  # For each particle, compute the best performance using the EvalFitnessJ function.
  # Determine the position, Pgbest, and the value, GBEST, of the global best particle.
  found ← 0   /* found = 1 indicates the convergence of the algorithm */
  step ← 1
  while (step <= stepM) and (found = 0)
    /* stepM is the accepted maximum number of steps until convergence. */
    # Modify the coefficients that adapt the particles' speed.
    if (mode = 3) and (step >= step0)   /* tuning of the control ranges */
      for i = 1:h
        xmh(i) ← xmh(i) − Δm(i)
        if (xmh(i) < qmin) then xmh(i) ← qmin end
        xMh(i) ← xMh(i) + ΔM(i)
        if (xMh(i) > qmax) then xMh(i) ← qmax end
      end
      step0 ← step0 + Δstep
    end
    for i = 1,…,N
      # Compute the best local performance of particle i.
      for d = 1,…,h
        # Update the particle's speed
        # Speed limitation
        # Update the particle's position
        if x(i,d) > xMh(d)
          x(i,d) ← xMh(d); v(i,d) ← −v(i,d)
        elseif x(i,d) < xmh(d)
          x(i,d) ← xmh(d); v(i,d) ← −v(i,d)
        end
      end /* for d */
      # Compute fitness(i) and update the best performance of particle i
      # Update Pgbest, GBEST, and found
    end /* for i */
    step ← step + 1
  end /* while */
  return Pgbest
end
```
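The tuning mechanism can be sketched in Python as follows. This is a simplified illustration, not the authors' MATLAB implementation: the adaptive coefficient schedule of the APSOA is replaced by fixed inertia/acceleration coefficients, the stop criterion is a plain step limit, and the quadratic fitness function, the swarm size, and all numeric constants are assumptions chosen for the example.

```python
import random

def apsoa(h, xmh, xMh, mode=3, qmin=50.0, qmax=2000.0,
          N=20, stepM=200, a=0.25, step0=5, dstep=5, fitness=None, seed=0):
    """PSO sketch with progressive widening of the control ranges (mode = 3)."""
    rng = random.Random(seed)
    xmh, xMh = list(xmh), list(xMh)
    # decrement/increment steps toward the technological limits
    dm = [a * (xmh[i] - qmin) for i in range(h)]
    dM = [a * (qmax - xMh[i]) for i in range(h)]
    # initial positions inside the control ranges; zero initial velocities
    X = [[rng.uniform(xmh[d], xMh[d]) for d in range(h)] for _ in range(N)]
    V = [[0.0] * h for _ in range(N)]
    P = [row[:] for row in X]                      # personal best positions
    pval = [fitness(row) for row in X]             # personal best values
    g = min(range(N), key=lambda i: pval[i])
    gbest, gval = P[g][:], pval[g]                 # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # fixed coefficients (assumed)
    for step in range(1, stepM + 1):
        if mode == 3 and step >= step0:            # tuning: widen the ranges
            for d in range(h):
                xmh[d] = max(xmh[d] - dm[d], qmin)
                xMh[d] = min(xMh[d] + dM[d], qmax)
            step0 += dstep
        for i in range(N):
            for d in range(h):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
                if X[i][d] > xMh[d]:               # saturate and reverse speed
                    X[i][d], V[i][d] = xMh[d], -V[i][d]
                elif X[i][d] < xmh[d]:
                    X[i][d], V[i][d] = xmh[d], -V[i][d]
            f = fitness(X[i])
            if f < pval[i]:
                pval[i], P[i] = f, X[i][:]
                if f < gval:
                    gval, gbest = f, X[i][:]
    return gbest, gval

# Toy usage: the optimum (1000 per component) lies outside the initial
# ranges [400, 600]; only tuning (mode = 3) lets the swarm reach it.
toy_fitness = lambda x: sum((xi - 1000.0) ** 2 for xi in x)
```

The point of the sketch is visible in the comparison: with mode = 1 the swarm stays confined to the initial ranges, while mode = 3 progressively widens them toward the technological limits and recovers convergence.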

#### 5.2. Tuning of Control Ranges

**Remark**

**4:**

## 6. Simulation Results and Discussion

- To implement the closed-loop structure using the algorithms mentioned above.
- To implement the three algorithms, CONTROLLER, PREDICTOR, and APSOA, to cope with the three modes of using the control profile.
- To confirm that the proposed technique works properly and decreases the execution time.

- The closed loop does not behave identically to the open loop. We recall that the reference CP can be regarded as an open-loop solution of the OCP at hand.
- The APSOA is a stochastic algorithm that finds quasi-optimal solutions not identical to the reference CP.

#### 6.1. Execution Time Evaluation

${n}_{1}$ and ${n}_{2}$ represent the durations of the elementary actions (assignments, tests, etc., except the objective function calls) included in the previously mentioned parts, expressed in time units. D is the duration of one objective function (EvalFitnessJ) execution, which in this context is practically constant. Nsteps denotes the number of steps until the stop criterion is fulfilled. Using the APSOA in our computation context has a particularity: the process integration needs a much greater execution time than the other parts of the algorithm. Because ${n}_{1},{n}_{2}\ll N\cdot D$, it holds:
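Since ${n}_{1},{n}_{2}\ll N\cdot D$, the execution time of one APSOA call is dominated by the objective-function calls, roughly Nsteps·N·D. A back-of-the-envelope sketch of this approximation, with illustrative values (the function name and all numbers below are assumptions, not taken from the paper):

```python
def apsoa_time_estimate(nsteps: int, n_particles: int, d_eval: float) -> float:
    """Approximate execution time of one APSOA call: Nsteps * N * D.
    Valid when the elementary actions are negligible (n1, n2 << N*D)."""
    return nsteps * n_particles * d_eval

# e.g., 40 steps, 20 particles, 0.05 s per EvalFitnessJ call -> 40.0 s
estimate = apsoa_time_estimate(40, 20, 0.05)
```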

#### 6.2. Simulation without Range Adaptation

#### 6.3. Simulation of Closed-Loop Working with Control Ranges Adaptation

**Remark**

**5:**

- The controller with mode = 2 works properly, as in mode = 1, but faster. All the constraints are fulfilled.
- The controller with control range adaptation decreased the execution time because Ncalls diminished by 24.5% compared to the controller with mode = 1.

#### 6.4. Simulation of Closed-Loop Working with Control Range Adaptation and Tuning

**Remark**

**6:**

- The controller with mode = 3 works properly, as in mode = 1, but faster. All the constraints are fulfilled.
- The controller with control range adaptation decreased the execution time, because Ncalls diminished by 27.1% compared to the controller with mode = 1.

## 7. Conclusions

- The APSOA was used as the metaheuristic generating the optimal predictions.
- A new technique, the tuning of control ranges, was proposed and integrated into the controller.

- The APSOA was modified to include actions necessary to implement control ranges adaptation and tuning.
- Besides the new APSOA, the modules CONTROLLER and PREDICTOR were implemented to work according to three use modes: without control range adaptation, with control range adaptation, and with control range adaptation and tuning.
- A general simulation program was implemented, and three simulation series were carried out for each mode.

## Supplementary Materials

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Appendix A

| Model | Parameter | Meaning |
|---|---|---|
| radiative model | ${E}_{a}$ = 172 m^{2}·kg^{−1} | absorption coefficient |
| | ${E}_{s}$ = 870 m^{2}·kg^{−1} | scattering coefficient |
| | $b$ = 0.0008 | backward scattering fraction |
| kinetic model | ${\mu}_{max}$ = 0.16 h^{−1} | specific growth rate |
| | ${\mu}_{d}$ = 0.013 h^{−1} | specific decay rate |
| | ${K}_{S}$ = 120 µmol·m^{−2}·s^{−1} | saturation constant |
| | ${K}_{I}$ = 2500 µmol·m^{−2}·s^{−1} | inhibition constant |
| physical parameters | $V$ = 1.45·10^{−3} m^{3} | the volume of the PBR |
| | L = 0.04 m | depth of the PBR |
| | A = 3.75·10^{−2} m^{2} | lighted surface |
| | ${x}_{0}$ = 0.36 g/L | the initial biomass concentration |
| other constants | C = 3600·10^{−2} | light intensity conversion constant |
| | ${k}_{L}$ = 100 | number of discretization points |
| | ${q}_{m}$ = 50 µmol·m^{−2}·s^{−1} | lower technological light intensity |
| | ${q}_{M}$ = 2000 µmol·m^{−2}·s^{−1} | upper technological light intensity |
| | ${m}_{0}$ = 3 g | the minimal final biomass |

## Appendix B

**Closed-Loop Simulation**

```
start /* k — the current discrete moment */
  # Initializations  /* concerning the global constants and the mode = 1, 2, or 3 */
  INIT_CONST  /* initialize the constants from Table A1 */
  H ← tfinal/T
  Ncalls_C ← 0  /* the cumulated number of calls along the control horizon */
  state(0) ← x0
  k ← 0  /* sampling moment counter */
  while k <= H − 1
    CONTROLLER(k, x0, mode)
    uRHC(k) ← U*(k)
    Ncalls_C ← Ncalls_C + Ncalls
    xnext ← RealProcessStep(U*(k), x0, k)
    x0 ← xnext
    state(k + 1) ← x0
    k ← k + 1
  end /* while */
  # Final integration of the PM using the optimal sequence uRHC
  # Display the simulation results
end
```

Given the optimal control value U^{*}(k), the next state is calculated by the procedure “RealProcessStep”.
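The receding-horizon loop of the closed-loop simulation can be sketched in Python as follows. Only the loop structure mirrors the pseudocode: `real_process_step` and `controller` are hypothetical placeholders (a toy growth model and a constant control value), standing in for “RealProcessStep” and the CONTROLLER module.

```python
def real_process_step(u, x, k):
    """Placeholder one-step growth dynamics (illustrative, not the PM of Section 4)."""
    return x + 0.0001 * u * x

def controller(k, x, mode):
    """Placeholder controller: a constant light intensity within the limits."""
    return 500.0

def closed_loop(x0, H, mode=2):
    """Receding-horizon loop: at each sampling moment, apply the first
    control value and advance the (real) process by one step."""
    state = [x0]
    u_rhc = []
    x = x0
    for k in range(H):                 # while k <= H - 1
        u = controller(k, x, mode)     # CONTROLLER returns U*(k)
        u_rhc.append(u)
        x = real_process_step(u, x, k)
        state.append(x)
    return u_rhc, state

# closed_loop(0.36, 120) simulates H = 120 sampling periods starting from x0 = 0.36 g/L.
```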

## Appendix C

- “INV_PSO_RHC_without_CR.m” for mode = 1,
- “INV_PSO_RHCwithCR.m” for mode = 2,
- “INV_PSO_RHCwithCRandT.m” for mode = 3.
- The function “RealProcessStep” is implemented by the script “INV_RHC_RealProcessStep.m”. All files are inside the folder “Inventions-2MTLB”.

#### Appendix C.1. Simulation without Control Ranges

- The closed-loop algorithm without control ranges is implemented by the script “INV_PSO_RHC_without_CR.m”. It can be executed alone or 30 times by the script “Loop30_PSO_without_CR.m”. In the latter case, the results are stored in the file “WSP30_without CR.mat”.

- “MEDIERE30loop_without_CR.m”, which also creates the file “WSPwithoutCR_i0.mat”.

- The script “DRAWfigWithoutCR.m” uses the latter to plot Figure 4a,b.

#### Appendix C.2. Simulation with Control Ranges

- The closed-loop algorithm with control ranges is implemented by the script “INV_PSO_RHCwithCR.m”. It can be executed alone or 30 times by the script “loop30_PSO_Predictor2.m”. In the latter case, the results are stored in the file “WSP30_CR.mat”.
- Then the script “Integration_CR_i0.m” will create data characterizing the typical execution stored in the file “WSP_CR_i0.mat”.
- The script “DRAWfigWithCR.m” uses the latter to plot Figure 5a,b.

#### Appendix C.3. Simulation with Control Ranges and Tuning

- The closed-loop algorithm with control ranges and tuning is implemented by the script “INV_PSO_RHCwithCRandT.m”. It can be executed alone or 30 times by the script “loop30_PSO_Predictor3.m”. In the latter case, the results are stored in the file “WSP30_CRandT.mat”.
- Then the script “Integration_CRandT_i0.m” will create data characterizing the typical execution stored in the file “WSP_CRandT_i0.mat”.
- The script “DRAWfigWithCRandT.m” uses the latter to plot Figure 6a,b.

## References

- Valadi, J.; Siarry, P. Applications of Metaheuristics in Process Engineering; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; pp. 1–39.
- Faber, R.; Jockenhövelb, T.; Tsatsaronis, G. Dynamic optimization with simulated annealing. Comput. Chem. Eng. **2005**, 29, 273–290.
- Onwubolu, G.; Babu, B.V. New Optimization Techniques in Engineering; Springer: Berlin/Heidelberg, Germany, 2004.
- Minzu, V.; Riahi, S.; Rusu, E. Optimal control of an ultraviolet water disinfection system. Appl. Sci. **2021**, 11, 2638.
- Banga, J.R.; Balsa-Canto, E.; Moles, C.G.; Alonso, A. Dynamic optimization of bioprocesses: Efficient and robust numerical strategies. J. Biotechnol. **2005**, 117, 407–419.
- Talbi, E.G. Metaheuristics—From Design to Implementation; Wiley: Hoboken, NJ, USA, 2009; ISBN 978-0-470-27858-1.
- Siarry, P. Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2016; ISBN 978-3-319-45403-0.
- Kruse, R.; Borgelt, C.; Braune, C.; Mostaghim, S.; Steinbrecher, M. Computational Intelligence—A Methodological Introduction, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2016.
- Mayne, D.Q.; Michalska, H. Receding Horizon Control of Nonlinear Systems. IEEE Trans. Autom. Control **1990**, 35, 814–824.
- Goggos, V.; King, R. Evolutionary predictive control. Comput. Chem. Eng. **1996**, 20 (Suppl. S2), S817–S822.
- Hu, X.B.; Chen, W.H. Genetic algorithm based on receding horizon control for arrival sequencing and scheduling. Eng. Appl. Artif. Intell. **2005**, 18, 633–642.
- Hu, X.B.; Chen, W.H. Genetic algorithm based on receding horizon control for real-time implementations in dynamic environments. In Proceedings of the 16th Triennial World Congress, Prague, Czech Republic, 4–8 July 2005; Elsevier IFAC Publications: Amsterdam, The Netherlands, 2005.
- Minzu, V.; Serbencu, A. Systematic procedure for optimal controller implementation using metaheuristic algorithms. Intell. Autom. Soft Comput. **2020**, 26, 663–677.
- Chiang, P.-K.; Willems, P. Combine Evolutionary Optimization with Model Predictive Control in Real-time Flood Control of a River System. Water Resour. Manag. **2015**, 29, 2527–2542.
- Minzu, V. Quasi-optimal character of metaheuristic-based algorithms used in closed-loop—Evaluation through simulation series. In Proceedings of the ISEEE, Galati, Romania, 18–20 October 2019.
- Abraham, A.; Jain, L.; Goldberg, R. Evolutionary Multiobjective Optimization—Theoretical Advances and Applications; Springer: Berlin/Heidelberg, Germany, 2005; ISBN 1-85233-787-7.
- Minzu, V. Optimal Control Implementation with Terminal Penalty Using Metaheuristic Algorithms. Automation **2020**, 1, 48–65.
- Vlassis, N.; Littman, M.L.; Barber, D. On the computational complexity of stochastic controller optimization in POMDPs. ACM Trans. Comput. Theory **2011**, 4, 1–8.
- de Campos, C.P.; Stamoulis, G.; Weyland, D. The computational complexity of Stochastic Optimization. In Combinatorial Optimization; Fouilhoux, P., Gouveia, L., Mahjoub, A., Paschos, V., Eds.; ISCO; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2014; Volume 8596.
- Sohail, M.S.; Saeed, M.O.; Rizvi, S.Z.; Shoaib, M.; Sheikh, A.U. Low-Complexity Particle Swarm Optimization for Time-Critical Applications. arXiv **2014**.
- Chopara, A.; Kaur, M. Analysis of Performance of Particle Swarm Optimization with Varied Inertia Weight Values for solving Travelling Salesman Problem. Int. J. Hybrid Inf. Technol. **2016**, 9, 165–172.
- Sethi, A.; Kataria, D. Analyzing Emergent Complexity in Particle Swarm Optimization using a Rolling Technique for Updating Hyperparameter Coefficients. Procedia Comput. Sci. **2021**, 193, 513–523.
- Minzu, V.; Ifrim, G.; Arama, I. Control of Microalgae Growth in Artificially Lighted Photobioreactors Using Metaheuristic-Based Predictions. Sensors **2021**, 21, 8065.
- Minzu, V.; Arama, I. Optimal Control Systems Using Evolutionary Algorithm-Control Input Range Estimation. Automation **2022**, 3, 95–115.
- Minzu, V.; Riahi, S.; Rusu, E. Implementation aspects regarding closed-loop control systems using evolutionary algorithms. Inventions **2021**, 6, 53.
- Minzu, V.; Georgescu, L.; Rusu, E. Predictions Based on Evolutionary Algorithms Using Predefined Control Profiles. Electronics **2022**, 11, 1682.
- Kennedy, J.; Eberhart, R.; Shi, Y. Swarm Intelligence; Morgan Kaufmann Academic Press: Cambridge, MA, USA, 2001.
- Beheshti, Z.; Shamsuddin, S.M.; Hasan, S. Memetic binary particle swarm optimization for discrete optimization problems. Inf. Sci. **2015**, 299, 58–84.
- Maurice, C. L’Optimisation par Essaims Particulaires—Versions Paramétriques et Adaptatives; Hermes Lavoisier: Paris, France, 2005.
- Minzu, V.; Barbu, M.; Nichita, C. A Binary Hybrid Topology Particle Swarm Optimization Algorithm for Sewer Network Discharge. In Proceedings of the 19th International Conference on System Theory, Control and Computing (ICSTCC), Cheile Gradistei, Romania, 14–16 October 2015; pp. 627–634; ISBN 9781479984800.
- Tebbani, S.; Titica, M.; Ifrim, G.; Caraman, S. Control of the Light-to-Microalgae Ratio in a Photobioreactor. In Proceedings of the 18th International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 17–19 October 2014; pp. 393–398.
- Beheshti, Z.; Shamsuddin, S.M. Non-parametric particle swarm optimization for global optimization. Appl. Soft Comput. **2015**, 28, 345–359.

**Figure 4.** Typical closed-loop evolution without control range adaptation (mode = 1). (**a**) The control profile without range adaptation. (**b**) The state trajectory of the typical closed-loop evolution.

**Figure 5.** Typical closed-loop evolution with control range adaptation (mode = 2). (**a**) The control profile with range adaptation. (**b**) The state trajectory of the typical closed-loop evolution.

**Figure 6.** Typical closed-loop evolution with control range adaptation and tuning (mode = 3). (**a**) The control profile with control range adaptation and tuning. (**b**) The state trajectory of the typical closed-loop evolution.

```
Function CONTROLLER(k, mode)
/* for simulation: CONTROLLER(k, x0, mode) */
/* k — the current discrete moment */
  # Initializations  /* concerning the global constants */
  # Obtain the current state vector X(k)  /* in simulation, x0 is used instead */
  if (mode = 1)  /* without control ranges */
    xm(i) ← qmin, i = 1,…,H  /* qmin is the lower technological limit */
    xM(i) ← qmax, i = 1,…,H  /* qmax is the upper technological limit */
  else  /* mode = 2 or 3 */
    xm ← (1 − p)·Uref
    xM ← (1 + p)·Uref
    # Truncate xm and xM so that they do not overpass the technological limits.
  end
  ocs(k) ← PREDICTOR(k, X(k), mode)
  U*(k) ← the first element of ocs(k)
  # Send U*(k) toward the process  /* the current optimal control value */
  return  /* or wait for the next sampling period */
```
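The derivation of the control ranges from the reference control profile Uref (the branch of CONTROLLER for mode = 2 or 3) can be sketched as follows; the half-width factor `p` and the profile values in the example are illustrative assumptions, and the limits come from Table A1.

```python
def control_ranges(uref, p, qmin=50.0, qmax=2000.0):
    """Return (xm, xM): per-moment lower/upper control bounds built around the
    reference profile Uref and truncated to the technological limits."""
    xm = [max((1 - p) * u, qmin) for u in uref]   # xm = (1 - p) * Uref, truncated
    xM = [min((1 + p) * u, qmax) for u in uref]   # xM = (1 + p) * Uref, truncated
    return xm, xM

# e.g., p = 0.2 around a profile value of 1900 gives roughly [1520, 2000]
# after truncation to the upper limit qmax = 2000.
```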

```
Function PREDICTOR(k, x0, mode)
/* mode = 1: without control ranges; mode = 2: with control ranges;
   mode = 3: with control ranges and tuning;
   k — the current discrete moment; x0 — the current state (biomass concentration) */
  # Initializations  /* space reservation for each particle */
  h ← H − k  /* h is the prediction horizon */
  xmh ← xm(k + 1,…,H)  /* copy h elements into the vector xmh */
  xMh ← xM(k + 1,…,H)  /* copy h elements into the vector xMh */
  Pgbest ← APSOA(k, h, x0, xmh, xMh, mode)  /* call the metaheuristic */
  ocs ← Pgbest
  return ocs
```

| Run # | J | Ncalls | Run # | J | Ncalls |
|---|---|---|---|---|---|
| 1 | 9.0136 | 960 | 16 | 9.2717 | 672 |
| 2 | 9.0423 | 665 | 17 | 9.3212 | 727 |
| 3 | 9.0523 | 951 | 18 | 9.2314 | 689 |
| 4 | 9.0448 | 845 | 19 | 9.2276 | 758 |
| 5 | 9.1961 | 940 | 20 | 9.2726 | 731 |
| 6 | 8.9792 | 962 | 21 | 9.2753 | 848 |
| 7 | 9.1024 | 663 | 22 | 9.3992 | 717 |
| 8 | 9.0728 | 959 | 23 | 9.2007 | 738 |
| 9 | 9.0605 | 841 | 24 | 9.2302 | 839 |
| 10 | 9.0059 | 941 | 25 | 9.4145 | 654 |
| 11 | 9.3527 | 773 | 26 | 9.4045 | 745 |
| 12 | 9.4327 | 662 | 27 | 9.2029 | 759 |
| 13 | 9.2569 | 747 | 28 | 9.4126 | 753 |
| 14 | 9.2506 | 785 | 29 | 9.2264 | 798 |
| 15 | 9.4629 | 812 | 30 | 9.2924 | 790 |

| Jmin | Javg | Jmax | Sdev | Jtypical |
|---|---|---|---|---|
| 8.979 | 9.224 | 9.463 | 0.142 | 9.226 |

| Run # | ExTime [s] | Run # | ExTime [s] | Run # | ExTime [s] |
|---|---|---|---|---|---|
| 1 | 890.8 | 11 | 771.2 | 21 | 892.6 |
| 2 | 796.5 | 12 | 858.8 | 22 | 902.5 |
| 3 | 855.7 | 13 | 880.2 | 23 | 898.4 |
| 4 | 845.2 | 14 | 947.0 | 24 | 878.8 |
| 5 | 924.3 | 15 | 894.4 | 25 | 938.7 |
| 6 | 861.0 | 16 | 750.8 | 26 | 798.0 |
| 7 | 910.5 | 17 | 1022.9 | 27 | 879.5 |
| 8 | 879.5 | 18 | 895.7 | 28 | 904.2 |
| 9 | 901.3 | 19 | 854.9 | 29 | 831.0 |
| 10 | 854.8 | 20 | 1009.3 | 30 | 898.1 |

| Run # | J | Ncalls | Run # | J | Ncalls |
|---|---|---|---|---|---|
| 1 | 9.1501 | 530 | 16 | 9.1549 | 666 |
| 2 | 9.1695 | 591 | 17 | 9.1715 | 566 |
| 3 | 9.1828 | 511 | 18 | 9.1572 | 619 |
| 4 | 9.1377 | 569 | 19 | 9.1775 | 505 |
| 5 | 9.1786 | 656 | 20 | 9.1112 | 644 |
| 6 | 9.1506 | 563 | 21 | 9.1682 | 654 |
| 7 | 9.2067 | 623 | 22 | 9.1841 | 507 |
| 8 | 9.155 | 624 | 23 | 9.1643 | 552 |
| 9 | 9.1398 | 591 | 24 | 9.169 | 540 |
| 10 | 9.1751 | 595 | 25 | 9.1868 | 611 |
| 11 | 9.1937 | 596 | 26 | 9.1607 | 576 |
| 12 | 9.1627 | 511 | 27 | 9.1544 | 520 |
| 13 | 9.2359 | 713 | 28 | 9.1786 | 633 |
| 14 | 9.1701 | 602 | 29 | 9.1897 | 601 |
| 15 | 9.1822 | 563 | 30 | 9.1788 | 635 |

| Jmin | Javg | Jmax | Sdev | Jtypical |
|---|---|---|---|---|
| 9.111 | 9.170 | 9.236 | 0.023 | 9.170 |

| Run # | J | Ncalls | Run # | J | Ncalls |
|---|---|---|---|---|---|
| 1 | 9.1879 | 672 | 16 | 9.1943 | 633 |
| 2 | 9.1742 | 538 | 17 | 9.1714 | 581 |
| 3 | 9.1759 | 717 | 18 | 9.1742 | 569 |
| 4 | 9.1834 | 639 | 19 | 9.1548 | 605 |
| 5 | 9.1504 | 652 | 20 | 9.1672 | 621 |
| 6 | 9.1713 | 630 | 21 | 9.1353 | 537 |
| 7 | 9.1615 | 523 | 22 | 9.1582 | 611 |
| 8 | 9.1423 | 508 | 23 | 9.1765 | 670 |
| 9 | 9.1712 | 577 | 24 | 9.2054 | 554 |
| 10 | 9.1739 | 608 | 25 | 9.1333 | 705 |
| 11 | 9.1916 | 613 | 26 | 9.1832 | 547 |
| 12 | 9.1639 | 571 | 27 | 9.2009 | 643 |
| 13 | 9.1728 | 774 | 28 | 9.1935 | 585 |
| 14 | 9.1775 | 501 | 29 | 9.1443 | 592 |
| 15 | 9.1736 | 673 | 30 | 9.1785 | 599 |

| Jmin | Javg | Jmax | Sdev | Jtypical |
|---|---|---|---|---|
| 9.133 | 9.171 | 9.205 | 0.018 | 9.171 |

| Run # | ExTime [s] | Run # | ExTime [s] | Run # | ExTime [s] |
|---|---|---|---|---|---|
| 1 | 696.01 | 11 | 656.02 | 21 | 606.25 |
| 2 | 778.46 | 12 | 681.86 | 22 | 601.70 |
| 3 | 736.75 | 13 | 828.09 | 23 | 812.89 |
| 4 | 665.94 | 14 | 745.42 | 24 | 754.47 |
| 5 | 843.31 | 15 | 823.66 | 25 | 616.89 |
| 6 | 851.08 | 16 | 779.48 | 26 | 591.39 |
| 7 | 712.82 | 17 | 670.01 | 27 | 699.41 |
| 8 | 724.09 | 18 | 742.59 | 28 | 786.92 |
| 9 | 665.91 | 19 | 750.33 | 29 | 641.37 |
| 10 | 642.30 | 20 | 713.98 | 30 | 745.00 |

| Controller Type | Average Execution Time [s] |
|---|---|
| Controller without control range adaptation | 878.7 |
| Controller with control range adaptation | 737.7 |
| Controller with control range adaptation and tuning | 718.8 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Mînzu, V.; Rusu, E.; Arama, I.
Execution Time Decrease for Controllers Based on Adaptive Particle Swarm Optimization. *Inventions* **2023**, *8*, 9.
https://doi.org/10.3390/inventions8010009
