Article

Design of Aquila Optimization Heuristic for Identification of Control Autoregressive Systems

by Khizer Mehmood 1, Naveed Ishtiaq Chaudhary 2,*, Zeshan Aslam Khan 1, Muhammad Asif Zahoor Raja 2, Khalid Mehmood Cheema 3 and Ahmad H. Milyani 4
1 Department of Electrical and Computer Engineering, International Islamic University, Islamabad 44000, Pakistan
2 Future Technology Research Center, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
3 Department of Electronic Engineering, Fatima Jinnah Women University, Rawalpindi 46000, Pakistan
4 Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1749; https://doi.org/10.3390/math10101749
Submission received: 17 April 2022 / Revised: 12 May 2022 / Accepted: 18 May 2022 / Published: 20 May 2022
(This article belongs to the Special Issue Metaheuristic Algorithms)

Abstract

Swarm intelligence-based metaheuristic algorithms have attracted the attention of the research community and have been exploited for effectively solving different optimization problems of engineering, science, and technology. This paper considers the parameter estimation of the control autoregressive (CAR) model by applying a novel swarm intelligence-based optimization algorithm called the Aquila optimizer (AO). The parameter tuning of the AO is performed statistically on different generations and population sizes, and its performance is then investigated statistically at various noise levels with the best-tuned parameters. The robustness and reliability of the AO are carefully examined under various scenarios for CAR identification. The experimental results indicate that the AO is accurate, convergent, and robust for parameter estimation of CAR systems. The comparison of the AO heuristic with recent state-of-the-art counterparts through nonparametric statistical tests establishes the efficacy of the proposed scheme for CAR estimation.

1. Introduction

1.1. Literature Review

In recent years, system identification has gained significant attention in various areas such as signal processing, parameter estimation, and multiple-input multiple-output systems [1,2,3,4]. Parameter estimation refers to the determination of the best-fitting values of each parameter using local or global optimization techniques. It consists of three steps: first, construct a mathematical model of the given system such that it replicates the system's behaviour under the same conditions; second, define a fitness function for a given set of parameters using an approximation such as least squares, weighted least squares, or generalized least squares; third, select an optimization technique that finds the best parameter values iteratively [5].
The research community has shown great interest in parameter estimation of control autoregressive (CAR) systems because of their importance in effectively modelling a variety of engineering problems, including power system optimization [6], electricity load prediction [7], battery charge estimation [8], groundwater flooding forecasting [9], and CO2 emission forecasting [10]. Various methods for parameter estimation of CAR models have been proposed in the literature. Ding et al. [11] decompose a CAR model into two subsystems and derive a two-stage multi-innovation gradient-based iterative algorithm for parameter estimation. Raja et al. [12] use genetic algorithms (GAs) for parameter estimation of a nonlinear Hammerstein controlled autoregressive system by minimizing the error function between true and estimated parameters. Mehmood et al. [13] explore the strengths of evolutionary and swarm intelligence for parameter estimation of a controlled autoregressive moving average model by minimizing the mean absolute error and other measures. Tariq et al. [14] apply differential evolution-based algorithms for parameter estimation of Hammerstein systems by minimizing a cost function built on the output error between actual and predicted responses.
Metaheuristic methods have made significant progress in solving optimization problems [15,16,17]. These methods can be classified into four categories. Category one comprises evolutionary algorithms, which mimic natural biological behaviour such as mutation and crossover; numerous algorithms have been proposed in this category, such as genetic algorithms (GAs) [18], differential evolution (DE) [19], fuzzy evolution [20], maximum likelihood adaptive differential evolution [21], and tree growth algorithms [22]. Category two comprises human-based algorithms, which are inspired by human behaviour, such as collective decision optimization [23], imperialist competitive algorithms [24], and teaching-learning-based optimization [25]. Category three comprises physics-based methods, which use physical laws to solve optimization problems; a few of the methods in this area are gravitational search algorithms [26], thermal exchange optimization [27], and multiverse optimizers [28]. The final category comprises swarm intelligence and animal-inspired optimization methods; significant progress has been made in this domain as well, with methods such as particle swarm optimization (PSO) [29], artificial bee colonies [30], lion optimization algorithms [31], and whale optimization algorithms [32].
Among swarm intelligence techniques, the Aquila optimizer (AO) [33] has recently been proposed for global optimization and applied to various real-world problems. Elaziz et al. [34] proposed a hybrid image classification framework for COVID-19 CT and X-ray images by combining deep learning with AO for feature selection and dimensionality reduction. Hussan et al. [35] applied AO to harmonic parameter estimation for an H-bridge inverter by minimizing the error, with real-time verification on a digital signal processing launchpad. Khamees et al. [36] applied AO to Weibull distribution parameter estimation in a wind energy system, achieving low error and high correlation coefficients.

1.2. Research Contribution

In the current study, the swarm intelligence of the Aquila optimizer, AO, is exploited for parameter estimation of control autoregressive (CAR) systems. The AO is evaluated in terms of robustness, correctness, and convergence for different noise levels in the CAR model. The noticeable contributions are as follows:
  • The strength of a swarm intelligence-based Aquila optimizer (AO) heuristic is exploited for solving parameter estimation in a control autoregressive (CAR) model.
  • The convergence, accuracy, and robustness analyses of the AO are conducted for different noise levels considered in the CAR model.
  • Statistical analyses for parameter tuning of the AO as well as for reliability and stability assessment are conducted for different generations and population sizes.

1.3. Paper Organization

The rest of the paper is organized as follows: the CAR system model is given in Section 2; the AO-based methodology is presented in Section 3; the performance analysis of the CAR model is provided in Section 4; and the main conclusions and some future research directions are listed in Section 5.

2. Mathematical Model of CAR Systems

Consider the second-order CAR model presented in (1):
$R(z)\,q(t) = S(z)\,\theta(t) + \omega(t)$,  (1)
where θ(t) is the input of the model, q(t) is the output of the model, and ω(t) is zero-mean white noise. R(z) and S(z) are the polynomials given in (2) and (3):
$R(z) = 1 + r_1 z^{-1} + r_2 z^{-2} + \cdots + r_{n_r} z^{-n_r}$,  (2)
$S(z) = s_1 z^{-1} + s_2 z^{-2} + \cdots + s_{n_s} z^{-n_s}$.  (3)
Assume that θ(t) = 0, q(t) = 0, and ω(t) = 0 for t < 0, and that the orders n_r and n_s are known. The parameter vectors are given in (4) and (5):
$r = [r_1, r_2, \ldots, r_{n_r}]^T \in \mathbb{R}^{n_r}$,  (4)
$s = [s_1, s_2, \ldots, s_{n_s}]^T \in \mathbb{R}^{n_s}$.  (5)
The corresponding information vectors are given in (6) and (7):
$\epsilon_r(t) = [-q(t-1), -q(t-2), \ldots, -q(t-n_r)]^T \in \mathbb{R}^{n_r}$,  (6)
$\epsilon_s(t) = [\theta(t-1), \theta(t-2), \ldots, \theta(t-n_s)]^T \in \mathbb{R}^{n_s}$.  (7)
The model presented in (1) can be rewritten as given in (8) and simplified in (9).
$q(t) = [1 - R(z)]\,q(t) + S(z)\,\theta(t) + \omega(t)$,  (8)
$q(t) = \epsilon_r^T(t)\,r + \epsilon_s^T(t)\,s + \omega(t)$.  (9)
The overall information and parameter vectors of the CAR model are given as
$\epsilon(t) = [\epsilon_r^T(t) \;\; \epsilon_s^T(t)]^T$,  (10)
$\alpha = [r^T \;\; s^T]^T$.  (11)
Then, the identification model of the CAR system becomes
$q(t) = \epsilon^T(t)\,\alpha + \omega(t)$.  (12)
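To make the regression form concrete, the following sketch simulates a CAR system in the form of (12). It is an illustration only, not the authors' code; the function name `simulate_car`, the use of NumPy, and the Gaussian excitation are assumptions, while the zero initial conditions follow the text above.

```python
import numpy as np

def simulate_car(r, s, theta, noise_std, rng):
    """Simulate q(t) = eps^T(t) alpha + w(t), eq. (12), with zero initial conditions."""
    n_r, n_s = len(r), len(s)
    alpha = np.concatenate([np.asarray(r, float), np.asarray(s, float)])  # eq. (11)
    q = np.zeros(len(theta))
    for t in range(len(theta)):
        # information vector (10): negated past outputs and past inputs
        eps_r = [-q[t - k] if t - k >= 0 else 0.0 for k in range(1, n_r + 1)]
        eps_s = [theta[t - k] if t - k >= 0 else 0.0 for k in range(1, n_s + 1)]
        q[t] = np.array(eps_r + eps_s) @ alpha + noise_std * rng.standard_normal()
    return q

rng = np.random.default_rng(0)
theta = rng.standard_normal(200)          # excitation input (assumed)
q = simulate_car([1.35, 0.75], [1.68, 2.32], theta, noise_std=0.0, rng=rng)
```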

3. Methodology

In this section, the AO-based methodology for parameter estimation of a CAR model is presented. The graphical abstract of the proposed methodology for the CAR model is given in Figure 1. It provides an overview of the proposed study, which comprises the parameter estimation of a CAR model by applying the swarm intelligence-based Aquila optimizer. The optimal parameters are evaluated on the basis of the squared difference between the estimated and true values, with the number of generations as the termination criterion.

3.1. Aquila Optimization (AO) Method

The AO is a swarm intelligence-based method for finding globally optimal solutions [33]. It has been applied in various domains such as the internet of things (IoT) [37], power electronics [35], image processing [38], oil production forecasting [39], Francis turbines [40], hybrid solid oxide fuel cells (SOFC) [41], wind energy [42], and population forecasting [43]. AO is a population-based optimization method inspired by the Aquila's prey-hunting ability. It uses four hunting techniques and can switch between them. The mathematical model, pseudocode, and flowchart are presented below.

3.1.1. Population Initialization

AO starts with the initialization of the population for candidate solutions (W) as given in (13):
$W = \begin{bmatrix} w_{1,1} & \cdots & w_{1,D} \\ \vdots & \ddots & \vdots \\ w_{N_p,1} & \cdots & w_{N_p,D} \end{bmatrix}$.  (13)
The population is randomly generated using (14):
$W_{k,l} = \mathrm{rand} \times (UB_l - LB_l) + LB_l, \quad k = 1, 2, \ldots, N_p, \;\; l = 1, 2, \ldots, D$,  (14)
where N_p is the total population size, D is the number of decision variables, and UB_l and LB_l are the upper and lower bounds of the l-th decision variable.
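As a minimal sketch of (13)-(14) (NumPy-based; the helper name and the example bounds are assumptions):

```python
import numpy as np

def initialize_population(n_p, d, lb, ub, rng):
    """Eq. (14): W[k, l] = rand * (UB_l - LB_l) + LB_l, drawn per dimension."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return rng.random((n_p, d)) * (ub - lb) + lb  # broadcasts over the D bounds

rng = np.random.default_rng(1)
W = initialize_population(n_p=30, d=4, lb=[-3] * 4, ub=[3] * 4, rng=rng)
```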

3.1.2. The Mathematical Model

The mathematical formulation is divided into four steps which are presented below.

Expanded Exploration (W_1)

In the first method, W_1, the Aquila explores the prey area through a high soar with a vertical stoop. It is presented in (15):
$W_1(t+1) = W_{best}(t) \times \left(1 - \frac{t}{T}\right) + (W_M(t) - W_{best}(t)) \times \mathrm{rand}$,  (15)
where W_1(t+1) is the next-iteration solution for W_1, W_best(t) is the best solution obtained so far, the term (1 − t/T) controls the expanded search over the iterations, and W_M(t) is the mean of the current solutions, calculated using (16):
$W_M(t) = \frac{1}{N_p} \sum_{k=1}^{N_p} W_k(t), \quad \forall\, l = 1, 2, \ldots, D$,  (16)
where N_p is the total population size and D is the number of decision variables.
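A possible one-line realization of this step (a sketch under the same NumPy conventions as the snippets above):

```python
def expanded_exploration(W, W_best, t, T, rng):
    """Eq. (15): high-soar search around the population mean W_M of eq. (16)."""
    W_M = W.mean(axis=0)                  # eq. (16), mean per decision variable
    return W_best * (1 - t / T) + (W_M - W_best) * rng.random()
```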

Narrowed Exploration (W_2)

In the second method (W_2), upon finding the prey area, the Aquila circles above the target and uses a method called contour flight with a short glide attack. In W_2, AO narrowly explores the area in preparation for the attack on the target, which is calculated using (17):
$W_2(t+1) = W_{best}(t) \times LEF(DI) + W_R(t) + (y - w) \times \mathrm{rand}$,  (17)
where W_2(t+1) is the next-iteration solution for W_2, DI is the dimension of the search space, W_R(t) is a solution picked at random from the population (k ∈ [1, N_p]), and LEF(DI) is the Lévy flight distribution function, calculated using (18):
$LEF(DI) = d \times \frac{e \times \sigma}{|f|^{1/\delta}}$,  (18)
where d is a constant equal to 0.01, e and f are random numbers between 0 and 1, and σ is calculated using (19):
$\sigma = \left( \frac{\Gamma(1+\delta) \times \sin\left(\frac{\pi\delta}{2}\right)}{\Gamma\left(\frac{1+\delta}{2}\right) \times \delta \times 2^{\left(\frac{\delta-1}{2}\right)}} \right)$,  (19)
where δ is fixed at 1.5. The spiral terms y and w in (17) are calculated as follows:
$y = g \times \cos(\theta)$,  (20)
$w = g \times \sin(\theta)$,  (21)
$g = g_1 + V \times DI_1$,  (22)
$\theta = -\varepsilon \times DI_1 + \theta_1$,  (23)
$\theta_1 = \frac{3\pi}{2}$,  (24)
where g_1 takes a value between 1 and 20, DI_1 consists of integer values from 1 to the search-space dimension DI, V is fixed to 0.00565, and ε is fixed to 0.005.
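The Lévy flight term and the spiral coordinates could be sketched as follows; the uniform sampling of e and f on (0, 1) follows the text above, and drawing g_1 uniformly from [1, 20] is an assumption:

```python
import math
import numpy as np

def levy_flight(dim, rng, delta=1.5, d=0.01):
    """Eq. (18)-(19): Levy-distributed step of dimension `dim`."""
    sigma = (math.gamma(1 + delta) * math.sin(math.pi * delta / 2)
             / (math.gamma((1 + delta) / 2) * delta * 2 ** ((delta - 1) / 2)))
    e, f = rng.random(dim), rng.random(dim)
    return d * e * sigma / np.abs(f) ** (1 / delta)

def spiral_coordinates(dim, rng, V=0.00565, eps=0.005):
    """Eq. (20)-(24): spiral terms y and w of the contour flight."""
    di1 = np.arange(1, dim + 1)           # integer values 1..DI
    g = rng.uniform(1, 20) + V * di1      # eq. (22); g1 assumed uniform in [1, 20]
    theta = -eps * di1 + 3 * math.pi / 2  # eqs. (23)-(24)
    return g * np.cos(theta), g * np.sin(theta)
```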

Expanded Exploitation (W_3)

In the third method (W_3), AO exploits the search space by descending vertically in a preliminary attack to probe the prey's reaction before landing and attacking. It is given in (25):
$W_3(t+1) = (W_{best}(t) - W_M(t)) \times \beta - \mathrm{rand} + ((UB - LB) \times \mathrm{rand} + LB) \times \mu$,  (25)
where W_3(t+1) is the next-iteration solution for W_3, β and μ are the exploitation adjustment factors, W_best(t) is the best solution, UB and LB are the problem-dependent upper and lower bounds, and W_M(t) is the mean of the current solutions, calculated using (16).

Narrowed Exploitation (W_4)

In the fourth method (W_4), AO uses the method of walking and grabbing the prey, getting closer to it and attacking with stochastic movements, as presented in (26):
$W_4(t+1) = QYF \times W_{best}(t) - (O_1 \times W(t) \times \mathrm{rand}) - O_2 \times LEF(DI) + \mathrm{rand} \times O_1$,  (26)
where W_4(t+1) is the next-iteration solution for W_4 and QYF is the quality factor, calculated using (27):
$QYF = t^{\frac{2 \times \mathrm{rand} - 1}{(1 - T)^2}}$.  (27)
O_1 and O_2 indicate the variations of motion, which are calculated using (28) and (29):
$O_1 = 2 \times \mathrm{rand} - 1$,  (28)
$O_2 = 2 \times \left(1 - \frac{t}{T}\right)$.  (29)
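The two exploitation moves could then read as follows (a sketch; `levy_flight` is the helper defined above):

```python
def expanded_exploitation(W_best, W_M, lb, ub, beta, mu, rng):
    """Eq. (25): vertical-descent attack steered by beta and mu."""
    return ((W_best - W_M) * beta - rng.random()
            + ((ub - lb) * rng.random() + lb) * mu)

def narrowed_exploitation(W_k, W_best, t, T, dim, rng):
    """Eqs. (26)-(29): walk-and-grab attack with stochastic movement."""
    qyf = t ** ((2 * rng.random() - 1) / (1 - T) ** 2)  # quality factor, eq. (27)
    o1 = 2 * rng.random() - 1                           # eq. (28)
    o2 = 2 * (1 - t / T)                                # eq. (29)
    return (qyf * W_best - o1 * W_k * rng.random()
            - o2 * levy_flight(dim, rng) + rng.random() * o1)
```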
The flowchart for AO is shown in Figure 2.
The pseudocode of the AO is presented in Algorithm 1.
Algorithm 1: Pseudo-code of AO
Initialization:
Initialize the population W and parameters of AO such as σ, β, etc.
WHILE (t ≤ T) do
Calculate fitness values
Determine the best obtained solution W_best(t).
for k = 1 : N_p
Update the mean value W_M(t).
Update y, w, O_1, O_2, and LEF(DI).
if t ≤ (2/3) × T
if rand ≤ 0.5
Expanded Exploration ( W 1 )
Update solution using (15).
If (Fitness W 1 ( t + 1 ) < Fitness W ( t ) )
W ( t ) = W 1 ( t + 1 )
If (Fitness W 1 ( t + 1 ) < Fitness W best ( t ) )
W best ( t ) = W 1 ( t + 1 )
end
end
else
Narrowed Exploration ( W 2 )
Update solution using (17).
If (Fitness W 2 ( t + 1 ) < Fitness W ( t ) )
W ( t ) = W 2 ( t + 1 )
If (Fitness W 2 ( t + 1 ) < Fitness W best ( t ) )
W best ( t ) = W 2 ( t + 1 )
end
end
end
else if rand ≤ 0.5
Expanded Exploitation ( W 3 )
Update solution using (25).
If (Fitness W 3 ( t + 1 ) < Fitness W ( t ) )
W ( t ) = W 3 ( t + 1 )
If (Fitness W 3 ( t + 1 ) < Fitness W best ( t ) )
W best ( t ) = W 3 ( t + 1 )
end
end
else
Narrowed Exploitation ( W 4 )
Update solution using (26).
If (Fitness W 4 ( t + 1 ) < Fitness W ( t ) )
W ( t ) = W 4 ( t + 1 )
If (Fitness W 4 ( t + 1 ) < Fitness W best ( t ) )
W best ( t ) = W 4 ( t + 1 )
end
end
end
end
end
end
return  W best
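Putting the four update rules together, a compact Python sketch of Algorithm 1 follows, assuming the helper functions from the previous snippets; the greedy replacement and bound clipping are implementation choices, not prescribed by the pseudocode:

```python
def aquila_optimizer(fitness, lb, ub, n_p=30, T=1000, beta=0.1, mu=0.1, rng=None):
    """Sketch of Algorithm 1; not the authors' reference implementation."""
    rng = rng if rng is not None else np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    W = initialize_population(n_p, dim, lb, ub, rng)
    fit = np.array([fitness(w) for w in W])
    k_best = fit.argmin()
    best, best_fit = W[k_best].copy(), fit[k_best]
    for t in range(1, T + 1):
        for k in range(n_p):
            if t <= (2 / 3) * T:                      # exploration phase
                if rng.random() <= 0.5:               # expanded exploration, eq. (15)
                    cand = expanded_exploration(W, best, t, T, rng)
                else:                                  # narrowed exploration, eq. (17)
                    W_R = W[rng.integers(n_p)]
                    y, w = spiral_coordinates(dim, rng)
                    cand = best * levy_flight(dim, rng) + W_R + (y - w) * rng.random()
            else:                                      # exploitation phase
                if rng.random() <= 0.5:               # expanded exploitation, eq. (25)
                    cand = expanded_exploitation(best, W.mean(axis=0),
                                                 lb, ub, beta, mu, rng)
                else:                                  # narrowed exploitation, eq. (26)
                    cand = narrowed_exploitation(W[k], best, t, T, dim, rng)
            cand = np.clip(cand, lb, ub)
            f_cand = fitness(cand)
            if f_cand < fit[k]:                        # greedy replacement
                W[k], fit[k] = cand, f_cand
                if f_cand < best_fit:
                    best, best_fit = cand.copy(), f_cand
    return best, best_fit
```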

4. Performance Analysis

In this section, the performance analysis of AO for the CAR model is presented. The identification of the CAR model is conducted for various noise levels, generations, and population sizes. The algorithm is assessed in terms of accuracy, which is measured by the fitness function presented in (30):
$\mathrm{Fitness} = \mathrm{mean}\left[(z - \hat{z})^2\right]$,  (30)
where ẑ is the estimated response obtained through the proposed scheme and z is the desired response. For the simulation study, we considered the second-order CAR model from [6] as presented in (31) and (32):
$R(z) = 1 + 1.35 z^{-1} + 0.75 z^{-2}$,  (31)
$S(z) = 1.68 z^{-1} + 2.32 z^{-2}$.  (32)
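Assuming the fitness in (30) compares the measured output z with the response ẑ reconstructed from a candidate parameter vector, the benchmark experiment could be sketched as follows, reusing `simulate_car` from Section 2 and `aquila_optimizer` from Section 3; the search bounds of [−3, 3] are an assumption:

```python
true_alpha = np.array([1.35, 0.75, 1.68, 2.32])   # eqs. (31)-(32)
rng = np.random.default_rng(2)
theta = rng.standard_normal(500)
z = simulate_car(true_alpha[:2], true_alpha[2:], theta, noise_std=0.0, rng=rng)

def car_fitness(alpha):
    """Eq. (30): mean squared error between desired and estimated response."""
    z_hat = simulate_car(alpha[:2], alpha[2:], theta, noise_std=0.0,
                         rng=np.random.default_rng(2))
    return np.mean((z - z_hat) ** 2)

best, best_fit = aquila_optimizer(car_fitness, lb=[-3] * 4, ub=[3] * 4,
                                  n_p=30, T=1000)
```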

4.1. Parameter Tuning of AO

Learning optimal weights plays a significant role in boosting the performance of the AO method. Hence, the aim is to use the best values of the exploitation adjustment factors (β and μ) for learning the optimum weights W_3 using the update rule given in (25). The best values of both parameters are obtained through hyper-parameter tuning: using grid search, the combinations of β and μ are organized into nine cases (case 1 to case 9), with the chosen values presented in Table 1. Hyper-parameter tuning is performed for different generations and population sizes in a noise-free environment (zero noise). Each case is executed for three generation sizes, i.e., 1000, 1500, and 2000, and three population sizes, i.e., 30, 40, and 50, and for each generation-population combination the simulations are repeated for 15 runs to obtain the average fit, best fit, worst fit, and standard deviation.
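A grid search of this kind might look as follows (a sketch of the tuning harness, not the authors' exact experimental code; the search bounds are assumptions):

```python
import itertools
import numpy as np

# Table 1 cases as (beta, mu) pairs, in the same order as the table
cases = list(itertools.product([0.9, 0.5, 0.1], repeat=2))

def tune(fitness, gens=(1000, 1500, 2000), pops=(30, 40, 50), runs=15):
    """Grid search over (beta, mu); 15 runs per generation-population pair."""
    stats = {}
    for beta, mu in cases:
        fits = np.array([aquila_optimizer(fitness, [-3] * 4, [3] * 4, n_p=np_,
                                          T=T, beta=beta, mu=mu)[1]
                         for T in gens for np_ in pops for _ in range(runs)])
        # average, best, worst fit and standard deviation, as in Tables 2-10
        stats[(beta, mu)] = (fits.mean(), fits.min(), fits.max(), fits.std())
    return stats
```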
The different cases reflecting the tuning of β and μ for the optimal weight update mechanism, together with the average fit, best fit, worst fit, and standard deviation for different generations and populations, are reported in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10; the tables show how the fit varies with β and μ as the generation or population size changes. It is observed from the results given in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 that the optimal fit for different generations and populations is achieved with case 9, i.e., β = 0.1 and μ = 0.1.
Apart from the fitness results presented in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, the mean fit values achieved with the nine (β, μ) variations, three generations, and three populations are presented in Table 11. It is observed from the mean fit values in Table 11 that AO obtains the minimum mean fit for case 9, i.e., 440 × 10⁻⁶. Therefore, for optimal AO performance, the remaining simulation results are presented with the best hyper-parameter values, i.e., β = 0.1 and μ = 0.1.
The fitness plots for case 9 with zero noise for three generations and populations are shown in Figure 3 and Figure 4. The fitness curves with fixed generation size and varying population size are given in Figure 3a–c, and Figure 4a–c include fitness curves for the fixed population size and changing generation size. Figure 3 and Figure 4 show that with the best parameter setting of β = 0.1 and μ = 0.1 , the AO strategy achieves minimum fit for a large number of generations and populations.

4.2. Statistical Convergence Analysis

In this section, the performance of the AO algorithm is assessed by introducing three noise levels into the CAR model. Moreover, the fit of AO is estimated for three variations of generation [1000, 1500, 2000] and population [30, 40, 50]. The evaluation metrics used to assess the performance of AO for CAR are average fit, best fit, worst fit, and standard deviation (STD).
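One hypothetical way to reproduce this noise study, reusing `theta` and the helpers defined above and treating the quoted noise levels as variances of the additive white noise:

```python
for var in (0.04, 0.06, 0.08):
    z_noisy = simulate_car([1.35, 0.75], [1.68, 2.32], theta,
                           noise_std=var ** 0.5, rng=np.random.default_rng(3))
    fits = [aquila_optimizer(lambda a: np.mean(
                (z_noisy - simulate_car(a[:2], a[2:], theta, 0.0,
                                        np.random.default_rng(3))) ** 2),
                lb=[-3] * 4, ub=[3] * 4, n_p=50, T=2000)[1]
            for _ in range(15)]                    # 15 independent runs
    print(var, np.mean(fits), np.min(fits), np.max(fits), np.std(fits))
```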
The performance in terms of fit variations and standard deviations for the three noise levels, i.e., 0.04, 0.06, and 0.08, is demonstrated in Table 12, Table 13 and Table 14, respectively. It is observed from Table 12, Table 13 and Table 14 that the AO fit decreases with increasing population and generation size. Table 12 shows that the minimum average fit, best fit, and worst fit achieved for noise level = 0.04 are 1.7 × 10⁻³, 1.0 × 10⁻³, and 3.0 × 10⁻³, respectively. Similarly, the corresponding values for noise levels 0.06 and 0.08, given in Table 13 and Table 14, are 2.7 × 10⁻³, 2.3 × 10⁻³, and 3.2 × 10⁻³ and 4.5 × 10⁻³, 4.1 × 10⁻³, and 4.9 × 10⁻³, respectively.
The performance of the AO method in terms of best fit for the three noise levels, i.e., 0.04, 0.06, and 0.08, is evaluated for the three variations in generation, 1000, 1500, and 2000, and population size, 30, 40, and 50. Figure 5 shows the fitness plots. The fitness curves in Figure 5a–c represent the best fit of the AO algorithm for noise variance = 0.04, Figure 5d–f show the best fit curves for noise variance = 0.06, and the best fit plots for noise variance = 0.08 are given in Figure 5g–i. Figure 5a–i show that the AO fit for the three noise levels decreases significantly with increasing generation and population size. Moreover, better fit results are achieved for smaller noise values with a greater number of generations and populations.
To confirm the natural behaviour of the AO strategy for different noise values, the performance of the AO method is also verified by fixing the population size (30, 40, or 50) and changing the generation size (1000, 1500, or 2000) for the three values of noise variance (0.04, 0.06, and 0.08); the fitness-based learning curves are presented in Figure 6. Figure 6a–c represent the AO fit with population = 30, the fitness plots for population = 40 are given in Figure 6d–f, and Figure 6g–i denote the fitness plots for population = 50. It can be seen from the fitness curves in Figure 6a–i that, for a fixed population and generation size, the AO fit at the lower noise levels, i.e., 0.04 and 0.06, is considerably lower than the fit at high noise, i.e., 0.08. Moreover, AO accomplishes the minimum fit for the smallest noise level, i.e., 0.04, for a fixed population size. Therefore, the curves in Figure 6 confirm that the performance of AO degrades noticeably for higher noise values.

4.3. Results Comparison with other Heuristics

To further investigate the exploration and exploitation phases of the AO, it is compared with the arithmetic optimization algorithm (AOA) [44], the sine cosine algorithm (SCA) [45], and the reptile search algorithm (RSA) [46] for 15 independent runs with 3 variations of generation (1000, 1500, 2000) and population (30, 40, 50). Table 15, Table 16 and Table 17 show the performance of all algorithms in terms of estimated weights and best fit for the 0.04, 0.06, and 0.08 noise levels. All algorithms give better results at the low noise level, i.e., 0.04, than at high noise, and for low noise the estimated weights are closer to the true values with minimum fit.
Table 18, Table 19 and Table 20 show the performance of the AO, AOA, RSA, and SCA algorithms in terms of average fit for all noise variances. For all noise variances, the AO algorithm gives better results than RSA, SCA, and AOA across all generation and population sizes.
The statistical analysis of AO, SCA, AOA, and RSA for multiple runs, noise variances, and population sizes at a constant generation size is shown in Figure 7. It is observed that for all noise variances, the AO achieves a lower (better) fit than RSA, AOA, and SCA. It is also observed that increasing the noise level degrades the performance of all algorithms. However, AO achieves the optimal fit in all scenarios.
To further investigate the performance of AO vs. RSA, AO vs. AOA, and AO vs. SCA, a nonparametric Mann-Whitney U test [47] is performed on the average fit values of all algorithms for noise variances 0.04, 0.06, and 0.08 with generations 1000, 1500, and 2000 and populations 30, 40, and 50. The Mann-Whitney U test is the nonparametric equivalent of the two-sample t-test. The significance level is 0.01, and a one-tailed hypothesis is used. The computed z-score is −6.29719, and the p-value is less than 0.00001; the result is significant at p < 0.01, as presented in Figure 8, Figure 9 and Figure 10.
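Such a one-sided test could be reproduced with SciPy along these lines (the sample values below are the AO and RSA average fits from Table 18; the pairing is illustrative):

```python
import numpy as np
from scipy.stats import mannwhitneyu

ao_fits = np.array([3.8e-3, 2.0e-3, 1.8e-3, 2.0e-3, 1.7e-3,
                    1.7e-3, 1.9e-3, 1.7e-3, 1.9e-3])
rsa_fits = np.array([6.1e-1, 2.5e-1, 2.7e-1, 5.0e-1, 2.9e-1,
                     2.5e-1, 3.7e-1, 2.8e-1, 2.1e-1])
# One-tailed test: is the AO fit stochastically smaller than the RSA fit?
u_stat, p_value = mannwhitneyu(ao_fits, rsa_fits, alternative='less')
print(f"U = {u_stat}, p = {p_value:.2g}, significant at 0.01: {p_value < 0.01}")
```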
The detailed simulations and statistics indicate that the AO-based swarm optimization heuristic effectively estimates the parameters of CAR systems. However, the real-time implementation of swarm optimization algorithms for practical system identification problems requires further investigation.

5. Conclusions

Following are the conclusions drawn from the extensive simulation results presented in the last section:
  • The strength of the swarm intelligence of the Aquila optimizer (AO) is effectively exploited for parameter estimation of control autoregressive (CAR) systems.
  • The performance of the AO improves with larger population and generation sizes, at the expense of computational cost, and the optimal fitness across different generations and populations is achieved with the exploitation adjustment factors β = 0.1 and μ = 0.1.
  • The robustness and accuracy of the AO decrease as the noise level increases.
  • The comparative study of the AO with heuristics based on AOA, SCA, and RSA establishes the efficacy of the proposed scheme, and the statistical analysis through the Mann-Whitney U test endorses the reliability of the AO scheme for CAR system identification.
The current study expands the application domain of swarm intelligence-based optimizers by exploiting the strength of the AO approach for system identification. Future work may consider applying the proposed methodology to solving other complex problems [48,49,50,51,52,53,54].

Author Contributions

Conceptualization, K.M. and N.I.C.; methodology, K.M.; software, K.M.; validation, N.I.C., M.A.Z.R. and Z.A.K.; formal analysis, Z.A.K.; writing—original draft preparation, K.M.; writing—review and editing, N.I.C., Z.A.K. and M.A.Z.R.; project administration, K.M.C. and A.H.M.; funding acquisition, K.M.C. and A.H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mehmood, A.; Raja, M.A.Z.; Shi, P.; Chaudhary, N.I. Weighted differential evolution-based heuristic computing for identification of Hammerstein systems in electrically stimulated muscle modeling. Soft Comput. 2022, 1–17. [Google Scholar] [CrossRef]
  2. Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Mehmood, A.; Shah, S.M. Design of fractional hierarchical gradient descent algorithm for parameter estimation of nonlinear control autoregressive systems. Chaos Solitons Fractals 2022, 157, 111913. [Google Scholar] [CrossRef]
  3. Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Cheema, K.M.; Milyani, A.H. Hierarchical Quasi-Fractional Gradient Descent Method for Parameter Estimation of Nonlinear ARX Systems Using Key Term Separation Principle. Mathematics 2021, 9, 3302. [Google Scholar] [CrossRef]
  4. Shen, Q.; Ding, F. Least squares identification for Hammerstein multi-input multi-output systems based on the key-term separation technique. Circuits Syst. Signal Process. 2016, 35, 3745–3758. [Google Scholar] [CrossRef]
  5. Begum, N.; Dadashpour, M.; Kleppe, J. Subchapter 1.6—A case study of reservoir parameter estimation in Norne oil field, Norway by using Ensemble Kalman Filter (EnKF). In Innovative Exploration Methods for Minerals, Oil, Gas, and Groundwater for Sustainable Development; Moitra, A.K., Kayal, J.R., Mukerji, B., Bhattacharya, J., Eds.; Elsevier: Amsterdam, The Netherlands, 2022; pp. 61–78. [Google Scholar] [CrossRef]
  6. Hwang, J.K.; Shin, J. Identification of interarea modes from ambient data of phasor measurement units using an autoregressive exogenous model. IEEE Access 2021, 9, 45695–45705. [Google Scholar] [CrossRef]
  7. Javed, U.; Ijaz, K.; Jawad, M.; Ansari, E.A.; Shabbir, N.; Kütt, L.; Husev, O. Exploratory Data Analysis Based Short-Term Electrical Load Forecasting: A Comprehensive Analysis. Energies 2021, 14, 5510. [Google Scholar] [CrossRef]
  8. Dong, G.; Chen, Z.; Wei, J. Sequential monte carlo filter for state-of-charge estimation of lithium-ion batteries based on auto regressive exogenous model. IEEE Trans. Ind. Electron. 2019, 66, 8533–8544. [Google Scholar] [CrossRef]
  9. Basu, B.; Morrissey, P.; Gill, L.W. Application of nonlinear time series and machine learning algorithms for forecasting groundwater flooding in a lowland karst area. Water Resour. Res. 2022, 58, e2021WR029576. [Google Scholar] [CrossRef]
  10. Shabani, E.; Ghorbani, M.A.; Inyurt, S. The power of the GP-ARX model in CO2 emission forecasting. In Risk, Reliability and Sustainable Remediation in the Field of Civil and Environmental Engineering; Elsevier: Amsterdam, The Netherlands, 2022; pp. 79–91. [Google Scholar] [CrossRef]
  11. Ding, F.; Lv, L.; Pan, J.; Wan, X.; Jin, X.-B. Two-stage gradient-based iterative estimation methods for controlled autoregressive systems using the measurement data. Int. J. Control Autom. Syst. 2020, 18, 886–896. [Google Scholar] [CrossRef]
  12. Raja, M.A.Z.; Shah, A.A.; Mehmood, A.; Chaudhary, N.I.; Aslam, M.S. Bio-inspired computational heuristics for parameter estimation of nonlinear Hammerstein controlled autoregressive system. Neural Comput. Appl. 2018, 29, 1455–1474. [Google Scholar] [CrossRef]
  13. Mehmood, A.; Zameer, A.; Raja, M.A.Z.; Bibi, R.; Chaudhary, N.I.; Aslam, M.S. Nature-inspired heuristic paradigms for parameter estimation of control autoregressive moving average systems. Neural Comput. Appl. 2019, 31, 5819–5842. [Google Scholar] [CrossRef]
  14. Tariq, H.B. Maximum-Likelihood-Based Adaptive and Intelligent Computing for Nonlinear System Identification. Mathematics 2021, 9, 3199. [Google Scholar] [CrossRef]
  15. Dong, J.; Wang, Z.; Mo, J. A phase angle-modulated bat algorithm with application to antenna topology optimization. Appl. Sci. 2021, 11, 2243. [Google Scholar] [CrossRef]
  16. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-qaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110. [Google Scholar] [CrossRef]
  17. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  18. Altaf, F.; Chang, C.L.; Chaudhary, N.I.; Raja, M.A.Z.; Cheema, K.M.; Shu, C.M.; Milyani, A.H. Adaptive Evolutionary Computation for Nonlinear Hammerstein Control Autoregressive Systems with Key Term Separation Principle. Mathematics 2022, 10, 1001. [Google Scholar] [CrossRef]
  19. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  20. Malik, M.F.; Chang, C.L.; Aslam, M.S.; Chaudhary, N.I.; Raja, M.A.Z. Fuzzy-Evolution Computing Paradigm for Fractional Hammerstein Control Autoregressive Systems. Int. J. Fuzzy Syst. 2022, 1–29. [Google Scholar] [CrossRef]
  21. Cui, T.; Xu, L.; Ding, F.; Alsaedi, A.; Hayat, T. Maximum likelihood-based adaptive differential evolution identification algorithm for multivariable systems in the state-space form. Int. J. Adapt. Control Signal Process. 2020, 34, 1658–1676. [Google Scholar] [CrossRef]
  22. Cheraghalipour, A.; Hajiaghaei-Keshteli, M.; Paydar, M.M. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems. Eng. Appl. Artif. Intell. 2018, 72, 393–414. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Wang, R.; Yang, J.; Ding, K.; Li, Y.; Hu, J. Collective decision optimization algorithm: A new heuristic optimization method. Neurocomputing 2017, 221, 123–137. [Google Scholar] [CrossRef]
  24. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar] [CrossRef]
  25. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  26. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  27. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  29. Malik, N.A.; Chang, C.L.; Chaudhary, N.I.; Raja, M.A.Z.; Cheema, K.M.; Shu, C.M.; Alshamrani, S.S. Knacks of Fractional Order Swarming Intelligence for Parameter Estimation of Harmonics in Electrical Systems. Mathematics 2022, 10, 1570. [Google Scholar] [CrossRef]
  30. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  31. Yazdani, M.; Jolai, F. Lion optimization algorithm (LOA): A nature-inspired metaheuristic algorithm. J. Comput. Des. Eng. 2016, 3, 24–36. [Google Scholar] [CrossRef] [Green Version]
  32. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  33. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  34. Elaziz, M.A.; Dahou, A.; Alsaleh, N.A.; Elsheikh, A.H.; Saba, A.I.; Ahmadein, M. Boosting COVID-19 Image Classification Using MobileNetV3 and Aquila Optimizer Algorithm. Entropy 2021, 23, 1383. [Google Scholar] [CrossRef] [PubMed]
  35. Hussan, M.R.; Sarwar, M.I.; Sarwar, A.; Tariq, M.; Ahmad, S.; Shah Noor Mohamed, A.; Khan, I.A.; Ali Khan, M.M. Aquila Optimization Based Harmonic Elimination in a Modified H-Bridge Inverter. Sustainability 2022, 14, 929. [Google Scholar] [CrossRef]
  36. Khamees, K.; Abdelaziz, A.Y.; Eskaros, M.R.; Alhelou, H.H.; Attia, M.A. Stochastic Modeling for Wind Energy and Multi-Objective Optimal Power Flow by Novel Meta-Heuristic Method. IEEE Access 2021, 9, 158353–158366. [Google Scholar] [CrossRef]
  37. Fatani, A.; Dahou, A.; Al-Qaness, M.A.; Lu, S.; Elaziz, M.A. Advanced Feature Extraction and Selection Approach Using Deep Learning and Aquila Optimizer for IoT Intrusion Detection System. Sensors 2022, 22, 140. [Google Scholar] [CrossRef]
  38. Rajinikanth, V.; Aslam, S.M.; Kadry, S.; Thinnukool, O. Semi/Fully-Automated Segmentation of Gastric-Polyp Using Aquila-Optimization-Algorithm Enhanced Images. CMC-Comput. Mater. Contin. 2022, 70, 4087–4105. [Google Scholar] [CrossRef]
  39. AlRassas, A.M.; Al-qaness, M.A.; Ewees, A.A.; Ren, S.; Abd Elaziz, M.; Damaševičius, R.; Krilavičius, T. Optimized ANFIS model using Aquila Optimizer for oil production forecasting. Processes 2021, 9, 1194. [Google Scholar] [CrossRef]
  40. Vashishtha, G.; Kumar, R. Autocorrelation energy and aquila optimizer for MED filtering of sound signal to detect bearing defect in Francis turbine. Meas. Sci. Technol. 2021, 33, 15006. [Google Scholar] [CrossRef]
  41. Wang, S.; Ma, J.; Li, W.; Khayatnezhad, M.; Rouyendegh, B.D. An optimal configuration for hybrid SOFC, gas turbine, and Proton Exchange Membrane Electrolyzer using a developed Aquila Optimizer. Int. J. Hydrogen Energy 2022, 47, 8943–8955. [Google Scholar] [CrossRef]
  42. Khamees, K.; Abdelaziz, A.Y.; Eskaros, M.R.; El-Shahat, A.; Attia, M.A. Optimal Power Flow Solution of Wind-Integrated Power System Using Novel Metaheuristic Method. Energies 2021, 14, 6117. [Google Scholar] [CrossRef]
  43. Ma, L.; Li, J.; Zhao, Y. Population Forecast of China’s Rural Community Based on CFANGBM and Improved Aquila Optimizer Algorithm. Fractal Fract. 2021, 5, 190. [Google Scholar] [CrossRef]
  44. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  45. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  46. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  47. MacFarland, T.W.; Yates, J.M. Mann–whitney u test. In Introduction to Nonparametric Statistics for the Biological Sciences Using R; Springer: Cham, Switzerland, 2016; pp. 103–132. [Google Scholar] [CrossRef]
  48. Stodola, P. Using metaheuristics on the multi-depot vehicle routing problem with modified optimization criterion. Algorithms 2018, 11, 74. [Google Scholar] [CrossRef] [Green Version]
  49. Stodola, P.; Michenka, K.; Nohel, J.; Rybanský, M. Hybrid algorithm based on ant colony optimization and simulated annealing applied to the dynamic traveling salesman problem. Entropy 2020, 22, 884. [Google Scholar] [CrossRef]
  50. Stodola, P.; Mazal, J. Model of optimal cooperative reconnaissance and its solution using metaheuristic methods. Def. Sci. J. 2017, 67, 529. [Google Scholar] [CrossRef] [Green Version]
  51. Stodola, P.; Mazal, J. Applying the ant colony optimisation algorithm to the capacitated multi-depot vehicle routing problem. Int. J. Bio-Inspired Comput. 2016, 8, 228–233. [Google Scholar] [CrossRef]
  52. Stodola, P.; Mazal, J. Tactical models based on a multi-depot vehicle routing problem using the ant colony optimization algorithm. Int. J. Math. Models Methods Appl. Sci. 2015, 9, 330–337. Available online: https://www.naun.org/main/NAUN/ijmmas/2015/a782001-387.pdf (accessed on 10 February 2022).
  53. Chaudhary, N.I.; Raja, M.A.Z.; He, Y.; Khan, Z.A.; Machado, J.T. Design of multi innovation fractional LMS algorithm for parameter estimation of input nonlinear control autoregressive systems. Appl. Math. Model. 2021, 93, 412–425. [Google Scholar] [CrossRef]
  54. Chaudhary, N.I.; Latif, R.; Raja, M.A.Z.; Machado, J.T. An innovative fractional order LMS algorithm for power signal parameter estimation. Appl. Math. Model. 2020, 83, 703–718. [Google Scholar] [CrossRef]
Figure 1. Graphical abstract of the proposed study.
Figure 2. AO flowchart.
Figure 3. Fitness plots for AO with respect to population sizes. (a) T = 1000. (b) T = 1500. (c) T = 2000.
Figure 4. Fitness plots for AO with respect to generations. (a) Np = 30. (b) Np = 40. (c) Np = 50.
Figure 5. Fitness plots for AO with respect to population sizes at the three noise levels: (a–c) noise variance = 0.04, (d–f) noise variance = 0.06, (g–i) noise variance = 0.08; (a,d,g) T = 1000, (b,e,h) T = 1500, (c,f,i) T = 2000.
Figure 6. Fitness plots for AO with respect to generations at fixed population sizes: (a–c) Np = 30, (d–f) Np = 40, (g–i) Np = 50.
Figure 7. Statistical analyses plots of AO, AOA, SCA, and RSA for Np = 50, T = 2000. (a) Noise level = 0.04. (b) Noise level = 0.06. (c) Noise level = 0.08.
Figure 8. Mann-Whitney U test between AO and AOA, where * p < 0.01.
Figure 9. Mann-Whitney U test between AO and SCA, where * p < 0.01.
Figure 10. Mann-Whitney U test between AO and RSA, where * p < 0.01.
Table 1. Parameter-tuning cases for AO analysis.

Case No. | β Value | μ Value
1 | 0.9 | 0.9
2 | 0.9 | 0.5
3 | 0.9 | 0.1
4 | 0.5 | 0.9
5 | 0.5 | 0.5
6 | 0.5 | 0.1
7 | 0.1 | 0.9
8 | 0.1 | 0.5
9 | 0.1 | 0.1
Table 2. AO parameter analysis for case 1.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 1.2 × 10⁻³ | 1.2 × 10⁻⁴ | 3.3 × 10⁻³ | 8.7 × 10⁻⁴
1000 | 40 | 1.3 × 10⁻³ | 1.8 × 10⁻⁴ | 4.0 × 10⁻³ | 1.0 × 10⁻³
1000 | 50 | 6.9 × 10⁻⁴ | 1.4 × 10⁻⁴ | 2.4 × 10⁻³ | 6.1 × 10⁻⁴
1500 | 30 | 1.5 × 10⁻³ | 4.1 × 10⁻⁴ | 4.1 × 10⁻³ | 1.2 × 10⁻³
1500 | 40 | 8.8 × 10⁻⁴ | 6.2 × 10⁻⁵ | 2.2 × 10⁻³ | 6.7 × 10⁻⁴
1500 | 50 | 7.0 × 10⁻⁴ | 9.0 × 10⁻⁵ | 1.6 × 10⁻³ | 4.7 × 10⁻⁴
2000 | 30 | 6.4 × 10⁻⁴ | 1.5 × 10⁻⁴ | 2.0 × 10⁻³ | 4.9 × 10⁻⁴
2000 | 40 | 7.7 × 10⁻⁴ | 1.7 × 10⁻⁴ | 2.5 × 10⁻³ | 6.4 × 10⁻⁴
2000 | 50 | 4.5 × 10⁻⁴ | 1.2 × 10⁻⁴ | 1.1 × 10⁻³ | 3.1 × 10⁻⁴
Table 3. AO parameter analysis for case 2.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 4.4 × 10⁻³ | 3.9 × 10⁻⁴ | 12 × 10⁻³ | 3.5 × 10⁻³
1000 | 40 | 2.6 × 10⁻³ | 4.3 × 10⁻⁵ | 16 × 10⁻³ | 3.9 × 10⁻³
1000 | 50 | 2.8 × 10⁻³ | 8.9 × 10⁻⁵ | 11 × 10⁻³ | 2.9 × 10⁻³
1500 | 30 | 2.9 × 10⁻³ | 1.3 × 10⁻⁴ | 9.0 × 10⁻³ | 3.0 × 10⁻³
1500 | 40 | 1.1 × 10⁻³ | 1.9 × 10⁻⁵ | 2.7 × 10⁻³ | 8.3 × 10⁻⁴
1500 | 50 | 1.1 × 10⁻³ | 1.2 × 10⁻⁴ | 3.1 × 10⁻³ | 7.9 × 10⁻⁴
2000 | 30 | 1.0 × 10⁻³ | 1.9 × 10⁻⁴ | 2.8 × 10⁻³ | 7.4 × 10⁻⁴
2000 | 40 | 1.1 × 10⁻³ | 3.3 × 10⁻⁴ | 3.1 × 10⁻³ | 7.3 × 10⁻⁴
2000 | 50 | 1.6 × 10⁻³ | 2.2 × 10⁻⁴ | 6.5 × 10⁻³ | 1.7 × 10⁻³
Table 4. AO parameter analysis for case 3.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 1.9 × 10⁻³ | 1.4 × 10⁻⁴ | 5.3 × 10⁻³ | 1.5 × 10⁻³
1000 | 40 | 1.4 × 10⁻³ | 1.5 × 10⁻⁴ | 3.9 × 10⁻³ | 1.2 × 10⁻³
1000 | 50 | 6.9 × 10⁻⁴ | 4.1 × 10⁻⁵ | 1.8 × 10⁻³ | 4.9 × 10⁻⁴
1500 | 30 | 1.0 × 10⁻³ | 1.0 × 10⁻⁴ | 5.2 × 10⁻³ | 1.3 × 10⁻³
1500 | 40 | 8.5 × 10⁻⁴ | 1.7 × 10⁻⁴ | 2.5 × 10⁻³ | 6.7 × 10⁻⁴
1500 | 50 | 6.4 × 10⁻⁴ | 1.1 × 10⁻⁴ | 1.8 × 10⁻³ | 5.2 × 10⁻⁴
2000 | 30 | 8.5 × 10⁻⁴ | 1.2 × 10⁻⁴ | 2.2 × 10⁻³ | 5.6 × 10⁻⁴
2000 | 40 | 5.2 × 10⁻⁴ | 3.0 × 10⁻⁵ | 1.0 × 10⁻³ | 3.3 × 10⁻⁴
2000 | 50 | 5.6 × 10⁻⁴ | 8.2 × 10⁻⁵ | 2.4 × 10⁻³ | 5.7 × 10⁻⁴
Table 5. AO parameter analysis for case 4.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 4.4 × 10⁻³ | 3.9 × 10⁻⁴ | 12 × 10⁻³ | 3.5 × 10⁻³
1000 | 40 | 2.6 × 10⁻³ | 4.3 × 10⁻⁴ | 16 × 10⁻³ | 3.9 × 10⁻³
1000 | 50 | 2.8 × 10⁻³ | 8.9 × 10⁻⁵ | 11.5 × 10⁻³ | 2.9 × 10⁻³
1500 | 30 | 2.9 × 10⁻³ | 1.3 × 10⁻⁴ | 9.0 × 10⁻³ | 3.0 × 10⁻³
1500 | 40 | 1.1 × 10⁻³ | 1.9 × 10⁻⁵ | 2.7 × 10⁻³ | 8.3 × 10⁻⁴
1500 | 50 | 1.1 × 10⁻³ | 1.3 × 10⁻⁴ | 3.1 × 10⁻³ | 7.9 × 10⁻⁴
2000 | 30 | 1.0 × 10⁻³ | 1.9 × 10⁻⁴ | 2.8 × 10⁻³ | 7.4 × 10⁻⁴
2000 | 40 | 1.1 × 10⁻⁴ | 3.3 × 10⁻⁴ | 3.1 × 10⁻³ | 7.3 × 10⁻⁴
2000 | 50 | 1.6 × 10⁻³ | 2.2 × 10⁻⁴ | 6.5 × 10⁻³ | 1.7 × 10⁻³
Table 6. AO parameter analysis for case 5.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 1.1 × 10⁻³ | 1.7 × 10⁻⁴ | 3.4 × 10⁻³ | 8.9 × 10⁻⁴
1000 | 40 | 5.4 × 10⁻⁴ | 7.2 × 10⁻⁵ | 1.4 × 10⁻³ | 3.6 × 10⁻⁴
1000 | 50 | 5.7 × 10⁻⁴ | 2.3 × 10⁻⁵ | 1.2 × 10⁻³ | 2.9 × 10⁻⁴
1500 | 30 | 6.8 × 10⁻⁴ | 6.7 × 10⁻⁵ | 1.7 × 10⁻³ | 5.5 × 10⁻⁴
1500 | 40 | 2.2 × 10⁻⁴ | 7.4 × 10⁻⁶ | 6.6 × 10⁻⁴ | 2.1 × 10⁻⁴
1500 | 50 | 3.7 × 10⁻⁴ | 1.3 × 10⁻⁴ | 1.2 × 10⁻³ | 3.3 × 10⁻⁴
2000 | 30 | 3.2 × 10⁻⁴ | 3.9 × 10⁻⁵ | 9.6 × 10⁻⁴ | 3.1 × 10⁻⁴
2000 | 40 | 2.6 × 10⁻⁴ | 3.1 × 10⁻⁵ | 6.1 × 10⁻⁴ | 1.8 × 10⁻⁴
2000 | 50 | 2.1 × 10⁻⁴ | 2.1 × 10⁻⁵ | 5.6 × 10⁻⁴ | 1.5 × 10⁻⁴
Table 7. AO parameter analysis for case 6.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 1.2 × 10⁻³ | 1.2 × 10⁻⁴ | 3.3 × 10⁻³ | 8.7 × 10⁻⁴
1000 | 40 | 1.3 × 10⁻³ | 1.8 × 10⁻⁴ | 4.0 × 10⁻³ | 1.0 × 10⁻³
1000 | 50 | 6.9 × 10⁻⁴ | 1.4 × 10⁻⁴ | 2.4 × 10⁻³ | 6.1 × 10⁻⁴
1500 | 30 | 1.5 × 10⁻³ | 4.1 × 10⁻⁴ | 4.1 × 10⁻³ | 1.2 × 10⁻³
1500 | 40 | 8.8 × 10⁻⁴ | 6.2 × 10⁻⁵ | 2.2 × 10⁻³ | 6.7 × 10⁻⁴
1500 | 50 | 7.0 × 10⁻⁴ | 9.0 × 10⁻⁵ | 1.6 × 10⁻³ | 4.7 × 10⁻⁴
2000 | 30 | 6.4 × 10⁻⁴ | 1.5 × 10⁻⁴ | 2.0 × 10⁻³ | 4.9 × 10⁻⁴
2000 | 40 | 7.7 × 10⁻⁴ | 1.7 × 10⁻⁴ | 2.5 × 10⁻³ | 6.4 × 10⁻⁴
2000 | 50 | 4.5 × 10⁻⁴ | 1.2 × 10⁻⁴ | 1.1 × 10⁻³ | 3.1 × 10⁻⁴
Table 8. AO parameter analysis for case 7.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 1.9 × 10⁻³ | 1.7 × 10⁻⁴ | 5.3 × 10⁻³ | 1.5 × 10⁻³
1000 | 40 | 1.4 × 10⁻³ | 7.2 × 10⁻⁵ | 3.9 × 10⁻³ | 1.2 × 10⁻³
1000 | 50 | 6.9 × 10⁻⁴ | 2.3 × 10⁻⁵ | 1.8 × 10⁻³ | 4.9 × 10⁻⁴
1500 | 30 | 1.0 × 10⁻³ | 6.7 × 10⁻⁵ | 5.2 × 10⁻³ | 1.3 × 10⁻³
1500 | 40 | 8.5 × 10⁻⁴ | 7.4 × 10⁻⁶ | 2.5 × 10⁻³ | 6.7 × 10⁻⁴
1500 | 50 | 6.4 × 10⁻⁴ | 1.3 × 10⁻⁴ | 1.8 × 10⁻³ | 5.2 × 10⁻⁴
2000 | 30 | 8.5 × 10⁻⁴ | 3.9 × 10⁻⁵ | 2.2 × 10⁻³ | 5.6 × 10⁻⁴
2000 | 40 | 5.2 × 10⁻⁴ | 3.1 × 10⁻⁵ | 1.0 × 10⁻³ | 3.3 × 10⁻⁴
2000 | 50 | 5.6 × 10⁻⁴ | 2.1 × 10⁻⁵ | 2.4 × 10⁻³ | 5.7 × 10⁻⁴
Table 9. AO parameter analysis for case 8.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 7.3 × 10⁻⁴ | 2.6 × 10⁻⁵ | 3.0 × 10⁻³ | 9.0 × 10⁻⁴
1000 | 40 | 9.1 × 10⁻⁴ | 2.4 × 10⁻⁴ | 2.5 × 10⁻³ | 7.4 × 10⁻⁴
1000 | 50 | 6.9 × 10⁻⁴ | 4.3 × 10⁻⁵ | 1.7 × 10⁻³ | 5.3 × 10⁻⁴
1500 | 30 | 5.5 × 10⁻⁴ | 3.0 × 10⁻⁵ | 1.8 × 10⁻³ | 4.7 × 10⁻⁴
1500 | 40 | 5.3 × 10⁻⁴ | 1.0 × 10⁻⁴ | 1.1 × 10⁻³ | 2.5 × 10⁻⁴
1500 | 50 | 4.8 × 10⁻⁴ | 7.8 × 10⁻⁵ | 1.6 × 10⁻³ | 4.6 × 10⁻⁴
2000 | 30 | 4.2 × 10⁻⁴ | 8.9 × 10⁻⁵ | 1.5 × 10⁻³ | 3.5 × 10⁻⁴
2000 | 40 | 5.1 × 10⁻⁴ | 6.8 × 10⁻⁵ | 2.3 × 10⁻³ | 5.5 × 10⁻⁴
2000 | 50 | 3.7 × 10⁻⁴ | 4.8 × 10⁻⁵ | 8.6 × 10⁻⁴ | 2.1 × 10⁻⁴
Table 10. AO parameter analysis for case 9.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 6.9 × 10⁻⁴ | 1.1 × 10⁻⁴ | 2.6 × 10⁻³ | 6.2 × 10⁻⁴
1000 | 40 | 4.7 × 10⁻⁴ | 4.3 × 10⁻⁵ | 1.5 × 10⁻³ | 3.8 × 10⁻⁴
1000 | 50 | 5.2 × 10⁻⁴ | 3.5 × 10⁻⁵ | 2.2 × 10⁻³ | 5.7 × 10⁻⁴
1500 | 30 | 6.7 × 10⁻⁴ | 1.4 × 10⁻⁴ | 2.3 × 10⁻³ | 5.7 × 10⁻⁴
1500 | 40 | 4.6 × 10⁻⁴ | 2.8 × 10⁻⁵ | 1.4 × 10⁻³ | 4.3 × 10⁻⁴
1500 | 50 | 3.0 × 10⁻⁴ | 7.7 × 10⁻⁵ | 7.1 × 10⁻³ | 2.1 × 10⁻⁴
2000 | 30 | 3.3 × 10⁻⁴ | 3.6 × 10⁻⁵ | 9.6 × 10⁻⁴ | 2.3 × 10⁻⁴
2000 | 40 | 1.9 × 10⁻⁴ | 3.9 × 10⁻⁵ | 5.0 × 10⁻⁴ | 1.4 × 10⁻⁴
2000 | 50 | 2.8 × 10⁻⁴ | 4.5 × 10⁻⁵ | 1.1 × 10⁻³ | 2.5 × 10⁻⁴
Table 11. AO parameter tuning mean value analysis.

Cases | Mean Fitness Value
Case 1 | 908 × 10⁻⁶
Case 2 | 2.06 × 10⁻³
Case 3 | 938 × 10⁻⁶
Case 4 | 2.06 × 10⁻³
Case 5 | 479 × 10⁻⁶
Case 6 | 908 × 10⁻⁶
Case 7 | 938 × 10⁻⁶
Case 8 | 581 × 10⁻⁶
Case 9 | 440 × 10⁻⁶
Table 12. AO analysis with respect to generation and population sizes at 0.04 noise variance.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 3.8 × 10⁻³ | 1.0 × 10⁻³ | 9.3 × 10⁻³ | 2.7 × 10⁻³
1000 | 40 | 2.0 × 10⁻³ | 1.1 × 10⁻³ | 3.7 × 10⁻³ | 7.4 × 10⁻⁴
1000 | 50 | 1.8 × 10⁻³ | 1.1 × 10⁻³ | 4.0 × 10⁻³ | 8.9 × 10⁻⁴
1500 | 30 | 2.0 × 10⁻³ | 1.1 × 10⁻³ | 5.3 × 10⁻³ | 1.1 × 10⁻³
1500 | 40 | 1.7 × 10⁻³ | 1.1 × 10⁻³ | 3.0 × 10⁻³ | 6.0 × 10⁻⁴
1500 | 50 | 1.7 × 10⁻³ | 1.2 × 10⁻³ | 2.4 × 10⁻³ | 4.5 × 10⁻⁴
2000 | 30 | 1.9 × 10⁻³ | 1.1 × 10⁻³ | 3.2 × 10⁻³ | 4.8 × 10⁻⁴
2000 | 40 | 1.7 × 10⁻³ | 1.0 × 10⁻³ | 3.7 × 10⁻³ | 8.4 × 10⁻⁴
2000 | 50 | 1.9 × 10⁻³ | 1.1 × 10⁻³ | 5.6 × 10⁻³ | 1.3 × 10⁻³
Table 13. AO analysis with respect to generation and population sizes at 0.06 noise variance.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 4.6 × 10⁻³ | 2.3 × 10⁻³ | 10 × 10⁻³ | 2.4 × 10⁻³
1000 | 40 | 3.3 × 10⁻³ | 2.3 × 10⁻³ | 4.4 × 10⁻³ | 6.9 × 10⁻⁴
1000 | 50 | 3.3 × 10⁻³ | 2.3 × 10⁻³ | 5.9 × 10⁻³ | 1.0 × 10⁻³
1500 | 30 | 3.6 × 10⁻³ | 2.4 × 10⁻³ | 6.5 × 10⁻³ | 1.3 × 10⁻³
1500 | 40 | 3.4 × 10⁻³ | 2.5 × 10⁻³ | 8.1 × 10⁻³ | 1.4 × 10⁻³
1500 | 50 | 2.9 × 10⁻³ | 2.3 × 10⁻³ | 4.1 × 10⁻³ | 6.0 × 10⁻⁴
2000 | 30 | 3.1 × 10⁻³ | 2.3 × 10⁻³ | 7.1 × 10⁻³ | 1.2 × 10⁻³
2000 | 40 | 2.8 × 10⁻³ | 2.3 × 10⁻³ | 3.8 × 10⁻³ | 4.6 × 10⁻⁴
2000 | 50 | 2.7 × 10⁻³ | 2.3 × 10⁻³ | 3.2 × 10⁻³ | 2.7 × 10⁻⁴
Table 14. AO analysis with respect to generation and population sizes at 0.08 noise variance.

Generations (T) | Population (Np) | Average Fitness | Best Fitness | Worst Fitness | STD
1000 | 30 | 6.4 × 10⁻³ | 4.1 × 10⁻³ | 13.1 × 10⁻³ | 2.5 × 10⁻³
1000 | 40 | 5.0 × 10⁻³ | 4.1 × 10⁻³ | 7.0 × 10⁻³ | 7.8 × 10⁻⁴
1000 | 50 | 5.0 × 10⁻³ | 4.1 × 10⁻³ | 7.1 × 10⁻³ | 8.6 × 10⁻⁴
1500 | 30 | 5.1 × 10⁻³ | 4.2 × 10⁻³ | 8.2 × 10⁻³ | 1.0 × 10⁻³
1500 | 40 | 4.7 × 10⁻³ | 4.1 × 10⁻³ | 6.3 × 10⁻³ | 6.4 × 10⁻⁴
1500 | 50 | 4.8 × 10⁻³ | 4.0 × 10⁻³ | 6.8 × 10⁻³ | 7.1 × 10⁻⁴
2000 | 30 | 4.6 × 10⁻³ | 4.1 × 10⁻³ | 4.9 × 10⁻³ | 2.2 × 10⁻⁴
2000 | 40 | 4.6 × 10⁻³ | 4.1 × 10⁻³ | 5.6 × 10⁻³ | 3.9 × 10⁻⁴
2000 | 50 | 4.5 × 10⁻³ | 4.1 × 10⁻³ | 5.0 × 10⁻³ | 2.5 × 10⁻⁴
Table 15. Comparison of AO with AOA, SCA, and RSA against the true values for the CAR model at 0.04 noise variance.

Algorithm | Generations (T) | Population (Np) | r1 | r2 | s1 | s2 | Best Fit
AO | 1000 | 30 | 1.362 | 0.767 | 1.692 | 2.338 | 1.0 × 10⁻³
AO | 1000 | 40 | 1.357 | 0.765 | 1.706 | 2.308 | 1.1 × 10⁻³
AO | 1000 | 50 | 1.353 | 0.764 | 1.712 | 2.298 | 1.1 × 10⁻³
AO | 1500 | 30 | 1.361 | 0.759 | 1.702 | 2.319 | 1.1 × 10⁻³
AO | 1500 | 40 | 1.356 | 0.768 | 1.704 | 2.316 | 1.1 × 10⁻³
AO | 1500 | 50 | 1.357 | 0.759 | 1.703 | 2.302 | 1.2 × 10⁻³
AO | 2000 | 30 | 1.366 | 0.771 | 1.723 | 2.321 | 1.1 × 10⁻³
AO | 2000 | 40 | 1.359 | 0.760 | 1.686 | 2.329 | 1.0 × 10⁻³
AO | 2000 | 50 | 1.348 | 0.757 | 1.717 | 2.284 | 1.1 × 10⁻³
RSA | 1000 | 30 | 1.190 | 0.599 | 1.740 | 1.855 | 5.3 × 10⁻²
RSA | 1000 | 40 | 1.376 | 0.769 | 1.619 | 2.418 | 3.1 × 10⁻³
RSA | 1000 | 50 | 1.201 | 0.569 | 1.371 | 2.042 | 1.0 × 10⁻¹
RSA | 1500 | 30 | 1.281 | 0.734 | 1.663 | 2.178 | 1.9 × 10⁻²
RSA | 1500 | 40 | 1.294 | 0.649 | 1.384 | 2.381 | 2.5 × 10⁻²
RSA | 1500 | 50 | 1.127 | 0.579 | 1.704 | 1.767 | 8.3 × 10⁻²
RSA | 2000 | 30 | 1.272 | 0.704 | 1.174 | 2.648 | 5.4 × 10⁻²
RSA | 2000 | 40 | 1.381 | 0.769 | 1.526 | 2.510 | 7.5 × 10⁻³
RSA | 2000 | 50 | 1.227 | 0.708 | 1.719 | 2.051 | 3.3 × 10⁻²
SCA | 1000 | 30 | 1.341 | 0.715 | 1.646 | 2.278 | 2.9 × 10⁻³
SCA | 1000 | 40 | 1.354 | 0.774 | 1.725 | 2.332 | 3.4 × 10⁻³
SCA | 1000 | 50 | 1.394 | 0.766 | 1.678 | 2.397 | 2.6 × 10⁻³
SCA | 1500 | 30 | 1.376 | 0.759 | 1.662 | 2.443 | 4.7 × 10⁻³
SCA | 1500 | 40 | 1.364 | 0.779 | 1.572 | 2.497 | 2.7 × 10⁻³
SCA | 1500 | 50 | 1.347 | 0.742 | 1.626 | 2.355 | 9.5 × 10⁻⁴
SCA | 2000 | 30 | 1.350 | 0.753 | 1.694 | 2.334 | 2.5 × 10⁻³
SCA | 2000 | 40 | 1.326 | 0.782 | 1.600 | 2.389 | 4.1 × 10⁻³
SCA | 2000 | 50 | 1.312 | 0.704 | 1.691 | 2.205 | 2.8 × 10⁻³
AOA | 1000 | 30 | 1.346 | 0.866 | 1.494 | 2.613 | 1.9 × 10⁻²
AOA | 1000 | 40 | 1.449 | 0.881 | 1.465 | 2.833 | 1.7 × 10⁻²
AOA | 1000 | 50 | 1.353 | 0.769 | 1.555 | 2.519 | 8.3 × 10⁻³
AOA | 1500 | 30 | 1.394 | 0.762 | 1.791 | 2.250 | 6.6 × 10⁻³
AOA | 1500 | 40 | 1.345 | 0.726 | 1.649 | 2.358 | 6.2 × 10⁻³
AOA | 1500 | 50 | 1.385 | 0.800 | 1.613 | 2.512 | 3.5 × 10⁻³
AOA | 2000 | 30 | 1.331 | 0.677 | 1.455 | 2.409 | 1.5 × 10⁻²
AOA | 2000 | 40 | 1.383 | 0.724 | 1.569 | 2.433 | 7.8 × 10⁻³
AOA | 2000 | 50 | 1.346 | 0.742 | 1.763 | 2.166 | 8.6 × 10⁻³
True Values | — | — | 1.350 | 0.750 | 1.680 | 2.320 | 0
Table 16. Comparison of AO with AOA, SCA, and RSA against the true values for the CAR model at 0.06 noise variance.

Algorithm | Generations (T) | Population (Np) | r1 | r2 | s1 | s2 | Best Fit
AO | 1000 | 30 | 1.371 | 0.771 | 1.701 | 2.339 | 2.3 × 10⁻³
AO | 1000 | 40 | 1.369 | 0.777 | 1.715 | 2.336 | 2.3 × 10⁻³
AO | 1000 | 50 | 1.368 | 0.773 | 1.700 | 2.340 | 2.3 × 10⁻³
AO | 1500 | 30 | 1.366 | 0.771 | 1.744 | 2.300 | 2.4 × 10⁻³
AO | 1500 | 40 | 1.362 | 0.761 | 1.733 | 2.291 | 2.5 × 10⁻³
AO | 1500 | 50 | 1.356 | 0.769 | 1.718 | 2.305 | 2.3 × 10⁻³
AO | 2000 | 30 | 1.367 | 0.770 | 1.711 | 2.327 | 2.3 × 10⁻³
AO | 2000 | 40 | 1.364 | 0.776 | 1.723 | 2.317 | 2.3 × 10⁻³
AO | 2000 | 50 | 1.363 | 0.772 | 1.706 | 2.333 | 2.3 × 10⁻³
RSA | 1000 | 30 | 1.200 | 0.657 | 1.159 | 2.531 | 8.1 × 10⁻²
RSA | 1000 | 40 | 1.202 | 0.600 | 1.789 | 1.882 | 5.9 × 10⁻²
RSA | 1000 | 50 | 1.183 | 0.580 | 1.425 | 2.118 | 6.1 × 10⁻²
RSA | 1500 | 30 | 1.120 | 0.582 | 1.584 | 1.850 | 9.7 × 10⁻²
RSA | 1500 | 40 | 1.253 | 0.645 | 1.091 | 2.681 | 7.2 × 10⁻²
RSA | 1500 | 50 | 1.265 | 0.683 | 1.431 | 2.396 | 2.4 × 10⁻²
RSA | 2000 | 30 | 1.102 | 0.600 | 1.888 | 1.557 | 1.3 × 10⁻¹
RSA | 2000 | 40 | 1.353 | 0.762 | 1.341 | 2.726 | 3.0 × 10⁻²
RSA | 2000 | 50 | 1.342 | 0.758 | 1.349 | 2.610 | 2.7 × 10⁻²
SCA | 1000 | 30 | 1.341 | 0.715 | 1.646 | 2.278 | 6.1 × 10⁻³
SCA | 1000 | 40 | 1.354 | 0.774 | 1.725 | 2.332 | 4.5 × 10⁻³
SCA | 1000 | 50 | 1.394 | 0.766 | 1.678 | 2.397 | 3.3 × 10⁻³
SCA | 1500 | 30 | 1.376 | 0.759 | 1.662 | 2.443 | 3.2 × 10⁻³
SCA | 1500 | 40 | 1.364 | 0.779 | 1.572 | 2.497 | 2.3 × 10⁻³
SCA | 1500 | 50 | 1.347 | 0.742 | 1.626 | 2.355 | 2.8 × 10⁻³
SCA | 2000 | 30 | 1.350 | 0.753 | 1.694 | 2.334 | 2.5 × 10⁻³
SCA | 2000 | 40 | 1.326 | 0.782 | 1.600 | 2.389 | 2.1 × 10⁻³
SCA | 2000 | 50 | 1.312 | 0.704 | 1.691 | 2.205 | 2.3 × 10⁻³
AOA | 1000 | 30 | 1.427 | 0.870 | 1.470 | 2.808 | 1.0 × 10⁻²
AOA | 1000 | 40 | 1.430 | 0.811 | 1.295 | 2.854 | 2.3 × 10⁻²
AOA | 1000 | 50 | 1.315 | 0.737 | 1.738 | 2.167 | 6.0 × 10⁻³
AOA | 1500 | 30 | 1.419 | 0.789 | 1.446 | 2.662 | 1.2 × 10⁻²
AOA | 1500 | 40 | 1.339 | 0.779 | 1.702 | 2.328 | 4.5 × 10⁻³
AOA | 1500 | 50 | 1.369 | 0.794 | 1.077 | 3.000 | 1.1 × 10⁻²
AOA | 2000 | 30 | 1.281 | 0.789 | 1.599 | 2.357 | 1.4 × 10⁻²
AOA | 2000 | 40 | 1.351 | 0.684 | 1.579 | 2.307 | 1.2 × 10⁻²
AOA | 2000 | 50 | 1.399 | 0.806 | 1.754 | 2.346 | 7.6 × 10⁻³
True Values | — | — | 1.350 | 0.750 | 1.680 | 2.320 | 0
Table 17. Comparison of AO with AOA, SCA, and RSA against the true values for the CAR model at 0.08 noise variance.

Algorithm | Generations (T) | Population (Np) | r1 | r2 | s1 | s2 | Best Fit
AO | 1000 | 30 | 1.357 | 0.772 | 1.754 | 2.271 | 4.1 × 10⁻³
AO | 1000 | 40 | 1.370 | 0.772 | 1.715 | 2.322 | 4.1 × 10⁻³
AO | 1000 | 50 | 1.364 | 0.771 | 1.712 | 2.318 | 4.1 × 10⁻³
AO | 1500 | 30 | 1.380 | 0.784 | 1.737 | 2.337 | 4.2 × 10⁻³
AO | 1500 | 40 | 1.371 | 0.778 | 1.707 | 2.344 | 4.1 × 10⁻³
AO | 1500 | 50 | 1.372 | 0.776 | 1.722 | 2.328 | 4.0 × 10⁻³
AO | 2000 | 30 | 1.367 | 0.776 | 1.741 | 2.312 | 4.1 × 10⁻³
AO | 2000 | 40 | 1.372 | 0.775 | 1.746 | 2.304 | 4.1 × 10⁻³
AO | 2000 | 50 | 1.358 | 0.776 | 1.763 | 2.272 | 4.1 × 10⁻³
RSA | 1000 | 30 | 1.143 | 0.529 | 1.575 | 1.928 | 1.0 × 10⁻¹
RSA | 1000 | 40 | 1.148 | 0.604 | 2.135 | 1.461 | 1.4 × 10⁻¹
RSA | 1000 | 50 | 1.141 | 0.562 | 1.283 | 2.256 | 1.0 × 10⁻¹
RSA | 1500 | 30 | 1.120 | 0.598 | 1.752 | 1.709 | 1.1 × 10⁻¹
RSA | 1500 | 40 | 1.277 | 0.690 | 1.318 | 2.498 | 3.6 × 10⁻²
RSA | 1500 | 50 | 1.293 | 0.694 | 1.210 | 2.656 | 4.7 × 10⁻²
RSA | 2000 | 30 | 1.205 | 0.661 | 1.549 | 2.053 | 7.1 × 10⁻²
RSA | 2000 | 40 | 1.323 | 0.706 | 1.275 | 2.556 | 4.2 × 10⁻²
RSA | 2000 | 50 | 1.304 | 0.733 | 1.536 | 2.333 | 2.3 × 10⁻²
SCA | 1000 | 30 | 1.416 | 0.831 | 1.623 | 2.574 | 7.9 × 10⁻³
SCA | 1000 | 40 | 1.359 | 0.806 | 1.688 | 2.380 | 6.4 × 10⁻³
SCA | 1000 | 50 | 1.350 | 0.738 | 1.705 | 2.289 | 4.1 × 10⁻³
SCA | 1500 | 30 | 1.364 | 0.801 | 1.602 | 2.486 | 4.7 × 10⁻³
SCA | 1500 | 40 | 1.375 | 0.788 | 1.679 | 2.379 | 4.0 × 10⁻³
SCA | 1500 | 50 | 1.317 | 0.755 | 1.695 | 2.255 | 5.3 × 10⁻³
SCA | 2000 | 30 | 1.344 | 0.773 | 1.623 | 2.366 | 3.9 × 10⁻³
SCA | 2000 | 40 | 1.399 | 0.778 | 1.658 | 2.429 | 4.5 × 10⁻³
SCA | 2000 | 50 | 1.340 | 0.713 | 1.667 | 2.250 | 5.1 × 10⁻³
AOA | 1000 | 30 | 1.427 | 0.870 | 1.470 | 2.808 | 2.1 × 10⁻²
AOA | 1000 | 40 | 1.430 | 0.811 | 1.295 | 2.854 | 1.0 × 10⁻²
AOA | 1000 | 50 | 1.315 | 0.737 | 1.738 | 2.167 | 1.1 × 10⁻²
AOA | 1500 | 30 | 1.419 | 0.789 | 1.446 | 2.662 | 1.4 × 10⁻²
AOA | 1500 | 40 | 1.339 | 0.779 | 1.702 | 2.328 | 1.3 × 10⁻²
AOA | 1500 | 50 | 1.369 | 0.794 | 1.077 | 3.000 | 1.3 × 10⁻²
AOA | 2000 | 30 | 1.281 | 0.789 | 1.599 | 2.357 | 7.6 × 10⁻³
AOA | 2000 | 40 | 1.351 | 0.684 | 1.579 | 2.307 | 5.7 × 10⁻³
AOA | 2000 | 50 | 1.399 | 0.806 | 1.754 | 2.346 | 9.8 × 10⁻³
True Values | — | — | 1.350 | 0.750 | 1.680 | 2.320 | 0
Table 18. Comparison of AO with RSA, SCA, and AOA against average fit for the CAR model at 0.04 noise variance.

Generations (T) | Population (Np) | AO | RSA | AOA | SCA
1000 | 30 | 3.8 × 10⁻³ | 6.1 × 10⁻¹ | 4.1 × 10⁻² | 1.0 × 10⁻¹
1000 | 40 | 2.0 × 10⁻³ | 2.5 × 10⁻¹ | 4.7 × 10⁻² | 6.4 × 10⁻²
1000 | 50 | 1.8 × 10⁻³ | 2.7 × 10⁻¹ | 2.8 × 10⁻² | 1.0 × 10⁻¹
1500 | 30 | 2.0 × 10⁻³ | 5.0 × 10⁻¹ | 3.0 × 10⁻² | 1.2 × 10⁻¹
1500 | 40 | 1.7 × 10⁻³ | 2.9 × 10⁻¹ | 3.3 × 10⁻² | 5.9 × 10⁻²
1500 | 50 | 1.7 × 10⁻³ | 2.5 × 10⁻¹ | 3.0 × 10⁻² | 3.7 × 10⁻²
2000 | 30 | 1.9 × 10⁻³ | 3.7 × 10⁻¹ | 2.9 × 10⁻² | 1.2 × 10⁻¹
2000 | 40 | 1.7 × 10⁻³ | 2.8 × 10⁻¹ | 2.4 × 10⁻² | 8.7 × 10⁻²
2000 | 50 | 1.9 × 10⁻³ | 2.1 × 10⁻¹ | 2.2 × 10⁻² | 6.5 × 10⁻²
Table 19. Comparison of AO with RSA, SCA, and AOA against average fit for the CAR model at 0.06 noise variance.

Generations (T) | Population (Np) | AO | RSA | AOA | SCA
1000 | 30 | 4.6 × 10⁻³ | 3.9 × 10⁻¹ | 5.0 × 10⁻² | 6.9 × 10⁻²
1000 | 40 | 3.3 × 10⁻³ | 2.5 × 10⁻¹ | 5.1 × 10⁻² | 1.0 × 10⁻¹
1000 | 50 | 3.3 × 10⁻³ | 2.7 × 10⁻¹ | 3.2 × 10⁻² | 6.5 × 10⁻²
1500 | 30 | 3.6 × 10⁻³ | 6.5 × 10⁻¹ | 3.6 × 10⁻² | 1.7 × 10⁻¹
1500 | 40 | 3.4 × 10⁻³ | 2.6 × 10⁻¹ | 3.5 × 10⁻² | 1.9 × 10⁻¹
1500 | 50 | 2.9 × 10⁻³ | 2.3 × 10⁻¹ | 2.8 × 10⁻² | 7.8 × 10⁻²
2000 | 30 | 3.1 × 10⁻³ | 3.4 × 10⁻¹ | 4.0 × 10⁻² | 1.6 × 10⁻¹
2000 | 40 | 2.8 × 10⁻³ | 5.4 × 10⁻¹ | 3.5 × 10⁻² | 7.0 × 10⁻²
2000 | 50 | 2.7 × 10⁻³ | 2.1 × 10⁻¹ | 2.3 × 10⁻² | 7.1 × 10⁻³
Table 20. Comparison of AO with RSA, SCA, and AOA against average fit for the CAR model at 0.08 noise variance.

Generations (T) | Population (Np) | AO | RSA | AOA | SCA
1000 | 30 | 6.4 × 10⁻³ | 4.3 × 10⁻¹ | 4.3 × 10⁻² | 2.3 × 10⁻¹
1000 | 40 | 5.0 × 10⁻³ | 6.7 × 10⁻¹ | 2.5 × 10⁻² | 1.0 × 10⁻¹
1000 | 50 | 5.0 × 10⁻³ | 4.3 × 10⁻¹ | 3.1 × 10⁻² | 1.0 × 10⁻¹
1500 | 30 | 5.1 × 10⁻³ | 3.0 × 10⁻¹ | 3.5 × 10⁻² | 9.2 × 10⁻²
1500 | 40 | 4.7 × 10⁻³ | 2.5 × 10⁻¹ | 3.1 × 10⁻² | 6.5 × 10⁻²
1500 | 50 | 4.8 × 10⁻³ | 2.9 × 10⁻¹ | 3.9 × 10⁻² | 1.1 × 10⁻¹
2000 | 30 | 4.6 × 10⁻³ | 4.3 × 10⁻¹ | 2.9 × 10⁻² | 1.2 × 10⁻¹
2000 | 40 | 4.6 × 10⁻³ | 2.7 × 10⁻¹ | 2.6 × 10⁻² | 1.5 × 10⁻¹
2000 | 50 | 4.5 × 10⁻³ | 2.6 × 10⁻¹ | 2.5 × 10⁻² | 1.3 × 10⁻¹