Article

Improved Reptile Search Optimization Algorithm: Application on Regression and Classification Problems

by Muhammad Kamran Khan 1, Muhammad Hamza Zafar 2, Saad Rashid 1, Majad Mansoor 3, Syed Kumayl Raza Moosavi 4 and Filippo Sanfilippo 5,*

1 Faculty of Engineering Sciences, Islamabad Campus, Hamdard University, Islamabad 44000, Pakistan
2 Department of Electrical Engineering, Capital University of Science and Technology, Islamabad 44000, Pakistan
3 Department of Automation, University of Science and Technology of China, Hefei 230027, China
4 School of Electrical and Electronics Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
5 Department of Engineering Sciences, University of Agder (UiA), NO-4879 Grimstad, Norway
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(2), 945; https://doi.org/10.3390/app13020945
Submission received: 16 December 2022 / Revised: 4 January 2023 / Accepted: 5 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue The Development and Application of Swarm Intelligence)

Abstract

The reptile search algorithm (RSA) is a newly developed optimization technique that can efficiently solve various optimization problems. However, while solving high-dimensional nonconvex optimization problems, the reptile search algorithm exhibits some drawbacks, such as slow convergence speed, high computational complexity, and local minima trapping. Therefore, an improved reptile search algorithm (IRSA) based on a sine cosine algorithm and Levy flight is proposed in this work. The modified sine cosine algorithm with enhanced global search capabilities avoids local minima trapping by conducting a full-scale search of the solution space, and the Levy flight operator with a jump size control factor increases the exploitation capabilities of the search agents. The enhanced algorithm was applied to a set of 23 well-known test functions. Additionally, statistical analysis was performed by considering 30 runs for various performance measures, such as the best, worst, and average values and the standard deviation. The statistical results showed that the improved reptile search algorithm gives fast convergence speed, low time complexity, and efficient global search. For further verification, the IRSA results were compared with the RSA and various state-of-the-art metaheuristic techniques. In the second phase of the paper, we used the IRSA to train the parameters of neural networks, namely the weights and biases of a multi-layer perceptron neural network and the smoothing parameter (σ) of a radial basis function neural network. To validate the effectiveness of training, the IRSA-trained multi-layer perceptron neural network classifier was tested on various challenging, real-world classification problems. Furthermore, as a second application, the IRSA-trained RBFNN regression model was used for day-ahead wind and solar power forecasting. Experimental results clearly demonstrated the superior classification and prediction capabilities of the proposed hybrid model. Qualitative, quantitative, comparative, statistical, and complexity analyses revealed improved global exploration, high efficiency, high convergence speed, high prediction accuracy, and low time complexity in the proposed technique.

1. Introduction

Metaheuristics are stochastic search algorithms inspired by natural phenomena, such as biological evolution, and by human behaviors. Metaheuristic (MH) techniques are simple, flexible, and derivative-free and can efficiently solve various challenging, non-convex optimization problems. A metaheuristic algorithm solves an optimization problem by considering only the inputs and outputs of the underlying system and does not require a strict mathematical model. Therefore, metaheuristic algorithms are very successful in solving real-world optimization problems with complex information [1,2,3].
Metaheuristic techniques can be divided into two main classes: evolutionary algorithms (EA) and swarm intelligence (SI) algorithms. EAs are based on the theory of evolution and natural selection, which explains how species change over time. Examples of classical evolutionary computing techniques are the genetic algorithm (GA), evolution strategy (ES), simulated annealing (SA), differential evolution (DE), and genetic programming (GP) [4,5]. SI algorithms mimic social interactions in nature (such as those of animals, birds, and insects) [6]. Some of the well-known SI methods include particle swarm optimization (PSO), the salp swarm algorithm (SSA) [7], ant colony optimization (ACO) [8], the firefly algorithm (FA), and the whale optimization algorithm (WOA) [9].
There has been a sudden rise in the popularity of machine learning-based techniques within the past few decades. The performance of many areas, such as self-driving vehicles, medicine, industrial manufacturing, image recognition, etc., has been greatly improved because of machine learning [10]. Machine learning is a computational process whose main objective is to recognize different patterns in a relatively random set of data. Several machine learning methods, like logistic regression, naïve Bayes (NB), k-nearest neighbors (k-NN), decision trees, artificial neural networks (ANN), and support vector machines (SVM) [11,12,13], are available in the literature for solving various classification and regression problems.
ANNs are one of the most famous and practical machine learning methods developed to solve complex classification and regression problems. An ANN is a computational model which is biologically inspired. It consists of neurons, which are the processing elements, and the connections between them, which have some values known as weights [14]. Neurons are usually connected by these weighted links over which information can be processed. The main characteristics of a neural network are learning and adaptation, robustness, storage of information, and information processing [15]. ANNs are widely used in areas where traditional analytical models fail because either the data is not precise or the relationship between different elements is very complex. ANNs are used in pattern recognition [16,17,18], signal processing [19], intelligent control [20], and fault detection in electrical power systems [21,22,23]. Therefore, by applying ANNs to different fields, we can optimize the performance, and this has been a major research focus for the last few years.
The performance of a neural network depends upon the construction of the network, the algorithms used for training, and the choice of the parameters involved. Usually, gradient-based methods like backpropagation, gradient descent, Levenberg-Marquardt backpropagation (LM), and the scaled conjugate gradient (SCG) are used to train neural networks. However, these classical methods are highly dependent on the initial solution and may become trapped in local minima, resulting in performance degradation [24]. Therefore, to enhance performance, evolutionary computing techniques can be applied to train neural networks. These networks are known as Evolutionary Neural Networks.
With the rapid development of computing techniques, a variety of metaheuristic search methods have been used for the optimal training of neural networks [24,25,26]. The basic inspiration for these metaheuristic search methods is the natural phenomenon of biological evolution and human behaviors. Metaheuristic algorithms treat the different parameters of an ANN as an optimization problem and then attempt to find a near-optimal solution [27]. Due to the added benefit of the powerful global and local searching capabilities of metaheuristic algorithms, the hybrid ANN is becoming a great tool for solving different classification and regression problems.

1.1. Contributions and Organization

According to the no free lunch theorem (NFL) [28], no metaheuristic algorithm can efficiently solve all optimization problems. An algorithm that gives excellent results for one optimization problem may perform poorly when applied to another optimization problem. This limitation was the motivation for this research. The main work of this paper was based on the refinement of the biologically inspired reptile search algorithm (RSA) [29]. As an extensive application, we applied this algorithm to solving various regression and classification problems.
RSA is a newly developed optimization technique that can efficiently solve various optimization problems. However, when applied to highly multidimensional nonconvex optimization problems like neural network training, the algorithm shows prominent shortcomings, such as stagnation at local minima, slow convergence speed, and high computational complexity. Therefore, in this paper, some improvements are proposed to make up for the above-mentioned drawbacks.
The reason for local minima stagnation is the lack of exploration in the high-walking stage of the RSA algorithm. Local minima trapping can be avoided if the solution candidates explore the search space as widely as possible. Therefore, to enhance exploration, a sine operator was included in the high walking stage of the RSA algorithm. The modified sine operator with enhanced global search capabilities avoids local minima trapping by conducting a full-scale search of the solution space.
The second innovation was designed to enhance the convergence capabilities in the hunting phase of the RSA algorithm. The Levy flight operator with jump size control factor ζ was used to increase the exploitation capabilities of the search agents. The lower value of ζ results in small random steps in the hunting phase of IRSA. This enables the solution candidates to search the area nearest to the obtained solution, which greatly improves the convergence capabilities of the algorithm.
As compared to other state-of-the-art optimization algorithms, the computational complexity of the RSA is very high. This time complexity limits the application of the algorithm in solving high-dimensional complex optimization problems. In the original RSA, the major causes of the complexity are the hunting operator $\mu_{j,k}$ and the reduce function $R_{j,k}$. Especially while computing $R_{j,k}$, division by a small value, $\epsilon$, greatly increases the computational time of the algorithm. However, with the above-mentioned improvisations, the proposed algorithm has no need to calculate these complex operators, and they were excluded from the algorithm. This results in an almost 3-to-4-fold reduction in time complexity.
Thus, the proposed novel IRSA algorithm gives two benefits. First, it provides a 10 to 15% performance improvement compared to the original RSA (experimentally verified in Section 3.2). Second, it provides a 3-to-4-fold reduction in time complexity compared to the original RSA (experimentally verified in Section 3.3). The major contributions of the research are:
  • An improved reptile search algorithm (IRSA) based on a sine operator and Levy flight was proposed to enhance the performance of the original RSA.
  • The proposed IRSA was evaluated using 23 benchmark test functions. Various qualitative, quantitative, comparative, statistical, and complexity analyses were performed to validate the positive effects of the improvisations.
  • This research also proposed a hybrid methodology that integrates Multi-Layer Perceptron Neural Network with the improvised RSA for solving various classification problems.
  • Finally, the IRSA was applied to train a radial basis function neural network (RBFNN) for short-term wind and solar power predictions.
The remaining article is assembled as follows. The proposed methodology is explained in Section 2. Section 3 delineates the effectiveness of the proposed IRSA through various experimental and statistical studies, along with comparisons. Section 4 gives a brief rundown of the theoretical basis of the multi-layer perceptron neural network (MLPNN), the radial basis function neural network (RBFNN), and the proposed improved reptile search algorithm based neural network (IRSANN). Section 5 and Section 6 describe applications of the proposed technique in solving real-world classification and regression problems, respectively. Finally, Section 7 concludes the research.

1.2. Literature Survey

In the literature, various metaheuristic optimization techniques have been proposed and investigated for the training of artificial neural networks. The research in [30] used a hybrid AI model to predict the wind speed at different coastal locations, where PSO was applied to train an ANN. A comparison was made between the support vector machine, ANN, and hybrid ANN based wind speed prediction models, and it was observed that the root mean square error (RMSE) was lowest for the hybrid PSOANN model. In [31], a hybrid ANN-PSO model was used to extract maximum power from PV systems. Different test scenarios were considered, and it was observed that the maximum power was tracked by the ANN-PSO technique. The research in [32] explored a hybridized ANN-BPSO (binary PSO) model to control renewable energy resources in a virtual power plant. Simulation results clearly showed that the best energy management schedule was obtained by the hybrid ANN-BPSO algorithm. The work in [33] developed a hybrid model of an ANN and the ant lion optimization (ALO) algorithm for the prediction of suspended sediment load (SSL). In [34], a new hybrid intelligent artificial neural network (NHIANN) with a cuckoo search algorithm (CS) was proposed to develop a forecasting model of criminal-related factors. Based on different performance parameters, like the mean absolute percentage error (MAPE), it was observed that the CS-NHIANN not only trained the model faster but also obtained the optimal global solution.
In recent years, the trend of combining various metaheuristic techniques for improved performance has risen, and many cooperative metaheuristic techniques have been proposed in the literature [35]. To predict energy demand, the genetic algorithm and particle swarm optimization were combined in [36]. Different parameters, like electricity consumption per capita, income growth rate, etc., were used as inputs for the hybrid prediction model. It was concluded that the performance of the ANN-GA-PSO based model was better than that of the ANN-GA or ANN-PSO models. In [37], the position-updating mechanism of monarch butterfly optimization (MBO) was improved by utilizing the exploitation capabilities of the multi-verse optimizer (MVO). The work in [38] combined the exploration capabilities of a sine cosine algorithm (SCA) with a dynamic group-based cooperative optimization algorithm (DGCO) for efficient training of a radial basis function neural network. In [39], the performance of a salp swarm algorithm was greatly improved by integrating Levy flight and sine cosine operators. In [40], an improved Jaya algorithm was proposed, which used Levy flight to provide a perfect balance between exploration and exploitation.

2. Proposed Methodology

2.1. Reptile Search Algorithm

The reptile search algorithm is a metaheuristic technique inspired by the hunting behaviors of crocodiles in nature [29]. The working of the RSA depends upon two phases: the encircling phase and the hunting phase. The RSA switches between the encircling phase and the hunting phase, and the shifting between phases is controlled by dividing the total number of iterations into four parts.

2.1.1. Initialization

The reptile search algorithm starts by generating a set of initial solution candidates stochastically using the following equation:
$$x_{j,k} = rand \times (U_b - L_b) + L_b, \quad k = 1, 2, \ldots, n \qquad (1)$$
where $x_{j,k}$ is the initialization matrix, with $j = 1, 2, \ldots, P$; $P$ represents the population size (rows of the initialization matrix), and $n$ represents the dimensions (columns of the initialization matrix) of the given optimization problem. $L_b$, $U_b$, and $rand$ represent the lower bound, the upper bound, and a randomly generated value, respectively.
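For illustration, the following is a minimal NumPy sketch of the initialization in Equation (1); the function and variable names are ours, and scalar bounds are assumed (per-dimension bound vectors would broadcast the same way).

```python
import numpy as np

def initialize_population(pop_size, dim, lb, ub, rng=None):
    """Stochastic initialization of Equation (1): x = rand * (Ub - Lb) + Lb."""
    rng = np.random.default_rng() if rng is None else rng
    # rand is drawn independently for every entry of the P x n matrix
    return rng.random((pop_size, dim)) * (ub - lb) + lb

# Example: P = 50 candidates in n = 30 dimensions, bounded to [-100, 100]
X = initialize_population(50, 30, -100.0, 100.0)
```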

2.1.2. Encircling Phase (Exploration)

The encircling phase is essentially an exploration of a high-density area. In the course of the encircling phase, high walking and belly walking, which are based on crocodile movements, play a very important role. These movements do not help in catching prey but help in discovering a wide search space.
$$x_{j,k}(\tau + 1) = Best_k(\tau) \times \left(-\mu_{j,k}(\tau)\right) \times \beta - R_{j,k}(\tau) \times rand, \quad \tau \le \frac{T}{4} \qquad (2)$$
$$x_{j,k}(\tau + 1) = Best_k(\tau) \times x_{r_1,k} \times ES(\tau) \times rand, \quad \tau \le \frac{2T}{4} \ \text{and} \ \tau > \frac{T}{4} \qquad (3)$$
where $Best_k(\tau)$ is the optimal solution obtained at the $k$th position, $rand$ represents a random number, $\tau$ denotes the present iteration number, and the maximum number of iterations is represented by $T$. $\mu_{j,k}$ is the value of the hunting operator of the $j$th solution at the $k$th position, determined as shown below:
$$\mu_{j,k} = Best_k(\tau) \times P_{j,k} \qquad (4)$$
where $\beta$ is a sensitivity parameter that controls the exploration accuracy. Another function, $R_{j,k}$, whose purpose is to reduce the search space area, can be calculated as follows:
$$R_{j,k} = \frac{Best_k(\tau) - x_{r_2,k}}{Best_k(\tau) + \epsilon} \qquad (5)$$
where $r_1$ is a random integer that lies between 1 and $N$ (here, $N$ represents the total number of candidate solutions), and $x_{r_1,k}$ represents a random position of the $r_1$th solution at the $k$th dimension. $r_2$ is also a random integer between 1 and $N$, while $\epsilon$ is a value of small magnitude. $ES(\tau)$, known as the Evolutionary Sense, is a probability-based ratio. The Evolutionary Sense can be mathematically represented as follows:
$$ES(\tau) = 2 \times r_3 \times \left(1 - \frac{\tau}{T}\right) \qquad (6)$$
where $r_3$ represents a random number. $P_{j,k}$ can be computed as:
$$P_{j,k} = \alpha + \frac{x_{j,k} - M(x_j)}{Best_k(\tau) \times (Ub_k - Lb_k) + \epsilon} \qquad (7)$$
where $\alpha$ is a sensitivity limit that controls the exploration accuracy, and $M(x_j)$ is the average position of the $j$th solution, calculated as:
$$M(x_j) = \frac{1}{n} \sum_{k=1}^{n} x_{j,k} \qquad (8)$$

2.1.3. Hunting Phase (Exploitation)

The hunting phase, like the encircling phase, has two strategies, namely hunting coordination and hunting cooperation. Both strategies are used to traverse the search space locally and help target the prey (find an optimum solution). The hunting phase is also divided into two portions based on the iteration count: the hunting coordination strategy is conducted for iterations $\tau \le \frac{3T}{4}$ and $\tau > \frac{2T}{4}$, while hunting cooperation is conducted for $\tau \le T$ and $\tau > \frac{3T}{4}$. Stochastic coefficients are used to traverse the local search space to generate optimal solutions. Equations (9) and (10) are used for the exploitation phase:
$$x_{j,k}(\tau + 1) = Best_k(\tau) \times P_{j,k}(\tau) \times rand, \quad \tau \le \frac{3T}{4} \ \text{and} \ \tau > \frac{2T}{4} \qquad (9)$$
$$x_{j,k}(\tau + 1) = Best_k(\tau) - \mu_{j,k}(\tau) \times \epsilon - R_{j,k}(\tau) \times rand, \quad \tau \le T \ \text{and} \ \tau > \frac{3T}{4} \qquad (10)$$
where $Best_k(\tau)$ is the $k$th position of the best solution obtained in the current iteration. Similarly, $\mu_{j,k}$ represents the hunting operator, which is calculated by Equation (4).

2.2. Proposed Improved Reptile Search Algorithm (IRSA)

The RSA is a newly developed optimization technique that can efficiently solve various optimization problems. However, while solving high-dimensional nonconvex optimization problems, the RSA exhibits some drawbacks, such as slow convergence speed, high computational complexity, and local minima trapping [41,42]. Therefore, to overcome these issues, some adjustments are proposed to the original RSA algorithm.
Avoiding local minima trapping requires the solution candidates to explore the search space as widely as possible. Therefore, to enhance exploration, a sine operator was included in the high walking stage of the RSA algorithm. This adjustment was inspired by the dynamic exploration mechanism in the sine cosine algorithm (SCA) [43]. The sine operator provides a global exploration capability; therefore, its inclusion in the IRSA can avoid local minima trapping by conducting a full-scale search of the solution space. Accordingly, in the IRSA, Equation (2) is replaced with the following equation:
$$x_{j,k}(\tau + 1) = Best_k(\tau) + r_1 \times \sin(rand) \times \left| r_2 \times Best_k(\tau) - x_{j,k} \right|, \quad \tau \le \frac{T}{3} \qquad (11)$$
where $r_1$, $r_2$, and $rand$ are randomly chosen numbers between 0 and 1, $x_{j,k}$ is the current position, and $Best_k$ is the best solution. The Levy flight [44] is a random process that follows the Levy distribution function. As suggested by Yang [45]:
$$levy = 0.01 \times \frac{u}{\left| v \right|^{1/\zeta}} \qquad (12)$$
where $u$ and $v$ obey normal distributions:
$$u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2) \qquad (13)$$
$$\sigma_u = \left( \frac{\delta(1 + \zeta) \sin(\pi \zeta / 2)}{\delta\left(\frac{1 + \zeta}{2}\right) \zeta \, 2^{(\zeta - 1)/2}} \right)^{1/\zeta} \qquad (14)$$
$$\sigma_v = 1 \qquad (15)$$
where $\delta$ is the standard gamma function, and $\zeta$ is an important parameter in the Levy flight that determines the jump size. A lower value of $\zeta$ results in small random steps. This enables the solution candidates to search the area nearest to the obtained solution, which improves the exploitation capabilities and thereby promotes global convergence. Therefore, instead of Equation (10), the Levy operator is used to update the position in the final stages of the IRSA:
$$x_{j,k}(\tau + 1) = Best_k(\tau) + randn \times levy \oplus (x_{j,k} - Best_k(\tau)), \quad \tau \le T \ \text{and} \ \tau > \frac{3T}{4} \qquad (16)$$
where $\oplus$ designates entry-wise multiplication, and $randn$ is a normally distributed random number. The Levy random walk is represented in Figure 1.
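A minimal Python sketch of the Levy step of Equations (12)–(15) follows; the value ζ = 1.5 is only an assumed typical setting, as the paper does not fix it at this point.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, zeta=1.5, rng=None):
    """Draw a Levy-distributed step per Equations (12)-(15).

    sigma_u follows the Mantegna-style formula of Equation (14);
    zeta is the jump-size control factor (1.5 is an assumed setting).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + zeta) * sin(pi * zeta / 2) /
               (gamma((1 + zeta) / 2) * zeta * 2 ** ((zeta - 1) / 2))) ** (1 / zeta)
    u = rng.normal(0.0, sigma_u, dim)   # u ~ N(0, sigma_u^2), Equation (13)
    v = rng.normal(0.0, 1.0, dim)       # v ~ N(0, sigma_v^2) with sigma_v = 1
    return 0.01 * u / np.abs(v) ** (1 / zeta)
```

With this helper, the position update of Equation (16) becomes, per candidate, `best + rng.standard_normal() * levy_step(dim) * (x - best)`, where the multiplication is entry-wise.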
These improvisations greatly reduce the complexity of the algorithm. In the original RSA, Equations (4) and (5) are highly complex and increase the time complexity of the algorithm. However, with the above-mentioned adaptations, we can exclude these equations from the algorithm. This means that the proposed IRSA algorithm does not have to compute these equations, which results in an almost 3-to-4-fold reduction in time complexity.
Improved global exploration, high efficiency, high convergence speed, and low time complexity are the improvements observed in the proposed technique. Taken together, the pseudocode and flowchart of the IRSA are shown in Algorithm 1 and Figure 2.
Algorithm 1 Pseudocode of IRSA
Initialize random population x
Initialize iteration counter τ = 0, maximum iterations T, and parameters α and β
while τ < T
  Evaluate the fitness of the potential candidates
  Determine the best solution
  Update ES and $P_{j,k}$ using Equations (6) and (7)
  for j = 1 : P
    for k = 1 : n
      if τ ≤ T/3
        Update the position using Equation (11)
      else if τ ≤ 2T/4 and τ > T/3
        Update the position using Equation (3)
      else if τ ≤ 3T/4 and τ > 2T/4
        Update the position using Equation (9)
      else
        Update the position using Equation (16)
      end if
    end for
  end for
  τ = τ + 1
end while
Return the best solution
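A compact Python sketch of Algorithm 1 is given below, assuming scalar bounds and a minimization objective. Details not fixed by the paper (the range of $r_3$, the value of ζ, bound clipping) are marked as assumptions in the comments; β is retained in the signature for symmetry with the pseudocode, although it is unused once Equation (2) is replaced by Equation (11).

```python
import numpy as np
from math import gamma, sin, pi

def irsa(fitness, dim, lb, ub, pop_size=50, max_iter=500,
         alpha=0.1, beta=0.1, zeta=1.5, seed=None):
    """Sketch of Algorithm 1 (IRSA) for minimization."""
    rng = np.random.default_rng(seed)
    X = rng.random((pop_size, dim)) * (ub - lb) + lb            # Equation (1)
    fit = np.apply_along_axis(fitness, 1, X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    sigma_u = (gamma(1 + zeta) * sin(pi * zeta / 2) /
               (gamma((1 + zeta) / 2) * zeta * 2 ** ((zeta - 1) / 2))) ** (1 / zeta)

    for tau in range(1, max_iter + 1):
        es = 2 * rng.uniform(-1, 1) * (1 - tau / max_iter)      # Equation (6); r3 in [-1, 1] assumed
        M = X.mean(axis=1)                                      # Equation (8)
        for j in range(pop_size):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                if tau <= max_iter / 3:                         # sine operator, Equation (11)
                    X[j, k] = best[k] + r1 * sin(rng.random()) * abs(r2 * best[k] - X[j, k])
                elif tau <= max_iter / 2:                       # belly walking, Equation (3)
                    r = rng.integers(pop_size)
                    X[j, k] = best[k] * X[r, k] * es * rng.random()
                elif tau <= 3 * max_iter / 4:                   # hunting coordination, Equation (9)
                    p = alpha + (X[j, k] - M[j]) / (best[k] * (ub - lb) + 1e-10)  # Equation (7)
                    X[j, k] = best[k] * p * rng.random()
                else:                                           # Levy update, Equation (16)
                    step = 0.01 * rng.normal(0, sigma_u) / abs(rng.normal()) ** (1 / zeta)
                    X[j, k] = best[k] + rng.normal() * step * (X[j, k] - best[k])
        X = np.clip(X, lb, ub)                                  # bound handling (assumed)
        fit = np.apply_along_axis(fitness, 1, X)
        if fit.min() < best_fit:
            best_fit, best = fit.min(), X[fit.argmin()].copy()
    return best, best_fit
```

For example, `irsa(lambda x: np.sum(x ** 2), dim=30, lb=-100.0, ub=100.0)` minimizes the sphere function F1.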

3. Experimental Verification Using Benchmark Test Functions

In this section, the improved reptile search algorithm (IRSA) was tested using 23 standard benchmark functions [46]. These functions were unimodal, multimodal, and fixed-dimension minimization problems. The specifications of the test functions are provided in Table A1, Table A2 and Table A3. For further verification, the IRSA results were compared with the RSA algorithm and also with other classical and state-of-the-art metaheuristic techniques, like the barnacle mating optimizer (BMO) [47], particle swarm optimization (PSO) [48], the grey wolf optimizer (GWO) [49], the arithmetic optimization algorithm (AOA) [50], and the dynamic group-based cooperative optimization algorithm (DGCO) [51]. The numbers of iterations taken were 500 and 350, with a fixed population size of 50. For the IRSA, α and β were set to 0.1. Furthermore, statistical analysis was performed by considering 30 runs for various performance measures, like the best, worst, and average values and the standard deviation (STD). Lastly, time complexity analysis was performed on the CEC-2019 test functions. The simulations were performed using a Core i5 desktop computer with 12 GB of RAM. The software used for the simulation was MATLAB (R2018a).
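Although the experiments were run in MATLAB, a hypothetical harness for reproducing the 30-run statistics might look as follows in Python, reusing the `irsa` sketch above; `sphere` stands in for any of the 23 benchmark functions, and the reduced population/iteration settings only keep the demo quick (the paper uses 50 and 500).

```python
import numpy as np

sphere = lambda x: np.sum(x ** 2)   # benchmark function F1

scores = np.array([irsa(sphere, dim=30, lb=-100.0, ub=100.0,
                        pop_size=20, max_iter=100, seed=run)[1]
                   for run in range(30)])
print(f"best={scores.min():.4g}  worst={scores.max():.4g}  "
      f"avg={scores.mean():.4g}  std={scores.std():.4g}")
```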

3.1. Qualitative Analysis

Figure 3 gives an indication of the different qualitative measures used to evaluate the convergence of the IRSA. The first column of each figure indicates the shape of the test function in two dimensions to depict its topographic anatomy. The second column shows the search history, which depicts the exploration and exploitation behavior of the algorithm. The convergence curves of the unimodal functions were fluid and continuous, while for the multimodal functions, the curves improved in steps, as these are more complex functions. It is clearly visible that the performance of the IRSA in finding an optimum solution improved significantly with an increase in the number of iterations.

3.2. Comparative Analysis

Table 1 and Table 2 show the comparative analysis between the IRSA and other MH algorithms for unimodal, multimodal, and fixed-dimension functions. For the unimodal and multimodal functions, each algorithm was evaluated considering a population size of 50 and 500 iterations. For statistical analysis, each algorithm was run 30 times to compute the best value, average value, worst value, and standard deviation [52]. The IRSA found the optimum value on 16 out of the 23 test functions, which represents nearly 70%, and it ranked second on five of the remaining functions. The convergence curves are depicted in Figure 4. These results demonstrate the positive effects of the improvisations in the form of improved global convergence, enhanced optimization accuracy, and increased stability.
Table 3 delineates the performance of the IRSA and the other MH techniques at varying dimensions of 30, 100, and 500, considering 300 iterations. From the table, it can be observed that the IRSA provides the best results on 33 out of a total of 39 test evaluations, which represents nearly 84%, and it ranks second on five of the remaining evaluations. These results clearly demonstrate the effectiveness of the proposed algorithm in solving high-dimensional complex optimization problems.

3.3. Time Complexity Analysis

Time complexity usually depends on the initialization process, the number of iterations, and the solution update mechanism. In this section, we performed a time complexity analysis of the IRSA to determine the effect of the improvisations on the computational time of the algorithm. For this purpose, we used the standard benchmark test functions of CEC-2019. Table A4 delineates the specifications of these functions. The following equation is used to determine the complexity of the algorithm [53,54]:
$$T = \frac{\tilde{T} - T_1}{T^{\circ}} \qquad (17)$$
where $T^{\circ}$ is the computational time of a specific reference mathematical routine ($T^{\circ}$ was calculated to be 0.09 s); $T_1$ is the computational time of a CEC-2019 test function for 15,000 iterations; and $\tilde{T}$ is the mean computational time of the MH technique used to solve the CEC-2019 test functions over 10 runs, considering 15,000 iterations.
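A small Python illustration of Equation (17) is given below; the reference routine and the Rastrigin-style stand-in for a CEC-2019 function are our assumptions, and a plain random search stands in for the MH technique being timed.

```python
import time
import numpy as np

def timed(fn):
    """Wall-clock seconds of fn(), one T-term of Equation (17)."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

reference = lambda: np.sum(np.arange(1_000_000, dtype=float) ** 2)
cec_fun = lambda v: np.sum(v ** 2 - 10 * np.cos(2 * np.pi * v) + 10)
x = np.random.uniform(-100, 100, 10)

def search_run():
    # stand-in optimizer: 15,000 evaluations of the test function
    best = np.inf
    for _ in range(15_000):
        best = min(best, cec_fun(np.random.uniform(-100, 100, 10)))
    return best

T0 = timed(reference)
T1 = timed(lambda: [cec_fun(x) for _ in range(15_000)])
T_tilde = np.mean([timed(search_run) for _ in range(10)])
print(f"T = {(T_tilde - T1) / T0:.3f}")   # Equation (17)
```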
These computations were performed using a Core i5 desktop computer with 12 GB of RAM, considering a population size of 10. Table 4 and Table 5 describe the results of the analysis. The tabular results clearly show that the improvisations greatly reduce the complexity of the RSA algorithm.

4. IRSA for Neural Network Training

4.1. Multi-Layer Perceptron Neural Network (MLPNN)

The Multi-Layer Perceptron Neural Network (MLPNN) implementing backpropagation is one of the most widely used neural network models. In an MLPNN, the input layer acts as a receiver and provides the received information to the first hidden layer by passing it through weights and biases, as shown in Figure 5. The information is then processed by one or multiple hidden layers and is provided to the output layer. The job of the output layer is to provide results by combining all the processed information. The activation signal for each neuron present in the hidden layer is:
$$a_j = \sum_{i=1}^{m} w_{ji} \cdot x_i + b_j \qquad (18)$$
where $w_{ji}$ is the weight coefficient matrix between the $m$ input and $n$ hidden layer neurons and can be written as:
$$w_{ji} = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1m} \\ w_{21} & w_{22} & \cdots & w_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ w_{n1} & w_{n2} & \cdots & w_{nm} \end{bmatrix} \qquad (19)$$
Assuming the input layer has $m$ neurons, the input vector $x_i$ can be written as:
$$x_i = [x_1, x_2, \ldots, x_m]^T \qquad (20)$$
If the hidden layer has $n$ neurons, then the activation signal vector $A$ can be written as:
$$A = [a_1, a_2, \ldots, a_j, \ldots, a_n]^T \qquad (21)$$
These activation signals are applied to the neurons of the hidden layer. Based on the type of activation function, each active neuron generates a decision signal $d_j$:
$$d_j = \varphi(a_j) \qquad (22)$$
For the sigmoid function, the decision signal can be calculated as:
$$d_j = \frac{1}{1 + e^{-a_j}} \qquad (23)$$
Once the decision signals for each neuron in the hidden layers are determined, these signals are multiplied with the output weight coefficient matrix $v_{kj}$ to generate an estimated output, as represented by the equation:
$$y_k = \sum_{j=1}^{n} v_{kj} \cdot d_j + b_k \qquad (24)$$
The purpose of training the MLP neural network is to find the best weights and biases to maximize the classification accuracy.
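The forward pass of Equations (18)–(24) can be sketched as follows; the layer sizes in the example are arbitrary, and the function names are ours.

```python
import numpy as np

def mlp_forward(x, W, b_h, V, b_o):
    """Single-hidden-layer MLP forward pass per Equations (18)-(24).

    W is the (n, m) input-to-hidden weight matrix of Equation (19),
    V the hidden-to-output weight matrix v_kj, with sigmoid activation.
    """
    a = W @ x + b_h                 # activation signals, Equation (18)
    d = 1.0 / (1.0 + np.exp(-a))    # decision signals, Equation (23)
    return V @ d + b_o              # estimated outputs, Equation (24)

# Example: m = 4 inputs, n = 9 hidden neurons (h = 2*4 + 1), 3 outputs
rng = np.random.default_rng(0)
y = mlp_forward(rng.random(4), rng.random((9, 4)), rng.random(9),
                rng.random((3, 9)), rng.random(3))
```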
Figure 5. MLP Neural Network.

4.2. Radial Basis Function Neural Network (RBFNN)

Radial Basis Function Neural Network is a three-layered universal approximator. The input layer serves as a means to connect with the environment. No computation is performed at this layer. The hidden layer consists of neurons and performs a non-linear transformation using a radial basis function. Essentially, the hidden layer transforms the pattern into higher dimensional space to make it linearly separable. The value of the ith hidden layer neuron can be written as follows:
$$\Phi_i = e^{-\frac{\lVert \bar{X} - \bar{u}_i \rVert^2}{2 \sigma_i^2}} \qquad (25)$$
where $\bar{X}$ is the input vector, $\bar{u}_i$ is the $i$th neuron's prototype vector, $\sigma_i$ is the $i$th neuron's bandwidth, and $\Phi_i$ is the $i$th neuron's output.
The output layer performs a linear computation, which is a combination of the hidden layer outputs and the weight vector. This computation can be represented mathematically as:
$$y = \sum_{i=1}^{n} w_i \Phi_i \qquad (26)$$
where $w_i$ is the weight connection, $\Phi_i$ is the $i$th neuron's output from the hidden layer, and $y$ is the prediction result. The structure of the RBFNN is shown in Figure 6.
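A brief sketch of the RBFNN forward pass of Equations (25) and (26), with assumed shapes:

```python
import numpy as np

def rbf_forward(x, prototypes, sigmas, w):
    """RBFNN forward pass per Equations (25) and (26).

    prototypes: (n, d) prototype vectors u_i; sigmas: (n,) bandwidths;
    w: (n,) output weight connections.
    """
    dist2 = np.sum((prototypes - x) ** 2, axis=1)   # ||X - u_i||^2
    phi = np.exp(-dist2 / (2.0 * sigmas ** 2))      # Equation (25)
    return w @ phi                                  # Equation (26)

# Example: 3 input features, 5 hidden RBF neurons
rng = np.random.default_rng(1)
y = rbf_forward(rng.random(3), rng.random((5, 3)),
                np.full(5, 0.5), rng.random(5))
```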

4.3. Training of MLPNN and RBFNN Using the Proposed IRSA

The training of parameters, such as the weights and biases for the MLPNN and the smoothing parameter (σ) for the RBFNN, is a highly complex optimization problem. Inefficient training of these parameters results in low classification and prediction accuracy. Usually, gradient-based methods are used to train neural networks. However, these classical methods are highly dependent on the initial solution and may become trapped in local minima, resulting in performance degradation. Therefore, to minimize the prediction errors, the IRSA was used to determine the weights, biases, and σ. Figure 7 shows the flowchart for the proposed IRSANN algorithm.
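To make this concrete, one way to pose MLPNN training as an IRSA search is sketched below: each candidate is a flat vector that is decoded into weights and biases, and the NRMSE of Equation (27) serves as the fitness. The vector layout is our assumption; the paper does not specify one.

```python
import numpy as np

def unpack(theta, m, n, c):
    """Decode a flat candidate vector into MLP weights and biases.

    Assumed layout: [W (n*m) | b_h (n) | V (c*n) | b_o (c)].
    """
    i = 0
    W = theta[i:i + n * m].reshape(n, m); i += n * m
    b_h = theta[i:i + n]; i += n
    V = theta[i:i + c * n].reshape(c, n); i += c * n
    b_o = theta[i:i + c]
    return W, b_h, V, b_o

def make_cost(X, T, m, n, c):
    """NRMSE cost (Equation (27)) over a training set, minimized by IRSA."""
    def cost(theta):
        W, b_h, V, b_o = unpack(theta, m, n, c)
        D = 1.0 / (1.0 + np.exp(-(X @ W.T + b_h)))   # hidden decision signals
        P = D @ V.T + b_o                            # network outputs
        return np.sqrt(np.mean((T - P) ** 2)) / np.abs(T.mean())
    return cost
```

The resulting cost closure is exactly what the optimizer minimizes, with a search dimension of $n \cdot m + n + c \cdot n + c$; the best vector found is decoded back into the trained network.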

5. IRSA for Solving Classification Problems

In order to evaluate the performance of the IRSA, eight datasets obtained from [55,56] were used. Table 6 gives a brief description of the datasets. Each dataset was divided into a training set and a testing set. Almost 67% of the data was used for training the ANN, and the remaining 33% was used for testing. Sigmoid was used as the activation function for the hidden layers. The normalized root mean squared error, as described by Equation (27), was used as the cost function.
$$NRMSE = \frac{1}{\tilde{T}} \sqrt{\frac{1}{N} \sum_{i=1}^{N} (T_i - P_i)^2} \qquad (27)$$
where $T_i$ and $P_i$ are the true and predicted values, $\tilde{T}$ represents the mean of the true values, and $N$ represents the total number of data samples. As there is no fixed rule for selecting the number of neurons, we selected the neurons based on the formula [37]:
$$h = 2 \times f + 1 \qquad (28)$$
where $f$ represents the number of input features, and $h$ is the number of selected neurons.
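A hypothetical setup mirroring this protocol (a random 67/33 split and $h = 2f + 1$ hidden neurons) might look like this:

```python
import numpy as np

def split_dataset(X, y, train_frac=0.67, seed=0):
    """Random 67/33 train/test split, as used for all eight datasets."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(train_frac * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

f = 13          # e.g., the number of input features of one dataset
h = 2 * f + 1   # number of hidden neurons, Equation (28) -> 27
```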
After training the MLP network, the convergence curves for the IRSA, RSA, BMO, AOA, and PSO are displayed in Figure 8. The convergence of the IRSA is on par with that of the other metaheuristic algorithms, whereas the convergence of the classical RSA was average. The accuracy obtained with the IRSA was better than the accuracy of the other algorithms, and the IRSA provided the best results for most of the datasets. Additionally, the low standard deviation of the IRSA is an indication of its strength and stability. The results in Table 7 clearly show that the accuracy of the proposed method is higher than that of the other four classifiers. Table 8 shows the performance of the comparative techniques on the testing datasets, and Table 9 shows the cost function comparison of all the techniques with best, average, and STD values.

Statistical Indicators for Classification

Precision, recall, and the F1 score are the parameters used to evaluate the performance of the different techniques used in this paper. Precision and recall are defined in Equations (29) and (30), respectively:
$$Precision = \frac{TP}{TP + FP} \qquad (29)$$
$$Recall = \frac{TP}{TP + FN} \qquad (30)$$
where TP (true positives), FP (false positives), and FN (false negatives) are determined by computing the confusion matrix. The ideal value for both precision and recall is one. The F1 score can be defined as:
$$F_1 = \frac{2 \times Precision \times Recall}{Precision + Recall} \qquad (31)$$
Table 10 represents the statistical results for the eight selected datasets. The precision, recall, and $F_1$ values of the proposed method were higher for six out of the eight datasets. Considering all the parameters mentioned above, we can conclude that the proposed IRSANN method provides optimum results and outperforms the other comparable MH techniques.
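For reference, the three indicators of Equations (29)–(31) can be computed from a binary confusion matrix as follows (a sketch for a single positive class; the names are ours):

```python
import numpy as np

def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 per Equations (29)-(31)."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example on dummy labels
p, r, f1 = precision_recall_f1(np.array([1, 0, 1, 1, 0]),
                               np.array([1, 0, 0, 1, 1]))
```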

6. IRSA for Solving the Regression Problems

Wind and solar energy are amongst the most promising renewable energy sources. However, wind and solar energy production is highly dependent on stochastic weather conditions. This uncertainty makes it challenging to integrate these renewable energy resources into the grid. Accurate energy prediction results in economical market operations, reliable operation planning, and efficient generation scheduling [57]. This demands highly efficient forecasting of wind and solar energy. Therefore, the IRSA is proposed to train a radial basis function neural network (RBFNN) for short-term wind and solar power prediction. The normalized root mean squared error, as described by Equation (27), was used as the cost function. Almost 67% of the data was used for training the ANN, and the remaining 33% was used for testing.

6.1. Wind Power Prediction

For wind power prediction, SCADA systems were used to record the wind speed, wind direction, and power generated by the wind turbine [58,59]. The measurements were taken at 10-min intervals. Figure 9 and Figure 10 show examples of the 48 h-ahead power predictions obtained by the proposed IRSA for the winter and summer seasons, respectively, along with a comparison between the true wind power and the wind power predicted by all five techniques. Simulation results clearly indicate the superior prediction capabilities of the proposed technique for both the winter season and the highly uncertain summer season.

6.2. Solar Power Prediction

The data set used for solar power prediction is available at [60]. The data consisted of two files. The first file contained DC and AC power generation data, and the second file contained sensor readings of the ambient temperature, module temperature, and irradiance [61]. For this work, both files were combined using MATLAB code to create a dataset for solar power prediction. The readings were taken at a time interval of 15 min. The input features included the ambient temperature, module temperature, and irradiance, while the output feature was the AC power. Figure 11 shows an example of the 48 h-ahead power predictions obtained by using the various MH techniques. Simulation results clearly indicate the superior prediction capabilities of the proposed technique.

6.3. Statistical Indicators for Regression

In order to compare the predictive capabilities of the selected models, we used several statistical measures, such as the root mean square error (RMSE), the relative error (RE), and the coefficient of determination ($R^2$) [38]. These indicators are described by the following equations:
$$RMSE = \sqrt{\frac{\sum_{j=1}^{n} (y_j - P_j)^2}{n}} \qquad (32)$$
$$RE = \sum_{j=1}^{n} \frac{y_j - P_j}{y_j} \qquad (33)$$
$$R^2 = 1 - \frac{\sum_{j=1}^{n} (y_j - P_j)^2}{\sum_{j=1}^{n} (y_j - \bar{y}_j)^2} \qquad (34)$$
where $y_j$ and $P_j$ are the true and predicted values, respectively, $\bar{y}_j$ is the mean of the true values, and $n$ represents the total number of data samples.
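These three indicators of Equations (32)–(34) translate directly into code; a short sketch on dummy values:

```python
import numpy as np

def regression_metrics(y, p):
    """RMSE, RE, and R^2 per Equations (32)-(34)."""
    rmse = np.sqrt(np.mean((y - p) ** 2))
    re = np.sum((y - p) / y)
    r2 = 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)
    return rmse, re, r2

# Example on dummy wind-power values (MW)
y = np.array([3.1, 2.8, 3.5, 4.0])
p = np.array([3.0, 2.9, 3.4, 4.2])
print(regression_metrics(y, p))
```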
The small values of the RMSE and RE and the higher values of $R^2$ give clear indications of the improved accuracy of the proposed model. These values are presented in Table 11. According to this table, the IRSA-RBFNN provides the highest values of $R^2$ and the lowest values of RMSE for both wind and solar prediction, proving that the performance of the IRSA is the best when compared to the other prediction models. The PSO-RBFNN model provides a slightly lower prediction efficiency, while the BMO-RBFNN and RSA-RBFNN are ranked third and fourth in terms of prediction performance [62,63].

7. Conclusions

The main work of this paper was based on the refinement of the biologically inspired reptile search algorithm (RSA). The main objective was to enhance the exploration phase so that local minima convergence could be avoided. This was achieved by including a sine operator, which avoids local minima trapping by conducting a full-scale search of the solution space. Furthermore, the Levy flight with small steps was introduced to enhance exploitation. These improvisations not only enhanced the performance but also yielded a 3-to-4-fold reduction in the time complexity of the algorithm. The proposed improved reptile search algorithm (IRSA) was tested using various unimodal, multimodal, and fixed-dimension test functions. Finally, as an extensive application, we applied the algorithm to solving real-world classification and regression problems. The model successfully tackled the classification and regression tasks. Statistical, qualitative, quantitative, and computational complexity tests were performed to validate the effectiveness of the proposed improvisations. Based on the results, we can positively conclude that the proposed improvisations are effective in enhancing the performance of the RSA algorithm.
In the future, the IRSA could be explored to train various types of neural networks. Furthermore, applying the IRSA to solve various optimization problems in different domains (i.e., feature selection, maximum power point tracking (MPPT), smart grids, image processing, power control, robotics, etc.) would be a valuable contribution.

Author Contributions

Conceptualization, methodology, software, M.K.K.; validation, formal analysis, M.H.Z.; investigation, resources, S.R.; data curation, visualization, M.M. and S.K.R.M.; supervision, project administration, funding acquisition, F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Top Research Centre Mechatronics (TRCM), University of Agder (UiA), Norway, https://www.uia.no/en/research/priority-research-centres/top-researchcentre-mechatronics-trcm (accessed on 15 December 2022).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Utilized Unimodal Test Functions.

| Function Description | Dim | Range | $f_{min}$ |
|---|---|---|---|
| $f_1(x) = \sum_{i=1}^{n} x_i^2$ | 500, 100, 50, 30 | [−100, 100] | 0 |
| $f_2(x) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | 500, 100, 50, 30 | [−10, 10] | 0 |
| $f_3(x) = \sum_{i=1}^{d} \left( \sum_{j=1}^{i} x_j \right)^2$ | 500, 100, 50, 30 | [−100, 100] | 0 |
| $f_4(x) = \max_i \{ \lvert x_i \rvert, \ 1 \le i \le n \}$ | 500, 100, 50, 30 | [−100, 100] | 0 |
| $f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right]$ | 500, 100, 50, 30 | [−30, 30] | 0 |
| $f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 500, 100, 50, 30 | [−100, 100] | 0 |
| $f_7(x) = \sum_{i=1}^{n} i \, x_i^4 + rand[0, 1)$ | 500, 100, 50, 30 | [−1.28, 1.28] | 0 |
Table A2. Utilized Multimodal Test Functions.

| Function Description | Dim | Range | $f_{min}$ |
|---|---|---|---|
| $f_8(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | 500, 100, 50, 30 | [−100, 100] | −418.980 × Dim |
| $f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | 500, 100, 50, 30 | [−10, 10] | 0 |
| $f_{10}(x) = -20 e^{-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i)} + 20 + e$ | 500, 100, 50, 30 | [−100, 100] | 0 |
| $f_{11}(x) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right)$ | 500, 100, 50, 30 | [−100, 100] | 0 |
| $f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$ | 500, 100, 50, 30 | [−30, 30] | 0 |
| $f_{13}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 500, 100, 50, 30 | [−100, 100] | 0 |

Here, $y_i = 1 + \frac{x_i + 1}{4}$, and $u(x_i, a, K, m) = K (x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $K (-x_i - a)^m$ if $x_i < -a$.
Table A3. Utilized Fixed Dimension Test Functions.

| Function Description | Dim | Range | $f_{min}$ |
|---|---|---|---|
| $f_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 0.998 |
| $f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−1, 1] | 0 |
| $f_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316 |
| $f_{17}(x) = \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos x_1 + 10$ | 2 | [−4, 4] | 0.398 |
| $f_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−5, 5] | 3 |
| $f_{19}(x) = -\sum_{i=1}^{4} c_i \, e^{-\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2}$ | 3 | [−5, 5] | −3.86 |
| $f_{20}(x) = -\sum_{i=1}^{4} c_i \, e^{-\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2}$ | 6 | [−5, 5] | −1.170 |
| $f_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [−5, 5] | −10.153 |
| $f_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [−5, 5] | −10.4028 |
| $f_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [−1, 1] | −10.536 |
Table A4. Utilized CEC 2019 Test Functions.

| Function | Description | $f_{min}$ | Range | Dim |
|---|---|---|---|---|
| CEC-1 | Storn's Chebyshev polynomial fitting problem | 1 | [−8192, 8192] | 9 |
| CEC-2 | Inverse Hilbert matrix problem | 1 | [−16,384, 16,384] | 16 |
| CEC-3 | Lennard–Jones minimum energy cluster | 1 | [−4, 4] | 18 |
| CEC-4 | Rastrigin function | 1 | [−100, 100] | 10 |
| CEC-5 | Griewank function | 1 | [−100, 100] | 10 |
| CEC-6 | Weierstrass function | 1 | [−100, 100] | 10 |
| CEC-7 | Modified Schwefel function | 1 | [−100, 100] | 10 |
| CEC-8 | Expanded Schaffer function | 1 | [−100, 100] | 10 |
| CEC-9 | Happy CAT function | 1 | [−100, 100] | 10 |

References

  1. Chong, H.Y.; Yap, H.J.; Tan, S.C.; Yap, K.S.; Wong, S.Y. Advances of metaheuristic algorithms in training neural networks for industrial applications. Soft Comput. 2021, 25, 11209–11233.
  2. Osaba, E.; Villar-Rodriguez, E.; Del Ser, J.; Nebro, A.J.; Molina, D.; LaTorre, A.; Suganthan, P.N.; Coello, C.A.C.; Herrera, F. A tutorial on the design, experimentation and application of metaheuristic algorithms to real-world optimization problems. Swarm Evol. Comput. 2021, 64, 100888.
  3. Khan, M.K.; Zafar, M.H.; Mansoor, M.; Mirza, A.F.; Khan, U.A.; Khan, N.M. Green energy extraction for sustainable development: A novel MPPT technique for hybrid PV-TEG system. Sustain. Energy Technol. Assess. 2022, 53, 102388.
  4. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
  5. Knowles, J.; Corne, D. The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999.
  6. Mansoor, M.; Mirza, A.F.; Ling, Q. Harris hawk optimization-based MPPT control for PV systems under partial shading conditions. J. Clean. Prod. 2020, 274, 122857.
  7. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  8. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  9. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  10. Wu, Q.; Ma, Z.; Xu, G.; Li, S.; Chen, D. A novel neural network classifier using beetle antennae search algorithm for pattern classification. IEEE Access 2019, 7, 64686–64696.
  11. Amari, S.; Wu, S. Improving support vector machine classifiers by modifying kernel functions. Neural Netw. 1999, 12, 783–789.
  12. Peterson, L.E. K-nearest neighbor. Scholarpedia 2009, 4, 1883.
  13. Rish, I. An empirical study of the naive Bayes classifier. In Proceedings of the IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, Seattle, WA, USA, 8 August 2001.
  14. El Naqa, I.; Murphy, M. What Is Machine Learning? In Machine Learning in Radiation Oncology: Theory and Applications; El Naqa, I., Li, R., Murphy, M., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 3–11.
  15. Shanmuganathan, S. Artificial Neural Network Modelling: An Introduction. In Artificial Neural Network Modelling; Shanmuganathan, S., Samarasinghe, S., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1–14.
  16. Sinha, A.K.; Hati, A.S.; Benbouzid, M.; Chakrabarti, P. ANN-based pattern recognition for induction motor broken rotor bar monitoring under supply frequency regulation. Machines 2021, 9, 87.
  17. D'Addona, D.M.; Ullah, A.M.M.S.; Matarazzo, D. Tool-wear prediction and pattern-recognition using artificial neural network and DNA-based computing. J. Intell. Manuf. 2017, 28, 1285–1301.
  18. Wu, J.; Xu, C.; Han, X.; Zhou, D.; Zhang, M.; Li, H.; Tan, K.C. Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 7824–7840.
  19. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2021, 151, 107398.
  20. Poznyak, A.; Chairez, I.; Poznyak, T. A survey on artificial neural networks application for identification and control in environmental engineering: Biological and chemical systems with uncertain models. Annu. Rev. Control. 2019, 48, 250–272.
  21. Hussain, M.; Dhimish, M.; Titarenko, S.; Mather, P. Artificial neural network based photovoltaic fault detection algorithm integrating two bi-directional input parameters. Renew. Energy 2020, 155, 1272–1292.
  22. Veerasamy, V.; Wahab, N.I.A.; Othman, M.L.; Padmanaban, S.; Sekar, K.; Ramachandran, R.; Hizam, H.; Vinayagam, A.; Islam, M.Z. LSTM Recurrent Neural Network Classifier for High Impedance Fault Detection in Solar PV Integrated Power System. IEEE Access 2021, 9, 32672–32687.
  23. Jiang, J.; Li, W.; Wen, Z.; Bie, Y.; Schwarz, H.; Zhang, C. Series Arc Fault Detection Based on Random Forest and Deep Neural Network. IEEE Sens. J. 2021, 21, 17171–17179.
  24. Han, F.; Jiang, J.; Ling, Q.H.; Su, B.Y. A survey on metaheuristic optimization for random single-hidden layer feedforward neural network. Neurocomputing 2019, 335, 261–273.
  25. Spurlock, K.; Elgazzar, H. A genetic mixed-integer optimization of neural network hyper-parameters. J. Supercomput. 2022, 78, 14680–14702.
  26. Abd Elaziz, M.; Dahou, A.; Abualigah, L.; Yu, L.; Alshinwan, M.; Khasawneh, A.M.; Lu, S. Advanced metaheuristic optimization techniques in applications of deep neural networks: A review. Neural Comput. Appl. 2021, 33, 14079–14099.
  27. Darwish, A.; Hassanien, A.E.; Das, S. A survey of swarm and evolutionary computing approaches for deep learning. Artif. Intell. Rev. 2020, 53, 1767–1812.
  28. Ho, Y.-C.; Pepyne, D.L. Simple explanation of the no-free-lunch theorem and its implications. J. Optim. Theory Appl. 2002, 115, 549–570.
  29. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  30. Bou-Rabee, M.; Lodi, K.A.; Ali, M.; Ansari, M.F.; Tariq, M.; Sulaiman, S.A. One-month-ahead wind speed forecasting using hybrid AI model for coastal locations. IEEE Access 2020, 8, 198482–198493.
  31. Farayola, A.M.; Sun, Y.; Ali, A. ANN-PSO Optimization of PV systems under different weather conditions. In Proceedings of the 2018 7th International Conference on Renewable Energy Research and Applications (ICRERA), Paris, France, 14–17 October 2018; IEEE: Piscataway, NJ, USA, 2018.
  32. Abdolrasol, M.G.; Mohamed, R.; Hannan, M.A.; Al-Shetwi, A.Q.; Mansor, M.; Blaabjerg, F. Artificial neural network based particle swarm optimization for microgrid optimal energy scheduling. IEEE Trans. Power Electron. 2021, 36, 12151–12157.
  33. Banadkooki, F.B.; Ehteram, M.; Ahmed, A.N.; Teo, F.Y.; Ebrahimi, M.; Fai, C.M.; Huang, Y.F.; El-Shafie, A. Suspended sediment load prediction using artificial neural network and ant lion optimization algorithm. Environ. Sci. Pollut. Res. 2020, 27, 38094–38116.
  34. Wongsinlatam, W.; Buchitchon, S. Criminal cases forecasting model using a new intelligent hybrid artificial neural network with cuckoo search algorithm. IAENG Int. J. Comput. Sci. 2020, 47, 481–490.
  35. Mehrabi, P.; Honarbari, S.; Rafiei, S.; Jahandari, S.; Alizadeh Bidgoli, M. Seismic response prediction of FRC rectangular columns using intelligent fuzzy-based hybrid metaheuristic techniques. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 10105–10123.
  36. Anand, A.; Suganthi, L. Hybrid GA-PSO optimization of artificial neural network for forecasting electricity demand. Energies 2018, 11, 728.
  37. Faris, H.; Aljarah, I.; Mirjalili, S. Improved monarch butterfly optimization for unconstrained global search and neural network training. Appl. Intell. 2018, 48, 445–464.
  38. Zafar, M.H.; Khan, N.M.; Mansoor, M.; Mirza, A.F.; Moosavi, S.K.R.; Sanfilippo, F. Adaptive ML-based technique for renewable energy system power forecasting in hybrid PV-Wind farms power conversion systems. Energy Convers. Manag. 2022, 258, 115564.
  39. Zhang, J.; Wang, J.S. Improved salp swarm algorithm based on levy flight and sine cosine operator. IEEE Access 2020, 8, 99740–99771.
  40. Iacca, G.; Junior, V.C.D.S.; de Melo, V.V. An improved Jaya optimization algorithm with Lévy flight. Expert Syst. Appl. 2021, 165, 113902.
  41. Dahou, A.; Abd Elaziz, M.; Chelloug, S.A.; Awadallah, M.A.; Al-Betar, M.A.; Al-qaness, M.A.; Forestiero, A. Intrusion Detection System for IoT Based on Deep Learning and Modified Reptile Search Algorithm. Comput. Intell. Neurosci. 2022, 2022, 6473507.
  42. Xiong, J.; Peng, T.; Tao, Z.; Zhang, C.; Song, S.; Nazir, M.S. A dual-scale deep learning model based on ELM-BiLSTM and improved reptile search algorithm for wind power prediction. Energy 2023, 266, 126419.
  43. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
  44. Chechkin, A.; Metzler, R.; Klafter, J.; Gonchar, V.Y. Introduction to the Theory of Lévy Flights. In Anomalous Transport: Foundations and Applications; Wiley-VCH: Weinheim, Germany, 2008.
  45. Yang, X.-S.; Deb, S. Multiobjective cuckoo search for design optimization. Comput. Oper. Res. 2013, 40, 1616–1624.
  46. Xin, Y.; Yong, L.; Guangming, L. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
  47. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103330.
  48. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995.
  49. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  50. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
  51. Fouad, M.M.; El-Desouky, A.I.; Al-Hajj, R.; El-Kenawy, E.S.M. Dynamic Group-Based Cooperative Optimization Algorithm. IEEE Access 2020, 8, 148378–148403.
  52. Zafar, M.H.; Khan, N.M.; Moosavi, S.K.R.; Mansoor, M.; Mirza, A.F.; Akhtar, N. Artificial Neural Network (ANN) trained by a Novel Arithmetic Optimization Algorithm (AOA) for Short Term Forecasting of Wind Power. In Proceedings of the International Conference on Intelligent Technologies and Applications (INTAP), Grimstad, Norway, 11–13 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 197–209.
  53. Liang, J.J.; Suganthan, P.N.; Chen, Q. Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization. Comput. Intell. Lab. Zhengzhou Univ. Zhengzhou China Nanyang Technol. Univ. Singap. Tech. Rep. 2013, 201212, 281–295.
  54. Mansoor, M.; Ling, Q.; Zafar, M.H. Short Term Wind Power Prediction using Feedforward Neural Network (FNN) trained by a Novel Sine-Cosine fused Chimp Optimization Algorithm (SChoA). In Proceedings of the 2022 5th International Conference on Energy Conservation and Efficiency (ICECE), Lahore, Pakistan, 16–17 March 2022; IEEE: Piscataway, NJ, USA, 2022.
  55. Available online: https://archive.ics.uci.edu/ml/index.php (accessed on 17 September 2022).
  56. Available online: https://www.kaggle.com/datasets (accessed on 30 September 2022).
  57. Al-Dahidi, S.; Ayadi, O.; Alrbai, M.; Adeeb, J. Ensemble approach of optimized artificial neural networks for solar photovoltaic power prediction. IEEE Access 2019, 7, 81741–81758.
  58. Şahin, S.; Türkeş, M. Assessing wind energy potential of Turkey via vectoral map of prevailing wind and mean wind of Turkey. Theor. Appl. Climatol. 2020, 141, 1351–1366.
  59. Available online: https://www.kaggle.com/datasets/berkerisen/wind-turbine-scada-dataset (accessed on 13 October 2022).
  60. Available online: https://www.kaggle.com/datasets/anikannal/solar-power-generation-data (accessed on 10 October 2022).
  61. Zafar, M.H.; Khan, U.A.; Khan, N.M. Hybrid Grey Wolf Optimizer Sine Cosine Algorithm based Maximum Power Point Tracking Control of PV Systems under Uniform Irradiance and Partial Shading Condition. In Proceedings of the 2021 4th International Conference on Energy Conservation and Efficiency (ICECE), Lahore, Pakistan, 16–17 March 2021; IEEE: Piscataway, NJ, USA, 2021.
  62. Zafar, M.H.; Khan, N.M.; Mirza, A.F.; Mansoor, M.; Akhtar, N.; Qadir, M.U.; Khan, N.A.; Moosavi, S.K.R. A novel meta-heuristic optimization algorithm based MPPT control technique for PV systems under complex partial shading condition. Sustain. Energy Technol. Assess. 2021, 47, 101367.
  63. Mansoor, M.; Mirza, A.F.; Duan, S.; Zhu, J.; Yin, B.; Ling, Q. Maximum energy harvesting of centralized thermoelectric power generation systems with non-uniform temperature distribution based on novel equilibrium optimizer. Energy Convers. Manag. 2021, 246, 114694.
Figure 1. 2-Dimensional Levy random walk along X and Y axis.
Figure 1. 2-Dimensional Levy random walk along X and Y axis.
Applsci 13 00945 g001
Figure 2. Flow chart of IRSA.
Figure 2. Flow chart of IRSA.
Applsci 13 00945 g002
Figure 3. Qualitative results for unimodal, multimodal, and fixed-dimension functions.
Figure 4. Convergence curves for F1–F23.
Figure 6. Radial Basis Function Neural Network.
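As a companion to Figure 6, here is a minimal sketch of a Gaussian RBF network forward pass. The smoothing parameter `sigma` plays the role of the σ that IRSA tunes; the random centres and output weights are simplified placeholders, not the trained model.

```python
import numpy as np

def rbf_forward(X, centers, sigma, w, b):
    """Forward pass of a Gaussian RBF network.

    X       : (n_samples, n_features) inputs
    centers : (n_hidden, n_features) RBF centres
    sigma   : smoothing parameter controlling kernel width
    w, b    : output-layer weights (n_hidden,) and bias (scalar)
    """
    # Squared Euclidean distance of every sample to every centre
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))  # hidden-layer activations
    return phi @ w + b                      # linear output layer

# Toy usage with random centres and weights (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
centers = rng.normal(size=(5, 3))
y = rbf_forward(X, centers, sigma=1.0, w=rng.normal(size=5), b=0.0)
print(y.shape)  # (8,)
```

A smaller σ makes each hidden unit respond only very near its centre (risking overfitting), while a larger σ smooths the response surface, which is why σ is treated as a hyperparameter worth optimizing.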
Figure 7. Flow chart of the IRSANN algorithm.
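The flow in Figure 7 amounts to treating the flattened network parameters as the optimizer's search agents and the training error as the fitness. A hedged sketch of that wiring is below; `irsa_minimize` is a deliberately simple random-search stand-in for the actual IRSA update rules, which are defined in the main text and not reproduced here.

```python
import numpy as np

def mlp_loss(theta, X, y, n_in, n_hidden):
    """Unpack a flat parameter vector into one hidden layer and score it (MSE)."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden]; i += n_hidden
    b2 = theta[i]
    h = np.tanh(X @ W1 + b1)           # hidden layer
    out = h @ W2 + b2                  # single output
    return np.mean((out - y) ** 2)     # fitness = training error

def irsa_minimize(f, dim, pop=30, iters=100):
    """Placeholder random-search stand-in for IRSA (for wiring only)."""
    agents = np.random.uniform(-1, 1, (pop, dim))
    best = min(agents, key=f)
    for _ in range(iters):
        agents = best + 0.1 * np.random.randn(pop, dim)  # perturb around best
        cand = min(agents, key=f)
        if f(cand) < f(best):
            best = cand
    return best

# Wiring: the optimizer searches weight space, the loss is the fitness
X, y = np.random.randn(50, 4), np.random.randn(50)
dim = 4 * 6 + 6 + 6 + 1                # n_in=4, n_hidden=6
theta = irsa_minimize(lambda t: mlp_loss(t, X, y, 4, 6), dim)
print(mlp_loss(theta, X, y, 4, 6))
```

Because the optimizer only ever queries the loss value, this scheme needs no gradients, which is what allows a metaheuristic such as IRSA to replace backpropagation for training the weights and biases.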
Figure 8. Convergence curves for classification.
Figure 9. Wind Prediction (Winter).
Figure 10. Wind Prediction (Summer).
Figure 11. Solar Power Prediction.
Table 1. Results for unimodal and multimodal functions for 50 dimensions, 500 iterations.

| Fun | Measure | IRSA | RSA | BMO | PSO | GWO | AOA | DGCO |
|---|---|---|---|---|---|---|---|---|
| F1 | Best | 0 | 0 | 6.030e-129 | 32.4203 | 4.7922e-35 | 6.706e-284 | 1.238e-21 |
| F1 | Worst | 0 | 0 | 1.961e-121 | 63.6854 | 8.0427e-33 | 1.2838e-114 | 2.1218e-19 |
| F1 | Average | 0 | 0 | 2.712e-122 | 43.2608 | 1.4545e-33 | 1.2838e-115 | 4.7446e-20 |
| F1 | STD | 0 | 0 | 6.153e-122 | 9.48589 | 2.4625e-33 | 4.0598e-115 | 7.9075e-20 |
| F2 | Best | 0 | 0 | 1.393e-67 | 2.35817 | 1.7406e-20 | 0 | 3.7704e-14 |
| F2 | Worst | 0 | 0 | 7.383e-65 | 5.32663 | 9.4593e-20 | 0 | 4.68526e-13 |
| F2 | Average | 0 | 0 | 1.841e-65 | 3.31147 | 4.5301e-20 | 0 | 1.10333e-13 |
| F2 | STD | 0 | 0 | 2.663e-65 | 0.89717 | 2.2708e-20 | 0 | 1.29549e-13 |
| F3 | Best | 0 | 0 | 4.0313e-92 | 5.5826e+03 | 4.5340e-04 | 0.0286 | 0.0025 |
| F3 | Worst | 0 | 0 | 2.9026e-76 | 9.1971e+03 | 0.0201 | 0.5838 | 25.4054 |
| F3 | Average | 0 | 0 | 1.1183e-76 | 7.8891e+03 | 0.0074 | 0.1907 | 5.7823 |
| F3 | STD | 0 | 0 | 1.5330e-76 | 1.3842e+03 | 0.0074 | 0.2277 | 10.9998 |
| F4 | Best | 0 | 0 | 7.3742e-58 | 5.43665 | 7.5397e-09 | 4.6289e-88 | 4.97610e-06 |
| F4 | Worst | 0 | 0 | 2.5295e-52 | 14.1279 | 4.4114e-08 | 0.05287 | 0.000247 |
| F4 | Average | 0 | 0 | 4.1512e-53 | 8.305702 | 1.8813e-08 | 0.024560 | 5.9520e-05 |
| F4 | STD | 0 | 0 | 8.765e-53 | 2.59027 | 1.274e-08 | 0.02222 | 7.3133e-05 |
| F5 | Best | 45.4705 | 48.9824 | 48.9417 | 5.7675e+03 | 46.1950 | 48.6070 | 46.1974 |
| F5 | Worst | 47.7889 | 48.9903 | 48.9984 | 1.0132e+04 | 47.8430 | 48.9439 | 47.8660 |
| F5 | Average | 46.4656 | 48.9883 | 48.9721 | 8.1573e+03 | 47.0300 | 48.7869 | 47.0569 |
| F5 | STD | 0.6506 | 0.0032 | 0.0173 | 2.0349e+03 | 0.8086 | 0.1402 | 0.5917 |
| F6 | Best | 1.58 | 11.97 | 11.39 | 150.00 | 2.25 | 6.43 | 3.48 |
| F6 | Worst | 2.67 | 12.25 | 12.44 | 284.60 | 2.89 | 7.44 | 5.78 |
| F6 | Average | 2.37 | 12.22 | 11.935 | 214.76 | 2.5 | 6.90 | 4.67 |
| F6 | STD | 0.25 | 0.0878 | 0.4067 | 47.02 | 0.27 | 0.34 | 0.66 |
| F7 | Best | 7.6084e-06 | 2.6639e-05 | 1.5133e-04 | 0.1347 | 4.8839e-04 | 1.6821e-07 | 0.0014 |
| F7 | Worst | 2.8963e-05 | 6.7811e-05 | 3.8440e-04 | 0.2985 | 0.0037 | 5.2418e-05 | 0.0154 |
| F7 | Average | 8.7740e-05 | 9.0710e-05 | 4.1659e-04 | 0.2155 | 0.0022 | 2.3443e-05 | 0.0071 |
| F7 | STD | 9.3116e-05 | 5.2933e-05 | 2.3023e-04 | 0.0592 | 0.0012 | 2.0923e-05 | 0.0055 |
| F8 | Best | −1.0646e+4 | −8.845e+3 | −5.358e+3 | −1.0646e+4 | −9.14e+3 | −8.0208e+03 | −1.0548e+04 |
| F8 | Worst | −6.8637e+3 | −4.57e+3 | −2.928e+03 | −6.8637e+3 | −6.13e+3 | −6.8298e+03 | −7.3571e+03 |
| F8 | Average | −8.7577e+3 | −6.04e+3 | −3.867e+03 | −8.7577e+3 | −7.32e+3 | −7.2473e+03 | −8.5385e+03 |
| F8 | STD | 1.0697e+3 | 1.2009e+3 | 863.4437 | 1.0697e+3 | 902.38 | 427.9226 | 849.1905 |
| F9 | Best | 0 | 0 | 0 | 114.0457 | 5.6843e-13 | 0 | 4.4338e-12 |
| F9 | Worst | 0 | 0 | 0 | 140.5094 | 5.42412 | 0 | 1.4080e-09 |
| F9 | Average | 0 | 0 | 0 | 131.6173 | 3.0390 | 0 | 4.7419e-10 |
| F9 | STD | 0 | 0 | 0 | 15.2178 | 2.7705 | 0 | 8.0872e-10 |
| F10 | Best | 8.8817e-16 | 8.8817e-16 | 8.8817e-16 | 2.98933 | 3.9968e-14 | 8.8817e-16 | 2.0863e-12 |
| F10 | Worst | 8.8817e-16 | 8.8817e-16 | 8.8817e-16 | 4.25557 | 5.0626e-14 | 8.8817e-16 | 20.2605 |
| F10 | Average | 8.8817e-16 | 8.8817e-16 | 8.881e-16 | 3.62755 | 4.3520e-14 | 8.8817e-16 | 4.04392 |
| F10 | STD | 0 | 0 | 0 | 0.38752 | 3.3495e-15 | 0 | 8.52536 |
| F11 | Best | 0 | 0 | 0 | 1.18496 | 0 | 0.009543 | 0 |
| F11 | Worst | 0 | 0 | 0 | 1.75459 | 0.014698 | 0.428251 | 0.019779 |
| F11 | Average | 0 | 0 | 0 | 1.39448 | 0.00368 | 0.159273 | 0.001977 |
| F11 | STD | 0 | 0 | 0 | 0.177805 | 0.006091 | 0.138820 | 0.006254 |
| F12 | Best | 0.0868 | 1.1349 | 0.7756 | 0.1567 | 0.0373 | 1.0089 | 0.1162 |
| F12 | Worst | 0.1513 | 1.4160 | 1.1959 | 0.2635 | 0.0583 | 1.0286 | 0.2057 |
| F12 | Average | 0.1534 | 1.3680 | 1.1688 | 0.2182 | 0.0459 | 1.0395 | 0.2604 |
| F12 | STD | 0.0373 | 0.1246 | 0.2121 | 0.0552 | 0.0110 | 0.0316 | 0.1348 |
| F13 | Best | 4.5760e-08 | 1.8359 | 0.1800 | 0.1624 | 0.8989 | 4.7901 | 2.8443 |
| F13 | Worst | 2.9444 | 3.8301 | 0.2521 | 0.3925 | 1.6895 | 4.9737 | 3.4369 |
| F13 | Average | 1.9711 | 3.0760 | 0.2071 | 0.2657 | 1.1783 | 4.9183 | 3.0976 |
| F13 | STD | 1.6991 | 0.7882 | 0.0293 | 0.0835 | 0.3008 | 0.0811 | 0.2333 |
Table 2. Results for fixed-dimension functions considering 500 iterations.

| Fun | Measure | IRSA | RSA | BMO | PSO | GWO | AOA | DGCO |
|---|---|---|---|---|---|---|---|---|
| F14 | Best | 0.9980 | 2.9821 | 2.0481 | 1.913 | 0.9980 | 7.8740 | 0.9980 |
| F14 | Worst | 2.9834 | 3.0006 | 11.7187 | 2.981 | 10.7632 | 12.6705 | 2.9821 |
| F14 | Average | 2.1313 | 2.9901 | 6.4665 | 2.523 | 5.1025 | 10.7579 | 2.1964 |
| F14 | STD | 0.8325 | 0.0139 | 3.0067 | 0.7341 | 4.9092 | 1.7957 | 0.6274 |
| F15 | Best | 0.00030 | 0.00068 | 0.00033 | 0.00038 | 0.00030 | 0.00036 | 0.00035 |
| F15 | Worst | 0.01550 | 0.00896 | 0.00654 | 0.00142 | 0.02036 | 0.05758 | 0.001224 |
| F15 | Average | 0.00406 | 0.00180 | 0.001710 | 0.000813 | 0.004417 | 0.009030 | 0.000709 |
| F15 | STD | 0.00548 | 0.00253 | 0.00223 | 0.00043 | 0.008408 | 0.018280 | 0.000237 |
| F16 | Best | −1.03162 | −1.03159 | −1.03162 | −1.03162 | −1.03162 | −1.03162 | −1.03162 |
| F16 | Worst | −1.03162 | −0.99552 | −0.91896 | −1.03162 | −1.03162 | −1.03162 | −1.03162 |
| F16 | Average | −1.03162 | −1.02159 | −1.01681 | −1.03162 | −1.03162 | −1.03162 | −1.03162 |
| F16 | STD | 2.8338e-08 | 0.01150 | 0.03493 | 5.4218e-08 | 1.6501e-08 | 4.1202e-09 | 3.1308e-09 |
| F17 | Best | 0.39788 | 0.40742 | 0.39788 | 0.39788 | 0.39788 | 0.39788 | 0.39788 |
| F17 | Worst | 0.397897 | 1.27099 | 0.42696 | 0.39788 | 0.39788 | 0.39788 | 0.39788 |
| F17 | Average | 0.397891 | 0.64371 | 0.40330 | 0.39788 | 0.39788 | 0.39788 | 0.39788 |
| F17 | STD | 3.8146e-06 | 0.25782 | 0.01085 | 7.2909e-07 | 4.2567e-07 | 4.3534e-08 | 3.6445e-07 |
| F18 | Best | 3.00000 | 3.00000 | 2.99999 | 3.00000 | 3.00000 | 3.00000 | 3.000000 |
| F18 | Worst | 3.000001 | 3.39511 | 42.1054 | 3.00011 | 3.00002 | 95.1885 | 3.000000 |
| F18 | Average | 3.000000 | 3.04555 | 10.1188 | 3.00002 | 3.00001 | 23.1248 | 3.00000 |
| F18 | STD | 5.1797e-07 | 0.12303 | 13.0276 | 3.477e-05 | 7.920e-06 | 28.7189 | 6.458e-10 |
| F19 | Best | −3.86277 | −3.85920 | −3.85436 | −3.86276 | −3.86277 | −3.86277 | −3.86278 |
| F19 | Worst | −3.85374 | −3.57053 | −1.28898 | −3.86201 | −3.85315 | −1.00081 | −3.86277 |
| F19 | Average | −3.86101 | −3.75028 | −3.44530 | −3.86263 | −3.86034 | −1.85933 | −3.86277 |
| F19 | STD | 0.00367 | 0.09664 | 0.81662 | 0.00024 | 0.00403 | 1.38235 | 3.6601e-06 |
| F20 | Best | −3.32086 | −2.89690 | −2.45549 | −3.32121 | −3.20304 | −3.32194 | −3.32189 |
| F20 | Worst | −3.20101 | −0.17365 | −0.54213 | −3.19851 | −3.08259 | −3.15098 | −3.13200 |
| F20 | Average | −3.27287 | −1.54187 | −1.24769 | −3.22497 | −3.17957 | −3.27776 | −3.21025 |
| F20 | STD | 0.06116 | 1.03850 | 0.75791 | 0.05061 | 0.04241 | 0.071964 | 0.06482 |
| F21 | Best | −10.1206 | −5.0552 | −4.9619 | −10.2512 | −10.0712 | −10.1531 | −10.1494 |
| F21 | Worst | −5.0552 | −5.0552 | −1.8717 | −5.0552 | −5.0552 | −5.0552 | −5.0549 |
| F21 | Average | −9.0840 | −5.0552 | −3.8482 | −6.0943 | −6.0745 | −6.0747 | −7.0926 |
| F21 | STD | 2.2573 | 3.8458e-07 | 1.3134 | 2.7922 | 2.2793 | 2.2798 | 2.7900 |
| F22 | Best | −10.3586 | −5.08767 | −5.05414 | −10.4029 | −10.4024 | −10.4029 | −10.3960 |
| F22 | Worst | −3.72361 | −5.08766 | −2.32919 | −3.72365 | −5.08765 | −3.72429 | −3.72380 |
| F22 | Average | −8.44500 | −5.08766 | −3.83632 | −5.87780 | −9.33904 | −7.60893 | −6.66468 |
| F22 | STD | 2.83980 | 1.0352e-06 | 0.98429 | 2.449069 | 2.24067 | 2.97300 | 3.25225 |
| F23 | Best | −10.52251 | −5.128480 | −4.99447 | −10.53637 | −10.5361 | −10.5363 | −10.5309 |
| F23 | Worst | −3.83461 | −5.12847 | −2.49710 | −3.83472 | −5.12845 | −2.29603 | −3.83496 |
| F23 | Average | −8.48605 | −5.12847 | −3.77437 | −6.62130 | −8.91314 | −7.34602 | −6.50874 |
| F23 | STD | 2.94732 | 2.3686e-06 | 0.76135 | 2.73097 | 2.61167 | 3.32172 | 3.45155 |
Table 3. Results for unimodal and multimodal functions for 30, 100, and 500 dimensions, 300 iterations.

| Fun | Dim | IRSA | RSA | BMO | PSO | GWO | AOA | DGCO |
|---|---|---|---|---|---|---|---|---|
| F1 | 30 | 0 | 0 | 2.9729e-75 | 100.429274 | 5.806e-20 | 1.74296e-76 | 5.93134e-10 |
| F1 | 100 | 0 | 0 | 7.502e-46 | 2951.7571 | 0.0012668 | 0.023492033 | 47.4212010 |
| F1 | 500 | 0 | 0 | 5.948e-41 | 91703.5649 | 109.12381 | 0.5862142 | 113.8580 |
| F2 | 30 | 0 | 0 | 1.0065e-39 | 4.59976602 | 7.467e-11 | 0 | 2.10414e-07 |
| F2 | 100 | 0 | 0 | 4.6351e-25 | 53.7327530 | 0.0134880 | 4.19883e-60 | 0.268447521 |
| F2 | 500 | 0 | 0 | 3.3724e-23 | 487.767601 | 0.64367 | 0.0016 | 12.519646 |
| F3 | 30 | 0 | 0 | 1.1399e-52 | 1321.79087 | 0.0001101 | 6.9638e-104 | 7.97466788 |
| F3 | 100 | 0 | 0 | 3.164e-28 | 50712.1407 | 14820.47 | 1.53198317 | 180032.5400 |
| F3 | 500 | 0 | 0 | 6.6866e-19 | 1303817.565 | 13880.05 | 36.03590 | 4104.83 |
| F4 | 30 | 0 | 0 | 5.6171e-32 | 8.65232085 | 4.478e-05 | 3.51206e-13 | 0.04110228 |
| F4 | 100 | 0 | 0 | 3.1527e-20 | 31.480699 | 5.7402399 | 0.10075662 | 282.157639 |
| F4 | 500 | 0 | 0 | 5.7108e-18 | 43.134284 | 65.76152 | 0.17991607 | 0.09866121 |
| F5 | 30 | 25.72424 | 28.99171 | 28.99576 | 21300.12057 | 27.136822 | 28.59270148 | 26.21687570 |
| F5 | 100 | 96.8983 | 98.9901048 | 98.9698060 | 226496.089 | 98.343648 | 98.9001267 | 297.5081886 |
| F5 | 500 | 498.3830 | 499.6902 | 499.0781 | 27910234.59 | 445.0271 | 499.1341 | 1564392.7 |
| F6 | 30 | 0.576633 | 7.0346180 | 6.97472880 | 111.668505 | 0.7535595 | 3.03144018 | 1.25053679 |
| F6 | 100 | 11.05309 | 24.7513018 | 24.5099804 | 2069.8528 | 29.4928888 | 18.4072873 | 13.8718994 |
| F6 | 500 | 111.5654 | 124.751626 | 123.3554516 | 4419.3854 | 268.90391 | 117.2773220 | 145.94373 |
| F7 | 30 | 9.50e-05 | 1.9397e-05 | 0.00050276 | 0.05970424 | 0.0019118 | 3.78351e-05 | 0.00390436 |
| F7 | 100 | 3.97e-05 | 2.2064e-05 | 0.0004319 | 1.2096514 | 0.0097439 | 5.96465e-06 | 0.0300833 |
| F7 | 500 | 8.21e-05 | 0.00046135 | 0.0008082 | 1235.665342 | 0.7073979 | 0.000119979 | 0.292147 |
| F8 | 30 | −5214.04 | −3431.1934 | −2834.3518 | −6549.69516 | −6277.543 | −4918.30201 | −5573.32119 |
| F8 | 100 | −64809.8 | −36018.762 | −18000.185 | −56587.461 | −16611.13 | −47550.1316 | −34933.7702 |
| F8 | 500 | −80242.2 | −44205.61 | −12898.155 | −36269.793 | −58117.96 | −22872.6112 | −24663.3214 |
| F9 | 30 | 0 | 0 | 0 | 103.812346 | 1.9054417 | 0 | 5.86970e-09 |
| F9 | 100 | 0 | 0 | 0 | 587.2054925 | 8.233470 | 0 | 20.62413137 |
| F9 | 500 | 0 | 0 | 0 | 4309.9727 | 9902.54104 | 0 | 466.7469966 |
| F10 | 30 | 8.881e-16 | 8.8817e-16 | 8.8817e-16 | 5.2518904 | 1.724e-10 | 8.88178e-16 | 1.63626e-06 |
| F10 | 100 | 8.881e-16 | 8.881e-16 | 8.8817e-16 | 7.8262077 | 5.792e-05 | 8.88178e-16 | 20.523618 |
| F10 | 500 | 8.881e-16 | 8.881e-16 | 4.440e-15 | 12.927665 | 2.140639 | 0.0083857 | 20.881255 |
| F11 | 30 | 0 | 0 | 0 | 1.8485067 | 0.0298839 | 0.23855046 | 1.26669e-10 |
| F11 | 100 | 0 | 0 | 0 | 28.3884325 | 0.0349668 | 897.847451 | 0.04874511 |
| F11 | 500 | 0 | 0 | 0 | 867.60304 | 3.1100432 | 13134.6129 | 162.849911 |
| F12 | 30 | 0.048112 | 1.59700365 | 0.93494039 | 0.01698208 | 0.0712204 | 0.81438277 | 0.045133459 |
| F12 | 100 | 0.374642 | 1.28994163 | 1.19529976 | 0.64770498 | 0.1179354 | 1.20962184 | 0.389415151 |
| F12 | 500 | 0.9743 | 1.2075 | 1.1980 | 2.0190 | 0.5999 | 1.1800 | 1.2411 |
| F13 | 30 | 1.280496 | 1.970635 | 2.6705939 | 0.15916980 | 0.1018868 | 2.9785990 | 1.600784929 |
| F13 | 100 | 7.876407 | 9.878585 | 9.993848 | 4.57433449 | 5.5499672 | 9.96362349 | 18.44409492 |
| F13 | 500 | 49.99682 | 49.99734 | 49.99725 | 82.318 | 46.351 | 49.998 | 54.177 |
Table 4. T̃ (sec) calculation for various metaheuristic algorithms.

| Fun | T IRSA | T RSA | T FDO | T BMO | T AOA | T PSO | T GWO | T DGCO | T FPA | T DFA |
|---|---|---|---|---|---|---|---|---|---|---|
| CEC01 | 440.04 | 498.64 | 9569.95 | 12.804 | 68.96 | 472.38 | 472.49 | 472.884 | 702.73 | 901.60 |
| CEC02 | 51.562 | 158.97 | 303.77 | 26.603 | 9.7899 | 10.097 | 11.746 | 14.144 | 420.60 | 858.17 |
| CEC03 | 66.843 | 192.89 | 491.69 | 35.672 | 17.950 | 18.158 | 19.011 | 22.222 | 481.66 | 1018.4 |
| CEC04 | 33.761 | 99.078 | 514.31 | 26.605 | 9.1060 | 10.075 | 11.035 | 12.389 | 266.56 | 627.08 |
| CEC05 | 34.373 | 99.519 | 744.31 | 27.667 | 9.4832 | 10.545 | 11.180 | 12.966 | 267.42 | 491.911 |
| CEC06 | 175.14 | 240.27 | 5176.01 | 89.23 | 165.30 | 169.64 | 168.52 | 170.18 | 423.23 | 641.59 |
| CEC07 | 34.831 | 99.440 | 384.83 | 24.768 | 9.7759 | 10.604 | 11.566 | 12.993 | 266.66 | 483.13 |
| CEC08 | 34.733 | 99.406 | 369.56 | 25.428 | 9.7797 | 10.683 | 11.522 | 12.848 | 267.13 | 481.31 |
| CEC09 | 33.670 | 98.69 | 383.27 | 23.551 | 8.3110 | 9.1362 | 10.298 | 11.728 | 266.52 | 492.78 |
Table 5. T (sec) calculation for various metaheuristic algorithms.

| Fun | T1 | T̃ IRSA | T̃ RSA | T̃ FDO | T̃ BMO | T̃ AOA | T̃ PSO | T̃ GWO | T̃ DGCO | T̃ FPA | T̃ DFA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CEC01 | 0.0105 | 43.017 | 48.736 | 934.105 | 0.1184 | 5.839 | 46.173 | 46.184 | 46.222 | 68.655 | 88.065 |
| CEC02 | 0.0042 | 5.101 | 15.584 | 29.717 | 2.665 | 1.024 | 1.054 | 1.215 | 1.449 | 41.120 | 83.826 |
| CEC03 | 0.0044 | 6.5924 | 18.895 | 48.058 | 3.5501 | 1.8205 | 1.84077 | 1.9240 | 2.2374 | 47.079 | 99.466 |
| CEC04 | 0.0031 | 3.3636 | 9.7386 | 50.266 | 2.6652 | 0.9572 | 1.05189 | 1.1456 | 1.2777 | 26.085 | 61.272 |
| CEC05 | 0.0035 | 3.4234 | 9.7816 | 72.714 | 2.7688 | 0.9940 | 1.09774 | 1.1597 | 1.3340 | 26.169 | 48.079 |
| CEC06 | 0.0068 | 17.163 | 23.519 | 505.251 | 8.538 | 16.202 | 16.6257 | 16.517 | 16.679 | 41.376 | 62.688 |
| CEC07 | 0.0044 | 3.4681 | 9.7739 | 37.628 | 2.4859 | 1.0226 | 1.10349 | 1.1974 | 1.3367 | 26.095 | 47.222 |
| CEC08 | 0.0045 | 3.4585 | 9.7706 | 36.138 | 2.5503 | 1.0230 | 1.1112 | 1.1931 | 1.3225 | 26.141 | 47.044 |
| CEC09 | 0.0036 | 3.3547 | 9.7015 | 37.476 | 2.3671 | 0.8796 | 0.9602 | 1.0736 | 1.2132 | 26.081 | 48.164 |
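Tables 4 and 5 report wall-clock time-complexity measures (T, T1, and T̃) in the style of the CEC benchmark protocol. As a hedged illustration of how such timings are typically collected, the sketch below times bare objective evaluations (T1) against complete algorithm runs (T2, averaged over several runs); the 200,000-evaluation budget and the averaging are assumptions from the usual protocol, not values taken from the paper.

```python
import time

def elapsed(fn, *args):
    """Wall-clock seconds for a single call."""
    t = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t

def complexity_report(algorithm, objective, n_evals=200_000, runs=5):
    """CEC-style timing sketch: T1 = bare function evaluations,
    T2 = mean full-algorithm time over several runs (assumed protocol)."""
    x = [0.1] * 10
    t1 = elapsed(lambda: [objective(x) for _ in range(n_evals)])
    t2 = sum(elapsed(algorithm, objective, n_evals) for _ in range(runs)) / runs
    return t1, t2

# Usage with a toy sphere objective and a dummy "algorithm"
sphere = lambda x: sum(v * v for v in x)
dummy_algo = lambda f, n: [f([0.1] * 10) for _ in range(n)]
print(complexity_report(dummy_algo, sphere))
```

Subtracting T1 from T2 (and normalizing by a machine-dependent baseline T0) isolates the optimizer's own bookkeeping overhead from the cost of the objective itself, which is the quantity the T̃ columns are meant to compare.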
Table 6. Description of the datasets used.

| Data Set | # Classes | # Features | No. of Training Samples | No. of Testing Samples |
|---|---|---|---|---|
| Iris | 3 | 4 | 100 | 50 |
| Heart | 2 | 13 | 203 | 100 |
| Stress-Lysis | 3 | 3 | 1340 | 661 |
| Banknote-authentication | 2 | 4 | 919 | 453 |
| Blood-transfusion | 2 | 4 | 501 | 247 |
| Cryotherapy | 2 | 6 | 60 | 30 |
| Diabetes | 2 | 8 | 514 | 254 |
Table 7. Training accuracy (%).

| Dataset | PSONN | BMOANN | AOANN | RSANN | IRSANN |
|---|---|---|---|---|---|
| Iris | 97 | 94 | 93 | 88.02 | 97 |
| Heart | 72.60726 | 66.9967 | 59.73597 | 42.24422 | 88 |
| Stress-Lysis | 91.75412 | 95.8021 | 70.16492 | 59.37031 | 99.42171 |
| Banknote-authentication | 94.42623 | 97.9235 | 85.68306 | 84.48087 | 99.45355 |
| Blood-transfusion | 79.95992 | 78.15631 | 77.35471 | 75.1503 | 77.55511 |
| Cryotherapy | 90 | 92 | 71.66667 | 81.66667 | 96.66667 |
| Diabetes | 74.80469 | 75 | 72.07031 | 66.79688 | 78.10156 |
| Haberman | 75 | 75 | 78.43137 | 79.41176 | 78.43137 |
Table 8. Testing accuracy (%).

| Dataset | PSONN | BMOANN | AOANN | RSANN | IRSANN |
|---|---|---|---|---|---|
| Iris | 94 | 98.66 | 92 | 91 | 98.23 |
| Heart | 70.9571 | 75.90759 | 58.08581 | 46.53465 | 85 |
| Stress-Lysis | 91.90405 | 97.0015 | 70.01499 | 57.12144 | 99.7121 |
| Banknote-authentication | 93.43545 | 96.49891 | 83.3698 | 85.33917 | 99.56236 |
| Blood-transfusion | 77.51004 | 77.91165 | 74.6988 | 79.11647 | 80.72289 |
| Cryotherapy | 93.33333 | 86.66667 | 80 | 76.66667 | 93.33333 |
| Diabetes | 67.57813 | 75.78125 | 76.1718867 | 69.92188 | 77.26563 |
| Haberman | 67.47059 | 76.47059 | 76.64706 | 70.09804 | 80.54902 |
Table 9. Cost evaluation on the classification datasets.

| Dataset | Measure | PSONN | BMOANN | AOANN | RSANN | IRSANN |
|---|---|---|---|---|---|---|
| Iris | Best | 0.078788 | 0.082034 | 0.342085 | 0.143403 | 0.072572 |
| Iris | Avg | 0.137949 | 0.087161 | 0.742686 | 0.327073 | 0.115797 |
| Iris | Std | 0.083667 | 0.00569 | 0.484778 | 0.159949 | 0.108159 |
| Heart | Best | 0.56952 | 0.545925 | 0.995477 | 0.808033 | 0.472722 |
| Heart | Avg | 0.640322 | 0.581627 | 1.372587 | 0.853887 | 0.517036 |
| Heart | Std | 0.062089 | 0.030938 | 0.53151 | 0.076074 | 0.038613 |
| Stress-Lysis | Best | 0.156856 | 0.136333 | 0.321634 | 0.248556 | 0.0911631 |
| Stress-Lysis | Avg | 0.159272 | 0.142592 | 0.353229 | 0.627416 | 0.268016 |
| Stress-Lysis | Std | 0.003418 | 0.007051 | 0.027562 | 0.328421 | 0.055127 |
| Banknote-authentication | Best | 0.318751 | 0.154256 | 0.556654 | 0.503623 | 0.085457 |
| Banknote-authentication | Avg | 0.331262 | 0.170783 | 0.764328 | 0.836328 | 0.145555 |
| Banknote-authentication | Std | 0.017693 | 0.016899 | 0.207149 | 0.288424 | 0.054529 |
| Blood-transfusion | Best | 0.823567 | 0.882403 | 0.960614 | 0.958203 | 0.887161 |
| Blood-transfusion | Avg | 0.830233 | 0.888582 | 1.260426 | 0.976322 | 0.895859 |
| Blood-transfusion | Std | 0.009426 | 0.008437 | 0.259695 | 0.015692 | 0.009459 |
| Cryotherapy | Best | 0.357101 | 0.756437 | 0.234832 | 0.542086 | 0.28827 |
| Cryotherapy | Avg | 0.362329 | 0.888982 | 0.292911 | 0.764291 | 0.305118 |
| Cryotherapy | Std | 0.007394 | 0.129127 | 0.050999 | 0.192532 | 0.127465 |
| Diabetes | Best | 1.048615 | 0.725924 | 0.730482 | 0.947266 | 0.646058 |
| Diabetes | Avg | 1.076385 | 0.743759 | 0.7562 | 0.982109 | 0.724891 |
| Diabetes | Std | 0.042824 | 0.019733 | 0.036372 | 0.040968 | 0.07128 |
| Haberman | Best | 0.892895 | 0.906456 | 0.977251 | 0.994621 | 0.881229 |
| Haberman | Avg | 0.936445 | 0.931774 | 1.269601 | 0.998591 | 0.925252 |
| Haberman | Std | 0.061589 | 0.00719 | 0.14037 | 0.003469 | 0.038883 |
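The cost values in Table 9 are the objective that each optimizer minimizes while training the classifier. Assuming a mean-squared-error form over one-hot targets (the paper's exact formulation may differ, e.g., cross-entropy), the cost can be computed as in this sketch:

```python
import numpy as np

def mse_cost(outputs: np.ndarray, targets_onehot: np.ndarray) -> float:
    """Assumed mean-squared-error form of the training cost in Table 9;
    the paper's exact formulation may differ (e.g., cross-entropy)."""
    return float(np.mean((outputs - targets_onehot) ** 2))

# Toy check: a perfect prediction has zero cost
y = np.eye(3)[[0, 1, 2]]   # one-hot targets for three samples
print(mse_cost(y, y))      # 0.0
```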
Table 10. Statistical measures for classification.

| Data Set | Technique | Precision (Train) | Recall (Train) | F1 Score (Train) | Precision (Test) | Recall (Test) | F1 Score (Test) |
|---|---|---|---|---|---|---|---|
| Iris | IRSA | 0.979142 | 0.979923 | 0.979533 | 0.962222 | 0.967178 | 0.969693 |
| Iris | RSA | 0.891866 | 0.875316 | 0.883513 | 0.832602 | 0.796296 | 0.814045 |
| Iris | BMO | 0.944356 | 0.943915 | 0.944136 | 0.977778 | 0.977778 | 0.977778 |
| Iris | AOA | 0.680918 | 0.647619 | 0.663851 | 0.695926 | 0.655556 | 0.675138 |
| Iris | PSO | 0.972973 | 0.969697 | 0.971332 | 0.964912 | 0.958333 | 0.961612 |
| Heart | IRSA | 0.888428 | 0.884587 | 0.88503 | 0.848726 | 0.851251 | 0.854971 |
| Heart | RSA | 0.462116 | 0.462022 | 0.462069 | 0.60397 | 0.603725 | 0.603848 |
| Heart | BMO | 0.73858 | 0.636917 | 0.683992 | 0.740629 | 0.660205 | 0.698108 |
| Heart | AOA | 0.480976 | 0.481498 | 0.481237 | 0.482877 | 0.486242 | 0.484554 |
| Heart | PSO | 0.657688 | 0.651564 | 0.654612 | 0.602002 | 0.577957 | 0.589735 |
| Stress-Lysis | IRSA | 0.862953 | 0.869583 | 0.866255 | 0.831595 | 0.842565 | 0.837044 |
| Stress-Lysis | RSA | 0.551258 | 0.515412 | 0.514234 | 0.55435 | 0.508844 | 0.61734 |
| Stress-Lysis | BMO | 0.959656 | 0.951436 | 0.955528 | 0.975669 | 0.96373 | 0.969663 |
| Stress-Lysis | AOA | 0.768447 | 0.674813 | 0.718593 | 0.784879 | 0.673437 | 0.7249 |
| Stress-Lysis | PSO | 0.879358 | 0.895391 | 0.887302 | 0.889297 | 0.905763 | 0.897455 |
| Banknote-authentication | IRSA | 0.995792 | 0.994563 | 0.995225 | 0.995902 | 0.995349 | 0.995625 |
| Banknote-authentication | RSA | 0.844413 | 0.838669 | 0.841531 | 0.85288 | 0.852147 | 0.852513 |
| Banknote-authentication | BMO | 0.97761 | 0.98115 | 0.979377 | 0.963706 | 0.967257 | 0.965478 |
| Banknote-authentication | AOA | 0.57296 | 0.57642 | 0.694746 | 0.809533 | 0.812795 | 0.811161 |
| Banknote-authentication | PSO | 0.942964 | 0.944103 | 0.943533 | 0.933375 | 0.934613 | 0.933994 |
| Blood-transfusion | IRSA | 0.729753 | 0.595356 | 0.655739 | 0.691121 | 0.570559 | 0.62508 |
| Blood-transfusion | RSA | 0.627282 | 0.508097 | 0.561434 | 0.730453 | 0.531909 | 0.615567 |
| Blood-transfusion | BMO | 0.683964 | 0.579004 | 0.627122 | 0.652637 | 0.552494 | 0.598405 |
| Blood-transfusion | AOA | 0.761869 | 0.511741 | 0.612243 | 0.624494 | 0.505248 | 0.558578 |
| Blood-transfusion | PSO | 0.739583 | 0.621758 | 0.675572 | 0.686359 | 0.564815 | 0.619683 |
| Cryotherapy | IRSA | 0.950774 | 0.943611 | 0.957148 | 0.95 | 0.916667 | 0.933036 |
| Cryotherapy | RSA | 0.818449 | 0.806561 | 0.812462 | 0.766667 | 0.767857 | 0.767261 |
| Cryotherapy | BMO | 0.915882 | 0.928276 | 0.912064 | 0.819444 | 0.830144 | 0.824759 |
| Cryotherapy | AOA | 0.699177 | 0.690236 | 0.694678 | 0.638889 | 0.633333 | 0.636099 |
| Cryotherapy | PSO | 0.897321 | 0.902715 | 0.90001 | 0.944444 | 0.928571 | 0.936441 |
| Diabetes | IRSA | 0.772165 | 0.7483 | 0.760045 | 0.683296 | 0.676977 | 0.690122 |
| Diabetes | RSA | 0.626005 | 0.601743 | 0.613634 | 0.66266 | 0.610454 | 0.635487 |
| Diabetes | BMO | 0.724919 | 0.726392 | 0.725655 | 0.699808 | 0.722159 | 0.710808 |
| Diabetes | AOA | 0.651455 | 0.614956 | 0.632679 | 0.670297 | 0.643082 | 0.656407 |
| Diabetes | PSO | 0.704908 | 0.676813 | 0.734665 | 0.765362 | 0.699496 | 0.730948 |
| Haberman | IRSA | 0.714555 | 0.561729 | 0.638992 | 0.677083 | 0.54118 | 0.623362 |
| Haberman | RSA | 0.519912 | 0.501389 | 0.510477 | 0.650000 | 0.517637 | 0.576316 |
| Haberman | BMO | 0.700376 | 0.56504 | 0.625471 | 0.632979 | 0.550607 | 0.588927 |
| Haberman | AOA | 0.655914 | 0.574148 | 0.612313 | 0.662338 | 0.58316 | 0.620232 |
| Haberman | PSO | 0.674919 | 0.605186 | 0.638153 | 0.662837 | 0.58249 | 0.620072 |
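For reference, the measures in Table 10 follow their standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN):

$$\mathrm{Precision}=\frac{TP}{TP+FP},\qquad \mathrm{Recall}=\frac{TP}{TP+FN},\qquad F_1=\frac{2\cdot\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}$$

For the multi-class datasets (Iris and Stress-Lysis), the reported scores are presumably per-class values averaged across classes (macro-averaging), although the paper does not restate the averaging scheme here.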
Table 11. Statistical evaluation for wind and solar prediction.

| Data Set | Technique | RE (Train) | RMSE (Train) | R2 (Train) | RE (Test) | RMSE (Test) | R2 (Test) | Cost |
|---|---|---|---|---|---|---|---|---|
| Wind power prediction (winter) | PSO | 0.0157 | 11.8303 | 0.9835 | 0.0886 | 55.6747 | 0.9081 | 0.0121 |
| Wind power prediction (winter) | RSA | 0.0170 | 16.6071 | 0.9657 | 0.1065 | 61.2748 | 0.8890 | 0.0326 |
| Wind power prediction (winter) | BMO | 0.0959 | 32.5220 | 0.8278 | 0.2318 | 107.9345 | 0.5259 | 0.0444 |
| Wind power prediction (winter) | AOA | 0.0356 | 26.9534 | 0.8961 | 0.1135 | 59.1465 | 0.8749 | 0.0373 |
| Wind power prediction (winter) | IRSA | 0.0049 | 8.3396 | 0.9918 | 0.0642 | 32.7880 | 0.9665 | 0.0105 |
| Wind power prediction (summer) | PSO | 0.0049 | 2.8918 | 0.9907 | 0.0772 | 9.5937 | 0.9780 | 0.0193 |
| Wind power prediction (summer) | RSA | 0.0055 | 4.4855 | 0.9778 | 0.6042 | 38.7233 | 0.6423 | 0.0251 |
| Wind power prediction (summer) | BMO | 0.0808 | 5.8582 | 0.9460 | 0.2983 | 22.3706 | 0.8291 | 0.0428 |
| Wind power prediction (summer) | AOA | 0.0339 | 4.0871 | 0.9779 | 0.4688 | 30.6282 | 0.7565 | 0.0208 |
| Wind power prediction (summer) | IRSA | 0.0015 | 2.8416 | 0.9912 | 0.0632 | 9.3401 | 0.9801 | 0.0184 |
| PV power prediction | PSO | 0.0235 | 94.9000 | 0.9649 | 0.0762 | 298.7930 | 0.9420 | 0.0535 |
| PV power prediction | RSA | 0.0144 | 119.6847 | 0.9626 | 0.0440 | 223.8701 | 0.9656 | 0.0410 |
| PV power prediction | BMO | 0.0359 | 148.5061 | 0.9150 | 0.1148 | 232.2480 | 0.9545 | 0.0827 |
| PV power prediction | AOA | 0.0429 | 183.5005 | 0.8987 | 0.1111 | 383.4890 | 0.8913 | 0.0963 |
| PV power prediction | IRSA | 0.0146 | 90 | 0.9761 | 0.1285 | 260.8531 | 0.9611 | 0.0234 |
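For reference, RMSE and R2 in Table 11 have their standard definitions for predictions $\hat{y}_i$ against measured values $y_i$:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2},\qquad R^2=1-\frac{\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}$$

RE is presumably the mean relative error, $\mathrm{RE}=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{y_i-\hat{y}_i}{y_i}\right|$ (or a normalized variant); the paper does not restate its exact form in this section.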
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
