Article

Hybrid Reptile Search Algorithm and Remora Optimization Algorithm for Optimization Tasks and Data Clustering

by Khaled H. Almotairi 1,* and Laith Abualigah 2,3,*
1 Computer Engineering Department, Umm Al-Qura University, Makkah 24382, Saudi Arabia
2 Faculty of Computer Sciences and Informatics, Amman Arab University, Amman 11953, Jordan
3 School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(3), 458; https://doi.org/10.3390/sym14030458
Submission received: 21 January 2022 / Revised: 6 February 2022 / Accepted: 18 February 2022 / Published: 24 February 2022

Abstract

Data clustering is a complex data mining problem that clusters a massive number of data objects into a predefined number of clusters; in other words, it finds symmetric and asymmetric objects. Various optimization methods have been used to solve different machine learning problems. They usually suffer from local optima and an imbalance between the search mechanisms. This paper proposes a novel hybrid optimization method for solving various optimization problems. The proposed method is called HRSA; it combines the original Reptile Search Algorithm (RSA) and Remora Optimization Algorithm (ROA) and handles these mechanisms' search processes with a novel transition method. The proposed HRSA method aims to avoid the main weaknesses of the original methods and to find better solutions. The proposed HRSA is tested on various complicated optimization problems: twenty-three benchmark test functions and eight data clustering problems. The obtained results illustrate that the proposed HRSA method performs significantly better than the original and comparative state-of-the-art methods. The proposed method outperformed all the comparative methods on the mathematical problems and obtained promising results in solving the clustering problems. Thus, HRSA shows remarkable efficacy when employed for various clustering problems.

1. Introduction

Unsupervised learning methods are instrumental in machine learning because they can explore data without any prior knowledge of them, i.e., there are no labels linked with the data [1]. These algorithms try to represent the data's underlying mechanism or pattern, which may be helpful for tasks such as decision making and forecasting future inputs. Clustering and feature extraction methods are classic examples of unsupervised algorithms [2,3].
Clustering is an important unsupervised method that discovers homogeneous groupings among a set of data objects or items [4]. Clustering is used to split data items into groups so that objects in the same cluster are similar to one another and different from objects in other clusters. Clustering algorithms have been utilized in a wide range of applications. They are used in biology to extract interesting patterns from gene expression [5,6]. Additionally, they are used to divide wireless sensor networks into groups, and in information retrieval for grouping text content and generating thematic hierarchies. For a product search task, clustering can be utilized to group people or objects [7].
Data and available information are the root of today's fast-paced development model [8], and they are extensively employed in various information technology applications such as manufacturing, marketing, and commerce. Because of the sheer volume of data, data mining is one of the most critical methods for extracting meaningful information. Data mining is a revolutionary idea utilized to tailor knowledge across numerous industries, such as medical records examination, client transaction analysis, and market surveying programs [9]. Clustering and classification algorithms are the two most essential tools for extracting information in data mining. As a result, data clustering has become a valuable and demanding process for generating clusters in which similar data are grouped together. Data clustering is achieved using two clustering mechanisms: hierarchical and partitional clustering [10,11]. In hierarchical clustering, the data items are grouped hierarchically, in the shape of a tree. The partitional clustering technique, on the other hand, produces non-overlapping groups. This method must cope with noise and outliers, manageability, integration, usability, and other issues in real-world applications. K-means, the Fuzzy c-means method, density-based methods, and others are among the most frequently used clustering techniques [12,13].
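For concreteness, the objective that partitional clustering methods (including the metaheuristic approaches surveyed below) typically minimize is the sum of squared intra-cluster distances. A minimal sketch follows; the helper names `assign_labels` and `clustering_sse` are hypothetical, not taken from the paper:

```python
import numpy as np

def assign_labels(data, centroids):
    """Assign each object to its nearest centroid (partitional clustering)."""
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def clustering_sse(data, centroids, labels):
    """Sum of squared Euclidean distances between each object and its
    assigned centroid -- the fitness a metaheuristic clusterer minimizes."""
    return sum(np.sum((data[labels == k] - c) ** 2)
               for k, c in enumerate(centroids))
```

With four points in two well-separated pairs and centroids at the pair midpoints, each point contributes 0.25 to the objective.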
Several related works have been conducted using optimization techniques to solve clustering problems, and they are worth mentioning [14,15]. They motivated us to propose a new, efficient method to deal with various data clustering issues.
A novel Class Topper Optimization (CTO) method is presented in [16]. The optimization method is inspired by the cognitive intelligence of pupils in a class and uses a population-based search technique in which the solution narrows towards the optimum, which may lead to a globally optimal solution. A clustering challenge is explored to validate the algorithm's performance, and five common datasets are examined for real-time testing. A comparison of the proposed approach with many well-known existing optimization techniques reveals that it outperforms them all.
A novel clustering approach is presented based on the evolution particle swarm optimization (EPSO) algorithm [17]. The suggested approach is based on the development of swarm generations. At first, the particles are equally dispersed in the input data set, and a new swarm population emerges after a defined number of repetitions. That work covers the novel technique and its initial implementation and testing using real clustering benchmark data. The findings demonstrate that the technique creates compact clusters and is effective.
A modernized firefly technique is combined with the well-known PSO approach to handle automated data clustering difficulties [18]. The proposed fusion technique is evaluated against four standard metaheuristic algorithms from the literature using twelve common datasets and the two moons dataset. The thorough computational tests and results analysis reveal that the suggested approach outperforms the standard published methods.
The ABCWOA method, which uses Random Memory (RM) and Elite Memory (EM) to overcome exploration problems and late convergence in ABC, is presented in [19]. In the ABCWOA method, RM employs the prey-search phase of the whale optimization algorithm (WOA), and EM is used to boost convergence. The ABCWOA algorithm outperforms other metaheuristic algorithms, according to the results.
This study suggests a novel heuristic strategy based on the big bang–big crunch algorithm for clustering issues [20]. In comparison to previous heuristic strategies, the suggested method not only exploits its heuristic nature to enhance standard clustering algorithms such as k-means, but also benefits from a memory-based scheme. Furthermore, the proposed algorithm is assessed using various benchmark functions and well-known datasets. The experimental findings reveal that the suggested technique substantially outperforms similar algorithms.
This study proposes K–NM–PSO, a hybrid approach that combines the K-means algorithm, Nelder–Mead simplex search, and PSO [21]. The K–NM–PSO method, like the K-means algorithm, looks for cluster centers in any data set; however, it can also discover global optima effectively and rapidly. The results suggest that K–NM–PSO is reliable and appropriate for data clustering. Other optimization methods can also be used to solve data clustering problems. Additionally, many optimization methods have been used to solve various problems [22]. It is also worth mentioning that optimization methods have proven their ability in solving many other problems, such as solving complex engineering optimization problems [23], reactive power planning problems [24], parameter estimation problems [25], voltage-constrained power problems [26], and others.
The Reptile Search Algorithm (RSA) is a recent and novel optimizer proposed by Abualigah et al. in [27]. This method mimics the behavior of crocodiles in two principal activities, encircling and hunting, each of which is performed with two main strategies: (1) encircling: high walking or belly walking; (2) hunting: hunting coordination or hunting cooperation. The Remora Optimization Algorithm (ROA) is a recent, bionics-based optimizer proposed by Jia et al. in [28]. The main inspiration of the ROA is the behavior of the remora fish.
Various optimization methods have been employed in the literature to solve different machine learning problems, especially clustering problems. The proposed method's primary motivation is that recent studies that used optimization methods for similar problems yielded performances that were not good enough, and a new, improved method can find new optimal solutions. Optimization methods usually suffer from local optima and an imbalance between the search mechanisms (i.e., exploration and exploitation). Moreover, we selected the most reputable methods to incorporate and created a new hybrid approach to yield new and better results. This paper proposes a novel hybrid optimization search method for solving challenging optimization problems. The proposed method is called HRSA; it combines the original Reptile Search Algorithm (RSA) and Remora Optimization Algorithm (ROA) and handles these mechanisms' search processes with a novel transition method. The proposed HRSA method addresses the main weaknesses of the original methods and finds better solutions. The proposed HRSA is tested on two classes of optimization problems: twenty-three benchmark test functions and eight real-world data clustering problems. The results illustrate that the proposed HRSA method performs significantly better than the original and comparative state-of-the-art methods. The proposed method outperformed all the comparative methods on the mathematical problems and obtained promising results in solving the clustering problems. Thus, HRSA has a remarkable ability to be employed for various clustering problems.
The major contributions in this paper are covered in the following points.
  • A novel hybrid optimization method is proposed using the original Reptile Search Algorithm (RSA) and Remora Optimization Algorithm (ROA).
  • A new transition method is proposed to handle the mechanisms’ search processes and help the proposed method enable the suitable search operator during the optimization process.
  • The performance of the proposed HRSA method is tested on several benchmark functions to demonstrate its main advantages.
  • A real-world problem, data clustering, is used to prove further the proposed HRSA’s ability to deal with complicated problems, finding symmetric and asymmetric objects.
  • The results showed the superiority of the proposed method in solving the given problems compared to various other state-of-the-art methods.
In the rest of the paper, Section 2 presents the background of the used methods and the procedure of the proposed method. Section 3 shows the experiments, results, and discussion. Section 4 gives the research conclusions and potential future work.

2. Background

This section presents the original methods used in the proposed method. Additionally, the main procedure of the proposed hybrid Reptile Search Algorithm (RSA) and Remora Optimization Algorithm (ROA) with a novel transition mechanism (HRSA) is presented.

2.1. Reptile Search Algorithm (RSA)

This section presents the original Reptile Search Algorithm (RSA) and its procedure. The basic Reptile Search Algorithm (RSA) is described in its exploration (global search) and exploitation (local search) stages, which were inspired by the encircling mechanics, hunting processes, and social behavior of crocodiles in real life [27].

2.1.1. Encircling Phase (Exploration)

The exploratory behavior (encircling) of the RSA is introduced in this section. Crocodiles engage in two movements when encircling: high walking and belly walking, according to their encircling behavior [29].
The RSA switches between exploration and exploitation search phases by dividing the total number of iterations into four parts. The RSA exploration mechanisms investigate the search regions to discover a better solution based on two major search techniques.
One condition must be met throughout this phase of the search. The high walking search method is performed when t ≤ T/4, and the belly walking search method is performed when t ≤ 2T/4 and t > T/4. The position-updating process is presented in Equation (1).
$$x_{(i,j)}(t+1) = \begin{cases} Best_j(t) \times \left(-\eta_{(i,j)}(t)\right) \times \beta - R_{(i,j)}(t) \times rand, & t \le \frac{T}{4} \\ Best_j(t) \times x_{(r_1,j)} \times ES(t) \times rand, & t \le \frac{2T}{4} \text{ and } t > \frac{T}{4} \end{cases} \quad (1)$$
where Best_j(t) is the best-obtained solution, rand is a random number, t is the current iteration, and T is the maximum number of iterations. η_(i,j) is the hunting parameter determined by Equation (2). β is a parameter fixed to 0.1. The reduce function R_(i,j) is determined by Equation (3). r_1–r_4 are random numbers, x_(r1,j) is a random position, and N is the number of solutions. The Evolutionary Sense ES(t) is a probability parameter determined by Equation (4).
$$\eta_{(i,j)} = Best_j(t) \times P_{(i,j)}, \quad (2)$$
$$R_{(i,j)} = \frac{Best_j(t) - x_{(r_2,j)}}{Best_j(t) + \epsilon}, \quad (3)$$
$$ES(t) = 2 \times r_3 \times \left(1 - \frac{1}{T}\right), \quad (4)$$
where ϵ is a small value. P_(i,j) is a difference parameter determined by Equation (5).
$$P_{(i,j)} = \alpha + \frac{x_{(i,j)} - M(x_i)}{Best_j(t) \times \left(UB(j) - LB(j)\right) + \epsilon}, \quad (5)$$
where M(x_i) represents the average position of the ith solution, determined by Equation (6). UB(j) and LB(j) are the upper and lower boundaries. α is a parameter fixed to 0.1.
$$M(x_i) = \frac{1}{n} \sum_{j=1}^{n} x_{(i,j)}, \quad (6)$$
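A minimal NumPy sketch of one encircling (exploration) step following Equations (1)–(6); scalar bounds UB/LB, r3 drawn uniformly from [−1, 1], and the final clipping to [LB, UB] are implementation assumptions, not part of the equations:

```python
import numpy as np

def rsa_explore(X, best, t, T, UB, LB, alpha=0.1, beta=0.1, eps=1e-10):
    """One encircling (exploration) step of RSA, following Eqs. (1)-(6).
    X is the (N, n) population; best is the best solution found so far."""
    N, n = X.shape
    r3 = np.random.uniform(-1, 1)
    ES = 2 * r3 * (1 - 1 / T)                     # Evolutionary Sense, Eq. (4)
    Xnew = np.empty_like(X)
    for i in range(N):
        M = X[i].mean()                           # mean position, Eq. (6)
        for j in range(n):
            P = alpha + (X[i, j] - M) / (best[j] * (UB - LB) + eps)   # Eq. (5)
            eta = best[j] * P                     # hunting parameter, Eq. (2)
            R = (best[j] - X[np.random.randint(N), j]) / (best[j] + eps)  # Eq. (3)
            if t <= T / 4:                        # high walking branch of Eq. (1)
                Xnew[i, j] = best[j] * (-eta) * beta - R * np.random.rand()
            else:                                 # belly walking branch of Eq. (1)
                Xnew[i, j] = best[j] * X[np.random.randint(N), j] * ES * np.random.rand()
    return np.clip(Xnew, LB, UB)                  # keep solutions inside the bounds
```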

2.1.2. Hunting Phase (Exploitation)

The exploitative behavior of RSA is discussed in this section. Crocodiles use two hunting techniques: hunting coordination and hunting collaboration, according to their hunting behavior.
The search in this phase (hunting coordination) is executed when t ≤ 3T/4 and t > 2T/4; otherwise, hunting cooperation is executed, when t ≤ T and t > 3T/4. The position-updating processes are presented in Equation (7):
$$x_{(i,j)}(t+1) = \begin{cases} Best_j(t) \times P_{(i,j)}(t) \times rand, & t \le \frac{3T}{4} \text{ and } t > \frac{2T}{4} \\ Best_j(t) - \eta_{(i,j)}(t) \times \epsilon - R_{(i,j)}(t) \times rand, & t \le T \text{ and } t > \frac{3T}{4} \end{cases} \quad (7)$$
where Best_j(t) is the best obtained solution, η_(i,j) is the hunting parameter determined by Equation (2), P_(i,j) is the difference parameter determined by Equation (5), and R_(i,j) is determined by Equation (3).
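The hunting phase can be sketched in the same style as the exploration step; again, scalar bounds and the final clipping are assumptions of this sketch rather than part of Equation (7):

```python
import numpy as np

def rsa_exploit(X, best, t, T, UB, LB, alpha=0.1, eps=1e-10):
    """One hunting (exploitation) step of RSA, following Eq. (7)."""
    N, n = X.shape
    Xnew = np.empty_like(X)
    for i in range(N):
        M = X[i].mean()                           # mean position, Eq. (6)
        for j in range(n):
            P = alpha + (X[i, j] - M) / (best[j] * (UB - LB) + eps)   # Eq. (5)
            eta = best[j] * P                     # hunting parameter, Eq. (2)
            R = (best[j] - X[np.random.randint(N), j]) / (best[j] + eps)  # Eq. (3)
            if t <= 3 * T / 4:                    # hunting coordination branch
                Xnew[i, j] = best[j] * P * np.random.rand()
            else:                                 # hunting cooperation branch
                Xnew[i, j] = best[j] - eta * eps - R * np.random.rand()
    return np.clip(Xnew, LB, UB)
```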

2.2. Remora Optimization Algorithm (ROA)

This section presents the original Remora Optimization Algorithm (ROA) and its procedure, which is as follows [28].

2.2.1. Free Travel (Exploration)

SFO Strategy

The formulation of this algorithm's location update was modeled on the SFO algorithm's elite notion, as given by Equation (8).
$$R_i^{t+1} = R_{best}^t - \left(rand \times \frac{R_{best}^t + R_{rand}^t}{2} - R_{rand}^t\right) \quad (8)$$
where R_rand^t is a random location.

Experience Attack

To evaluate whether or not it is required to replace the host, the remora regularly takes modest steps around the host, analogous to accumulating experience. The principles mentioned above are modeled by the following formula:
$$R_{att} = R_i^t + \left(R_i^t - R_{pre}\right) \times randn \quad (9)$$
where R_pre is the position of the previous iteration, and R_att is a tentative step.
The decision at this step is made by evaluating the fitness function of the present solution f(R_i^t) against that of the attempted solution f(R_att). For example, when addressing a minimization problem, the attempted step is better if its fitness value is less than that of the present solution,
$$f(R_i^t) > f(R_{att}) \quad (10)$$
In that case, the Remora chooses a different mechanism to escape local optima, as shown in the following section. It returns to host selection if the fitness value of the attempted solution is greater than that of the existing solution.
$$f(R_i^t) < f(R_{att}) \quad (11)$$
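The free-travel steps above, Equations (8)–(11), can be sketched as follows; the function names are illustrative, and the acceptance rule assumes a minimization problem as in the text:

```python
import numpy as np

def sfo_step(R_best, R_rand):
    """Elite (SFO-based) location update, Eq. (8)."""
    return R_best - (np.random.rand() * (R_best + R_rand) / 2 - R_rand)

def experience_attack(R_i, R_pre):
    """Tentative small step around the current host, Eq. (9)."""
    return R_i + (R_i - R_pre) * np.random.randn()

def accept_attack(f, R_i, R_att):
    """Keep the tentative step only if it improves the minimized fitness,
    i.e. f(R_i) > f(R_att) as in Eq. (10); otherwise stay put."""
    return R_att if f(R_att) < f(R_i) else R_i
```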

2.2.2. Eat Thoughtfully (Exploitation)

WOA Strategy

The location update formula of a Remora attached to a whale is derived from the original WOA technique, as seen in the equations below:
$$R_{i+1} = D \times e^{\alpha} \times \cos(2\pi\alpha) + R_i \quad (12)$$
$$\alpha = rand \times (a - 1) + 1 \quad (13)$$
$$a = -\left(1 + \frac{t}{T}\right) \quad (14)$$
$$D = |R_{best} - R_i| \quad (15)$$
Their locations might be considered the same when a Remora is on a whale in the broader solution space. D is the distance between the hunter and the prey, α is a random value in the range [−1, 1], and a is a number that decreases linearly from −1 to −2 over the iterations.
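The spiral update of Equations (12)–(15) can be sketched as a single function; the vectorized form (applying the same α to every dimension) is an assumption of this sketch:

```python
import numpy as np

def woa_spiral_step(R_i, R_best, t, T):
    """Spiral position update borrowed from WOA, Eqs. (12)-(15)."""
    a = -(1 + t / T)                        # Eq. (14): decreases from -1 to -2
    alpha = np.random.rand() * (a - 1) + 1  # Eq. (13): random spiral exponent
    D = np.abs(R_best - R_i)                # Eq. (15): hunter-prey distance
    return D * np.exp(alpha) * np.cos(2 * np.pi * alpha) + R_i   # Eq. (12)
```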

Host Feeding

The exploitation procedure is further subdivided into host feeding. At this point, the optimal solution can be condensed to the host’s location area. Incremental steps can be thought of as moving on or around the host, which can be mathematically characterized as:
$$R_i^t = R_i^t + A \quad (16)$$
$$A = B \times \left(R_i^t - C \times R_{best}\right) \quad (17)$$
$$B = 2 \times V \times rand - V \quad (18)$$
$$V = 2 \times \left(1 - \frac{t}{T}\right) \quad (19)$$
A was employed here to represent a tiny movement related to the size space of the host and Remora. A Remora factor (C) was utilized to limit the position of the Remora and to distinguish between the host and the Remora. If the size of the host is 1, then the volume of the Remora is a small fraction of the host's volume.
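Host feeding, Equations (16)–(19), can be sketched as follows; the default C = 0.1 is an assumed value for illustration, since the text does not fix it:

```python
import numpy as np

def host_feeding(R_i, R_best, t, T, C=0.1):
    """Small movement around the host, Eqs. (16)-(19).
    C is the Remora factor; 0.1 is an assumed default."""
    V = 2 * (1 - t / T)                 # Eq. (19): shrinks to 0 over iterations
    B = 2 * V * np.random.rand() - V    # Eq. (18): random value in [-V, V]
    A = B * (R_i - C * R_best)          # Eq. (17): small movement vector
    return R_i + A                      # Eq. (16)
```

Because V reaches 0 at the final iteration, the step size vanishes and the position is returned unchanged.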

2.3. The Proposed HRSA Method

This section presents the main procedure of the proposed hybrid Reptile Search Algorithm (RSA) and Remora Optimization Algorithm (ROA) with a novel transition mechanism (HRSA).
The suggested HRSA is used to solve various issues using two major search strategies and a new mean transition mechanism. The traditional RSA is the principal search technique, which has strong global search capabilities but occasionally suffers from local search problems, premature convergence, and disequilibrium between global and local search methods. As a result, the second search strategy, the ROA, avoids the local search issue and premature convergence. The strategy can improve the RSA’s searchability by generating new local solutions based on the best solutions currently available. Furthermore, a new mean transition mechanism is provided to govern the execution of the search methods (i.e., RSA and ROA) in the proposed HRSA to tackle the disequilibrium problem between the global and local search techniques. As a result, the search space may be efficiently expanded by incorporating new approaches from other places. These solutions inspire the proposed HRSA to use more robust solutions to obtain better results.

2.3.1. Initialization Phase

The optimization process in RSA begins with a set of candidate solutions (X) created stochastically, as indicated in Equation (20). The best obtained solution is deemed nearly optimal in each iteration.
$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,n-1} & x_{1,n} \\ x_{2,1} & \cdots & x_{2,j} & \cdots & \cdots & x_{2,n} \\ \vdots & \ddots & x_{i,j} & \cdots & \cdots & \vdots \\ x_{N-1,1} & \cdots & x_{N-1,j} & \cdots & \cdots & x_{N-1,n} \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,n-1} & x_{N,n} \end{bmatrix} \quad (20)$$
where X is a collection of the solutions created using Equation (21), x_{i,j} is the jth position of the ith solution, N is the number of solutions, and n is the dimension size.
$$x_{i,j} = rand \times (UB - LB) + LB, \quad j = 1, 2, \ldots, n \quad (21)$$
where r a n d is a random value and L B and U B denote the lower and upper bound, respectively.
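The initialization of Equations (20)–(21) can be sketched in a few lines; scalar bounds and a fixed seed are conveniences of this sketch:

```python
import numpy as np

def init_population(N, n, LB, UB, seed=None):
    """Stochastic initial population X of N candidate solutions with n
    dimensions, following Eqs. (20)-(21)."""
    rng = np.random.default_rng(seed)
    return rng.random((N, n)) * (UB - LB) + LB   # rand * (UB - LB) + LB
```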

2.3.2. The Proposed Mean Transition Mechanism (MTM)

The mean transition mechanism (MTM) devised in this section is given in Algorithm 1. This technique is used to control the search process and the transition between the RSA and the ROA. The transition from one search process to the next is quite delicate; it necessitates an effective way to switch the update procedures across the various approaches. The suggested MTM's fundamental concept is to switch the search techniques when the fitness function does not improve for I consecutive iterations (I = 5 here); the value of I can be tuned by experimentation.
Algorithm 1 The proposed Mean Transition Mechanism (MTM).
1: Initialize the TM parameter value (TM = 0)
2: sumFF = 0
3: for (t = 1 to T) do
4:     sumFF = sumFF + currentFF
5:     C = C + 1
6:     if (currentFF ≠ sumFF) then
7:         if (C > I) then
8:             TM = flip(TM)
9:             sumFF = 0
10:            C = 0
11:        end if
12:    end if
13: end for
TM is a binary variable utilized to switch the search process between the RSA and the ROA, sumFF is a variable used to calculate the mean fitness function value, currentFF is the current fitness function value, C is a counter, I is the number of iterations after which to switch when no improvement is observed (it can be adjusted by experiment), and flip is a function that flips the TM value from 0 to 1 or vice versa, as described in Algorithm 1.
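One reading of Algorithm 1, flipping TM whenever the best fitness has not improved for more than I consecutive iterations, can be sketched as follows; this is an interpretation of the pseudocode as described in the text, not a verbatim transcription:

```python
def mean_transition(fitness_history, I=5):
    """Track the binary flag TM over a run: flip it (switching between the
    RSA, TM = 0, and the ROA, TM = 1) whenever the best fitness value has
    not improved for more than I consecutive iterations."""
    TM, stall, best = 0, 0, float("inf")
    flags = []
    for f in fitness_history:
        if f < best:                 # improvement: reset the stall counter
            best, stall = f, 0
        else:
            stall += 1
            if stall > I:            # stagnation: switch the search operator
                TM, stall = 1 - TM, 0
        flags.append(TM)
    return flags
```

For a run that improves for three iterations and then stagnates, TM stays 0 during the improving stretch and flips to 1 once the stall exceeds I.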

2.3.3. The Detailed Process of the HRSA

This section goes through the proposed technique in detail. The main purpose of the proposed strategy is to obtain better results than those yielded by the existing approaches. We also want to avoid the defects presented by the original processes, such as local search issues, convergence constraints, and search balance concerns.
To conclude, the proposed HRSA starts with the creation of a random set of solutions. During the renewal process, the HRSA's search criteria explore possible positions around the current best solution. Each solution progresses to the next stage of the procedure. According to Figure 1, the suggested HRSA uses the Reptile Search Algorithm (RSA) and the Remora Optimization Algorithm (ROA). Each iteration updates and improves the potential solutions using one of these search methodologies.
The suggested HRSA’s search techniques are divided into two categories, as shown Figure 1: RSA and ROA. Following this, the RSA’s search techniques are divided into global and local search methods. There are two search techniques for each method: (1) at the global level, with high and walking tactics; and (2) hunting coordination and cooperation at the local level. Candidate solutions try to figure out what is going on in the search space t T 2 and seek to discover the near-optimal solution if t > T 2 . For the first section, if T M == 0, the search process of the RSA will be performed; otherwise, the search process of the ROA will be conducted. In the exploration step of the RSA, the first search strategy from the global search methods is performed when t T 4 , and the second strategy from the global search processes are accomplished when t ≤ 2 T 4 and t > T 4 . In the exploitation of the conventional RSA, the first search approach from the local methods is carried out when t ≤ 3 T 4 and t > 2 T 4 ; otherwise, the second search approach from the local methods is executed, when tT and t > 3 T 4 . Finally, the HRSA is finished when it reaches the end criterion. The complete rule of the recommended HRSA is shown in Figure 1.

3. Results and Discussion

In this section, the proposed HRSA method is evaluated using twenty-three common benchmark functions.

3.1. Experimental Settings

The results of the proposed HRSA method are compared with other state-of-the-art methods, including the Aquila Optimizer (AO), Dwarf Mongoose Optimization Algorithm (DMOA), Whale Optimization Algorithm (WOA), Sine Cosine Algorithm (SCA), Dragonfly Algorithm (DA), Grey Wolf Optimizer (GWO), Particle Swarm Optimizer (PSO), Ant Lion Optimizer (ALO), Reptile Search Algorithm (RSA), Remora Optimization Algorithm (ROA), and Arithmetic Optimization Algorithm (AOA). All the tested methods are tuned according to the parameters of their original papers, as shown in Table 1. Each algorithm is executed 30 times using 50 solutions and 1000 iterations.
Twenty-three benchmark problems are used with different characterizations, as shown in Table 2. These test functions are categorized into three main categories: unimodal, multimodal, and fixed-dimension multimodal functions. They are widely used in the domain of machine learning and optimization algorithms [30,31].
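The exact function set is listed in Table 2; as a hedged illustration, two functions that commonly appear in such twenty-three-function suites (often labeled F1 and F9) are the unimodal sphere and the multimodal Rastrigin functions:

```python
import numpy as np

def sphere(x):
    """Unimodal sphere function; global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal Rastrigin function; global minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))
```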

3.2. Benchmark Function: Experiments

In this section, the results of the comparative methods are presented in several formats to validate their performances. Figure 2 shows the qualitative analysis of the first thirteen benchmark functions (F1–F13) in terms of function topology, trajectory curves of the first dimension values, the average fitness function values, and the convergence curves of the original methods (i.e., RSA and ROA) and the proposed HRSA method.
It is clear from Figure 2 that the proposed method obtained promising results and can be an excellent alternative in this domain. The proposed method reaches near-optimal solutions for all the tested functions according to the function topologies. Additionally, the trajectory curves of the first dimension values show that the proposed method changes the values acceptably, which reflects the diversity of the position-updating mechanisms. The proposed method obtained the minimal values regarding the average fitness function values, and the convergence is always near the optimal area. Finally, the convergence curves show that the proposed method outperformed the original methods in all the tested problems.
Table 3 shows the performance of the proposed HRSA method in solving thirteen benchmark functions with different population sizes. This experiment was conducted to find the optimal number of solutions for the subsequent experiments. It is clear from Table 3 that the proposed HRSA obtained the best results when the population size was equal to 50, followed by a population of 45. According to the Friedman ranking test, when the population size is equal to 50, the proposed HRSA is ranked as the first-best method, followed by population sizes of 45, 35, 40, 30, 25, 20, 15, 10, and 5. Thus, the proposed method obtains better results when the number of solutions is larger.
Table 4 shows the results of the comparative methods and the proposed HRSA for solving thirteen benchmark function problems (F1–F13) when the dimension size is equal to 10. It is clear from this table that the proposed method obtained more promising results than the other comparative methods. This outcome proved the ability of the modified method to avoid the main problems of the original methods. According to the Wilcoxon signed-rank test, the proposed HRSA outperformed almost all the comparative methods in all the given problems. For example, the proposed HRSA significantly outperformed DMOA and DA in F1. Additionally, it significantly improved compared to ROA, PSO, GWO, DA, DMOA, and WOA in F4. According to the Friedman ranking test, the proposed HRSA is ranked as the first-best method, followed by the AO method as the second-best method, the RSA as the third-best method, the GWO as the fourth-best method, the AOA as the fifth-best method, the WOA as the sixth-best method, the PSO as the seventh-best method, the ROA as the eighth-best method, the DMOA as the ninth-best method, the SCA as the tenth-best method, the ALO as the eleventh-best method, and finally, the DA as the twelfth-best method. We concluded that the proposed method can find better solutions than the other methods' solutions.
Table 5 shows the results of the comparative methods and the proposed HRSA for solving thirteen benchmark function problems (F1–F13) when the dimension size is equal to 100. It is clear from this table that the proposed method achieved better results than the other comparative methods for these cases. This result demonstrated the improved method's capacity to overcome the original methods' major flaws. According to the Wilcoxon signed-rank test, the proposed HRSA outperformed almost all the comparative methods in all the given cases. For example, the proposed HRSA significantly outperformed ROA, PSO, GWO, SCA, and DMOA in F1. Additionally, it led to significant improvements compared to AOA, ROA, RSA, ALO, PSO, GWO, SCA, DMOA, and WOA in F10. According to the Friedman ranking test, the proposed HRSA is ranked as the first-best method, followed by the AO method as the second-best method, the RSA as the third-best method, the WOA as the fourth-best method, the GWO as the fifth-best method, the AOA as the sixth-best method, the PSO as the seventh-best method, the DMOA and the ROA as the joint eighth-best methods, the ALO as the tenth-best method, the DA as the eleventh-best method, and finally, the SCA as the twelfth-best method. We came to the conclusion that the proposed HRSA can find better solutions than previous methods when the dimensions are greater (i.e., Dim = 100).
Table 6 shows the results of the comparative methods and the proposed HRSA for the fixed-dimension functions (F14–F23). As shown in the table, the suggested technique outperformed the existing comparison methods in these circumstances. This result revealed the enhanced approach's ability to overcome the critical weaknesses of the original methods. According to the Wilcoxon signed-rank test, the proposed HRSA outperformed almost all the comparative methods in all the given cases. For example, the proposed HRSA significantly outperformed DMOA, SCA, DA, ALO, RSA, and ROA in F14. Additionally, it gave a significant improvement compared to AO, DMOA, and SCA in F23. According to the Friedman ranking test, the proposed HRSA is ranked as the first-best method, followed by the RSA method as the second-best method, the ROA and the GWO as the joint third-best methods, the ALO as the fifth-best method, the DMOA as the sixth-best method, the PSO as the seventh-best method, the AO as the eighth-best method, the DA as the ninth-best method, the WOA as the tenth-best method, the SCA as the eleventh-best method, and finally, the AOA as the twelfth-best method. We came to the conclusion that the proposed HRSA can find better solutions than previous methods when the dimensions are fixed.
The convergence behaviors of the comparative optimization methods on the evaluated twenty-three benchmark functions are shown in Figure 3. The optimization history and trajectory during the evolution processes are depicted in these diagrams. In comparison to the other approaches, it is evident that the proposed HRSA obtains optimal solutions in all of the studied cases and converges to the optimal solution smoothly. Furthermore, the observed convergence tendency is appealing and shows that the primary flaws of the original methods have been remedied, as seen in the figures. Figure 4 shows the distribution values of the given positions during the optimization process. It is clear that the proposed method generates higher distribution values during the improvement (optimization) process. This evidences that modifying the original method achieved new results that satisfy the proposed idea.

3.3. Data Clustering: Experiments

In this section, the proposed method is tested further for solving the data clustering problems.
Eight problems are used with different characterizations, shown in Table 7: the number of features, objects, and clusters. These datasets were taken from the UCI repository. They are widely used in the domain of machine learning and clustering algorithms. The results of the proposed method are compared with other methods, namely the Aquila Optimizer (AO) [32], Particle Swarm Optimizer (PSO) [33], Grey Wolf Optimizer (GWO) [34], African Vultures Optimization Algorithm (AVOA) [35], Whale Optimization Algorithm (WOA) [36], Reptile Search Algorithm (RSA) [27], Remora Optimization Algorithm (ROA) [28], and Arithmetic Optimization Algorithm (AOA) [37]. All the tested methods are tuned according to the parameters of their original papers. Each method was executed 30 times using 50 solutions and 1000 iterations.
Table 8 shows the results of the proposed HRSA and the other comparative clustering methods on the eight datasets. The table reports the worst, best, and average fitness values of the nine tested algorithms, together with the standard deviation, which indicates the variability of the results. The Wilcoxon signed-rank test was used to assess the significance of the improvements achieved by the proposed method over the comparative methods, and the Friedman ranking test was used to rank the tested methods over all the datasets.
In Table 8, the proposed HRSA achieved better results than the comparative methods on almost all the tested problems and is the best method overall, reaching optimal or near-optimal solutions on almost all the tested datasets. According to the Wilcoxon signed-rank test, the proposed HRSA significantly outperformed almost all the comparative methods; for example, it significantly outperformed ROA, GWO, PSO, and AO on the Cancer dataset, and achieved a significant improvement over ROA, WOA, GWO, PSO, and AO on the Glass dataset. According to the Friedman ranking test, the proposed HRSA ranked first, followed by PSO (second), GWO (third), AO (fourth), ROA (fifth), AOA and RSA (tied for sixth), WOA (eighth), and finally AVOA (ninth). The final results are summarized in Figure 5.
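For context, metaheuristic clustering methods such as those compared here typically encode a candidate solution as a set of k centroids and score it by the total distance from each object to its nearest centroid; lower is better. A minimal sketch of such an objective (an assumption for illustration; the paper's exact formulation is defined in its methods section):

```python
import math

def clustering_fitness(centroids, data):
    """Sum of Euclidean distances from each object to its nearest centroid."""
    total = 0.0
    for point in data:
        total += min(math.dist(point, c) for c in centroids)
    return total

# Toy 2-D dataset with two obvious clusters (hypothetical values).
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
good = [(0.05, 0.0), (5.05, 5.0)]  # centroids placed inside each cluster
bad = [(2.5, 2.5), (9.0, 9.0)]     # poorly placed centroids
print(clustering_fitness(good, data) < clustering_fitness(bad, data))  # True
```

Under this encoding, the optimizer searches the continuous space of centroid coordinates, so any of the compared metaheuristics can be plugged in unchanged.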
Figure 6 shows the convergence behavior of the comparative optimization algorithms on the eight tested data clustering problems. These figures show the optimization history and the trajectory during the evolution process. The proposed HRSA clearly reaches the optimal solutions in all the tested problems and converges smoothly compared with the other methods. Moreover, the observed convergence behavior is attractive and remedies the main weaknesses of the original methods, as shown in the figures.
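Convergence curves such as those in Figure 6 plot the best fitness found so far against the iteration count. A minimal sketch of recording such a history, using a random-search stand-in rather than the actual HRSA updates:

```python
import random

def track_convergence(obj, dim, bounds, pop_size, iterations, rng):
    """Return the best-so-far fitness after each iteration."""
    lo, hi = bounds
    best, history = float("inf"), []
    for _ in range(iterations):
        for _ in range(pop_size):
            cand = obj([rng.uniform(lo, hi) for _ in range(dim)])
            best = min(best, cand)
        history.append(best)  # best-so-far after this iteration
    return history

hist = track_convergence(lambda x: sum(v * v for v in x), dim=2,
                         bounds=(-10, 10), pop_size=20, iterations=50,
                         rng=random.Random(0))
# The curve is monotonically non-increasing by construction.
print(all(a >= b for a, b in zip(hist, hist[1:])))  # True
```

Because the history records the best-so-far value, a flat segment indicates stagnation and a steep drop indicates an exploration success, which is what the convergence plots are read for.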
Figure 7 shows the clustering plots for the eight tested datasets. This figure displays the clustered objects and their distribution over the feature space. The results clearly show that similar objects are grouped in the same region, and each region represents a coherent cluster; the recognizable clustering accuracy confirms the performance of the proposed HRSA. The execution time ranking is given in Figure 8, which shows that the proposed method has a competitive execution time compared with the other methods.
We conclude that the proposed method obtained promising results on almost all the tested problems and proved superior to the comparative methods. As mentioned above, the proposed HRSA has been tested on the most common standard benchmark functions, which are typically used to evaluate optimization methods, and the reported results clearly demonstrate its ability to handle different benchmark functions. Moreover, data clustering problems were used to evaluate the proposed method on a real-world application; the results showed that it obtained better results on almost all the tested problems and outperformed the comparative methods.

4. Conclusions

Various optimization methods have been employed in the literature to solve different machine learning problems, especially clustering problems. Data clustering is a complex data mining problem that partitions a given, usually massive, set of data objects into a predefined number of clusters. Optimization methods typically suffer from local optima and an imbalance between the search mechanisms (i.e., exploration and exploitation). This paper proposed a novel hybrid optimization method, called HRSA, which combines the original Reptile Search Algorithm (RSA) and the Remora Optimization Algorithm (ROA) and manages their search processes through a novel transition method. The proposed HRSA addresses the main weaknesses of the original methods and finds better solutions. It was tested on various complicated optimization problems: twenty-three benchmark test functions and eight data clustering problems. The results illustrated that HRSA performs significantly better than the original and comparative state-of-the-art methods: it outperformed all the comparative methods on the mathematical problems and obtained promising results on the clustering problems. Thus, HRSA shows remarkable efficacy when employed for various clustering problems.
In future work, the proposed method can be further studied to identify weaknesses that can be improved, and it can be hybridized with other metaheuristic components. Moreover, the proposed HRSA can be investigated for other problems such as text clustering, text classification, image enhancement, image segmentation, task scheduling in computing, parameter estimation, forecasting, advanced mathematical problems, prediction, industrial and engineering problems, constrained mechanical design, home energy management, wastewater quality parameters, and other real-world problems [38,39,40]. One limitation of this work is that the clustering experiments used benchmark datasets; future studies could employ real-world data. Additionally, in some cases, the proposed method requires more execution time.

Author Contributions

K.H.A.: Conceptualization, supervision, methodology, formal analysis, resources, data curation, and writing—original draft preparation. L.A.: Conceptualization, supervision, writing—review and editing, and project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available from the authors upon reasonable request.

Acknowledgments

The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4320277DSR02).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fiorini, L.; Cavallo, F.; Dario, P.; Eavis, A.; Caleb-Solly, P. Unsupervised machine learning for developing personalised behaviour models using activity data. Sensors 2017, 17, 1034. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Alswaitti, M.; Albughdadi, M.; Isa, N.A.M. Density-based particle swarm optimization algorithm for data clustering. Expert Syst. Appl. 2018, 91, 170–186. [Google Scholar] [CrossRef]
  3. Gao, K.; Khan, H.A.; Qu, W. Clustering with Missing Features: A Density-Based Approach. Symmetry 2022, 14, 60. [Google Scholar] [CrossRef]
  4. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. (CSUR) 1999, 31, 264–323. [Google Scholar] [CrossRef]
  5. Kushwaha, N.; Pant, M.; Kant, S.; Jain, V.K. Magnetic optimization algorithm for data clustering. Pattern Recognit. Lett. 2018, 115, 59–65. [Google Scholar] [CrossRef]
  6. Kant, S.; Ansari, I.A. An improved K means clustering with Atkinson index to classify liver patient dataset. Int. J. Syst. Assur. Eng. Manag. 2016, 7, 222–228. [Google Scholar] [CrossRef]
  7. Rout, R.; Parida, P.; Alotaibi, Y.; Alghamdi, S.; Khalaf, O.I. Skin lesion extraction using multiscale morphological local variance reconstruction based watershed transform and fast fuzzy C-means clustering. Symmetry 2021, 13, 2085. [Google Scholar] [CrossRef]
  8. Chander, S.; Vijaya, P.; Dhyani, P. Multi kernel and dynamic fractional lion optimization algorithm for data clustering. Alex. Eng. J. 2018, 57, 267–276. [Google Scholar] [CrossRef]
  9. George, G.; Parthiban, L. Multi objective hybridized firefly algorithm with group search optimization for data clustering. In Proceedings of the 2015 IEEE International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), Kolkata, India, 20–22 November 2015; pp. 125–130. [Google Scholar]
  10. Kumar, Y.; Kaur, A. Variants of bat algorithm for solving partitional clustering problems. Eng. Comput. 2021, 1–27. Available online: https://www.springerprofessional.de/en/variants-of-bat-algorithm-for-solving-partitional-clustering-pro/18956468 (accessed on 20 January 2022).
  11. Mittal, H.; Pandey, A.C.; Saraswat, M.; Kumar, S.; Pal, R.; Modwel, G. A comprehensive survey of image segmentation: Clustering methods, performance parameters, and benchmark datasets. Multimed. Tools Appl. 2021, 1–26. Available online: https://link.springer.com/article/10.1007/s11042-021-10594-9 (accessed on 20 January 2022).
  12. Yin, X.; Chen, S.; Hu, E.; Zhang, D. Semi-supervised clustering with metric learning: An adaptive kernel method. Pattern Recognit. 2010, 43, 1320–1333. [Google Scholar] [CrossRef] [Green Version]
  13. Koryshev, N.; Hodashinsky, I.; Shelupanov, A. Building a fuzzy classifier based on whale optimization algorithm to detect network intrusions. Symmetry 2021, 13, 1211. [Google Scholar] [CrossRef]
  14. Hussein, A.M.; Abdullah, R.; AbdulRashid, N. Flower Pollination Algorithm With Profile Technique For Multiple Sequence Alignment. In Proceedings of the 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), Amman, Jordan, 9–11 April 2019; pp. 571–576. [Google Scholar]
  15. Hussein, A.M.; Abdullah, R.; AbdulRashid, N.; Ali, A.N.B. Protein multiple sequence alignment by basic flower pollination algorithm. In Proceedings of the 2017 8th International Conference on Information Technology (ICIT), Amman, Jordan, 17–18 May 2017; pp. 833–838. [Google Scholar]
  16. Das, P.; Das, D.K.; Dey, S. A new class topper optimization algorithm with an application to data clustering. IEEE Trans. Emerg. Top. Comput. 2018, 8, 948–959. [Google Scholar] [CrossRef]
  17. Alam, S.; Dobbie, G.; Riddle, P. An evolutionary particle swarm optimization algorithm for data clustering. In Proceedings of the 2008 IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, 21–23 September 2008; pp. 1–6. [Google Scholar]
  18. Agbaje, M.B.; Ezugwu, A.E.; Els, R. Automatic data clustering using hybrid firefly particle swarm optimization algorithm. IEEE Access 2019, 7, 184963–184984. [Google Scholar] [CrossRef]
  19. Rahnema, N.; Gharehchopogh, F.S. An improved artificial bee colony algorithm based on whale optimization algorithm for data clustering. Multimed. Tools Appl. 2020, 79, 32169–32194. [Google Scholar] [CrossRef]
  20. Bijari, K.; Zare, H.; Veisi, H.; Bobarshad, H. Memory-enriched big bang–big crunch optimization algorithm for data clustering. Neural Comput. Appl. 2018, 29, 111–121. [Google Scholar] [CrossRef] [Green Version]
  21. Kao, Y.T.; Zahara, E.; Kao, I.W. A hybridized approach to data clustering. Expert Syst. Appl. 2008, 34, 1754–1762. [Google Scholar] [CrossRef]
  22. Mahmoudi, M.R.; Akbarzadeh, H.; Parvin, H.; Nejatian, S.; Rezaie, V.; Alinejad-Rokny, H. Consensus function based on cluster-wise two level clustering. Artif. Intell. Rev. 2021, 54, 639–665. [Google Scholar] [CrossRef]
  23. Mahapatra, S.; Dey, B.; Raj, S. A novel ameliorated Harris hawk optimizer for solving complex engineering optimization problems. Int. J. Intell. Syst. 2021, 36, 7641–7681. [Google Scholar] [CrossRef]
  24. Raj, S.; Bhattacharyya, B. Optimal placement of TCSC and SVC for reactive power planning using Whale optimization algorithm. Swarm Evol. Comput. 2018, 40, 131–143. [Google Scholar] [CrossRef]
  25. Shaikh, M.S.; Hua, C.; Raj, S.; Kumar, S.; Hassan, M.; Ansari, M.M.; Jatoi, M.A. Optimal parameter estimation of 1-phase and 3-phase transmission line for various bundle conductor’s using modified whale optimization algorithm. Int. J. Electr. Power Energy Syst. 2022, 138, 107893. [Google Scholar] [CrossRef]
  26. Shekarappa, G.S.; Mahapatra, S.; Raj, S. Voltage constrained reactive power planning problem for reactive loading variation using hybrid harris hawk particle swarm optimizer. Electr. Power Components Syst. 2021, 49, 421–435. [Google Scholar] [CrossRef]
  27. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  28. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  29. Shinawi, A.E.; Ibrahim, R.A.; Abualigah, L.; Zelenakova, M.; Elaziz, M.A. Enhanced Adaptive Neuro-Fuzzy Inference System Using Reptile Search Algorithm for Relating Swelling Potentiality Using Index Geotechnical Properties: A Case Study at El Sherouk City, Egypt. Mathematics 2021, 9, 3295. [Google Scholar] [CrossRef]
  30. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  31. Oyelade, O.N.; Ezugwu, A.E.; Mohamed, T.I.; Abualigah, L. Ebola Optimization Search Algorithm: A new nature-inspired metaheuristic algorithm. IEEE Access 2022. [Google Scholar] [CrossRef]
  32. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  33. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  34. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  35. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  36. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  37. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  38. Gupta, S.; Abderazek, H.; Yıldız, B.S.; Yildiz, A.R.; Mirjalili, S.; Sait, S.M. Comparison of metaheuristic optimization algorithms for solving constrained mechanical design optimization problems. Expert Syst. Appl. 2021, 183, 115351. [Google Scholar] [CrossRef]
  39. Wang, X.; Mao, X.; Khodaei, H. A multi-objective home energy management system based on internet of things and optimization algorithms. J. Build. Eng. 2021, 33, 101603. [Google Scholar] [CrossRef]
  40. Abunama, T.; Ansari, M.; Awolusi, O.O.; Gani, K.M.; Kumari, S.; Bux, F. Fuzzy inference optimization algorithms for enhancing the modelling accuracy of wastewater quality parameters. J. Environ. Manag. 2021, 293, 112862. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the proposed HRSA.
Figure 2. Qualitative analysis of F1–F13.
Figure 3. Convergence curves of HRSA and other methods for F1–F23.
Figure 4. The distribution values of the given position during the optimization process.
Figure 5. Execution time ranking for the benchmark functions.
Figure 6. Convergence behavior of the comparative algorithms using the tested data clustering problems.
Figure 7. Clustering plot images using 8 datasets.
Figure 8. Execution time ranking for the data clustering problems.
Table 1. Parameter values of the tested algorithms.
No. | Algorithm | Parameter | Value
1 | AO | α | 0.1
  |  | δ | 0.1
2 | DMOA | - | -
3 | WOA | α | Decreased from 2 to 0
4 | SCA | α | 0.05
5 | DA | w | 0.2–0.9
  |  | s, a, and c | 0.1
  |  | f and e | 1
6 | GWO | Convergence parameter (a) | Linear reduction from 2 to 0
7 | PSO | Topology | Fully connected
  |  | Cognitive and social constants (C1, C2) | 2, 2
  |  | Inertia weight | Linear reduction from 0.9 to 0.1
  |  | Velocity limit | 10% of dimension range
8 | ALO | I ratio | 10^w
  |  | w | 2–6
9 | RSA | α | 0.1
  |  | β | 0.1
10 | ROA | - | -
11 | AOA | α | 5
  |  | μ | 0.5
Table 2. Details of the tested benchmark functions.
Function | Description | Dimensions | Range | $f_{\min}$
F1 | $f(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
F2 | $f(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0
F3 | $f(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
F4 | $f(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
F5 | $f(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0
F6 | $f(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30 | [−100, 100] | 0
F7 | $f(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
F8 | $f(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829 × n
F9 | $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
F10 | $f(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 10, 50, 100, 500 | [−32, 32] | 0
F11 | $f(x)=1+\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)$ | 10, 50, 100, 500 | [−600, 600] | 0
F12 | $f(x)=\tfrac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 10, 50, 100, 500 | [−50, 50] | 0
F13 | $f(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 10, 50, 100, 500 | [−50, 50] | 0
F14 | $f(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
F15 | $f(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
F16 | $f(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
F17 | $f(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
F18 | $f(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3
F19 | $f(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 2] | −3.86
F20 | $f(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
F21 | $f(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
F22 | $f(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
F23 | $f(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
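Two of the benchmarks in Table 2, the unimodal sphere function (F1) and the multimodal Ackley function (F10), can be implemented directly from their definitions; both have a global minimum of 0 at the origin:

```python
import math

def f1_sphere(x):
    """F1: sum of squares, global minimum 0 at x = 0."""
    return sum(v * v for v in x)

def f10_ackley(x):
    """F10: Ackley function, global minimum 0 at x = 0."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

print(f1_sphere([0.0] * 30))               # 0.0
print(round(f10_ackley([0.0] * 30), 12))   # 0.0
```

F1 has a single smooth basin and mainly probes exploitation, whereas F10's many shallow local minima probe the exploration side of an optimizer, which is why both families appear in the benchmark suite.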
Table 3. The results of several population sizes on F1–F13.
Function | Measure | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 (population size)
F1Worst2.4747 × 10 4 6.4678E-1362.8320E-1772.2056E-2085.9945E-2055.8746E-2296.0049E-2271.3563E-2303.3278E-2541.3704E-266
Average6.1871E-1051.6169E-1367.0802E-1785.5316E-2091.4989E-2051.4686E-2291.5012E-2273.3908E-2318.3559E-2556.0726E-267
Best1.7132E-1222.3073E-1811.0868E-1882.5028E-2303.0316E-2291.1620E-2501.0445E-2542.0923E-2604.1230E-2681.7295E-268
STD1.2373E-1043.2339E-1360.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
p-value3.5589E-013.5592E-010.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00NaN
h001111111NaN
Rank10986745321
F2Worst6.3944E-475.2868E-674.0457E-801.1923E-881.7276E-1006.8430E-1101.2810E-1001.8795E-1242.0288E-1231.4249E-128
Average1.5986E-471.3217E-671.0114E-802.9806E-894.3190E-1011.7115E-1103.2025E-1014.7064E-1255.2010E-1243.5621E-129
Best3.3103E-604.1024E-892.7546E-941.0822E-1037.0032E-1182.6356E-1203.0698E-1282.1073E-1313.8662E-1478.3559E-146
STD3.1972E-472.6434E-672.0229E-805.9613E-898.6380E-1013.4210E-1106.4050E-1019.3922E-1251.0061E-1237.1243E-129
p-value3.5592E-013.5592E-013.5592E-013.5592E-013.5592E-013.5567E-013.5592E-013.5497E-013.4104E-01NaN
h000000000NaN
Rank10987645231
F3Worst1.1542E-952.8253E-1209.8997E-1571.8030E-1782.3521E-1871.3326E-2117.9637E-2132.8098E-2222.8110E-2331.6957E-242
Average2.8854E-967.0633E-1212.4749E-1574.5380E-1795.8803E-1883.3315E-2121.9910E-2137.4675E-2237.0275E-2344.2392E-243
Best3.0414E-1147.5406E-1521.8961E-1822.1924E-2043.3927E-2331.0810E-2461.7336E-2441.0691E-2484.9841E-2701.5279E-254
STD5.7708E-961.4127E-1204.9498E-1570.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
p-value3.5592E-013.5592E-013.5592E-010.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00NaN
h000111111NaN
Rank10987654321
F4Worst6.7596E-468.2864E-741.9127E-751.7531E-931.1163E-1052.6519E-1161.4328E-1202.4350E-1192.1190E-1261.7474E-105
Average1.6904E-462.0807E-744.7818E-764.3827E-942.7908E-1066.6381E-1175.5115E-1216.0987E-1205.8450E-1274.3685E-106
Best4.7321E-656.0592E-864.5588E-1015.9253E-1101.2821E-1211.7840E-1268.5900E-1325.2160E-1296.2314E-1442.3244E-136
STD3.3795E-464.1371E-749.5637E-768.7654E-945.5814E-1061.3254E-1166.9125E-1211.2168E-1191.0282E-1268.7370E-106
p-value3.5575E-013.5330E-013.5592E-013.5592E-017.7114E-013.5592E-013.5592E-013.5592E-013.5592E-01NaN
h000000000NaN
Rank10987542316
F5Worst7.3201E-014.2782E+008.8137E-022.0246E-011.1176E+006.5509E-014.8630E-011.8790E-016.5660E-034.7512E-03
Average3.7175E-012.1207E+004.6264E-021.1134E-016.2657E-011.7565E-012.4688E-017.6736E-024.1207E-032.7667E-03
Best2.6965E-023.5751E-022.1097E-032.5725E-022.7785E-053.2879E-039.0213E-039.5005E-042.2449E-041.0521E-03
STD3.9810E-012.1511E+004.4485E-029.0553E-025.5939E-013.2008E-012.4001E-019.0754E-022.7340E-031.5925E-03
p-value1.1321E-019.6456E-029.8468E-025.3473E-026.7238E-023.2152E-018.8159E-021.5426E-014.2490E-01NaN
h000000000NaN
Rank81035967421
F6Worst4.7030E-024.2363E-023.4359E-028.7423E-045.6272E-042.1748E-031.2425E-041.3411E-052.5108E-041.2028E-03
Average1.4920E-021.9973E-022.0603E-023.6704E-043.1818E-045.7056E-043.5022E-058.5049E-067.3295E-053.1146E-04
Best1.3919E-042.3726E-035.1250E-051.3368E-044.9138E-052.1389E-064.5662E-071.5411E-091.0425E-054.5718E-06
STD2.2106E-021.6781E-021.5420E-023.4119E-042.2059E-041.0705E-035.9621E-055.8732E-061.1857E-045.9438E-04
p-value2.3457E-015.7698E-023.9066E-028.7648E-019.8378E-016.8688E-013.9040E-013.4736E-014.6182E-01NaN
h001000000NaN
Rank89106572134
F7Worst3.7601E-031.8822E-034.1801E-048.7768E-049.9437E-043.0691E-043.7996E-046.6713E-044.1421E-043.8482E-04
Average1.5895E-038.3426E-043.4047E-044.2797E-044.0431E-041.0489E-041.5896E-042.6070E-042.1306E-041.6105E-04
Best4.3482E-041.3493E-042.2475E-042.0057E-041.1769E-043.2972E-064.2216E-062.5180E-065.6248E-051.2834E-05
STD1.5014E-037.9204E-048.9592E-053.0489E-044.0016E-041.3758E-041.6578E-043.0787E-041.5096E-041.8018E-04
p-value1.0776E-011.4847E-011.2481E-011.8244E-013.1004E-016.3793E-019.8696E-015.9658E-016.7359E-01NaN
h000000000NaN
Rank10968712543
F8Worst−2.9767E+03−4.1011E+03−4.1441E+03−4.1849E+03−4.1786E+03−3.2613E+03−4.1882E+03−4.1859E+03−4.1801E+03−4.1859E+03
Average−3.2853E+03−4.1648E+03−4.1761E+03−4.1869E+03−4.1842E+03−3.9426E+03−4.1889E+03−4.1880E+03−4.1861E+03−4.1884E+03
Best−4.1887E+03−4.1891E+03−4.1896E+03−4.1881E+03−4.1886E+03−4.1881E+03−4.1896E+03−4.1894E+03−4.1898E+03−4.1897E+03
STD6.0233E+024.2628E+012.1419E+011.4465E+004.9729E+004.5495E+025.9991E-011.5980E+004.2186E+001.6695E+00
p-value2.4045E-023.1031E-012.9508E-012.2261E-011.5675E-013.2146E-015.6735E-017.4173E-013.5032E-01NaN
h100000000NaN
Rank10874691352
F9Worst0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
Average0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
Best0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
STD0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
p-valueNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
hNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
Rank1111111111
F10Worst8.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-16
Average8.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-16
Best8.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-168.8818E-16
STD0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
p-valueNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
hNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
Rank1111111111
F11Worst0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
Average0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
Best0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
STD0.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
p-valueNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
hNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
Rank1111111111
F12Worst4.0854E-022.2289E-041.3389E-036.8061E-045.6103E-041.7179E-041.0947E-047.3013E-045.0295E-052.0846E-05
Average1.7935E-026.9673E-056.5105E-042.9193E-042.0389E-046.2899E-054.1642E-052.6116E-043.0538E-055.6652E-06
Best4.3072E-037.2701E-062.2544E-061.9810E-057.6314E-073.2039E-063.2509E-075.7498E-067.3791E-062.7871E-07
STD1.7213E-021.0248E-047.0379E-043.0985E-042.5034E-047.5121E-054.7190E-053.4009E-041.7759E-051.0124E-05
p-value8.2358E-022.6020E-011.1637E-011.1430E-011.6466E-011.8175E-011.8660E-011.8382E-015.0921E-021.0000E+00
h0000000000
Rank10598643721
F13Worst7.4368E-021.5589E-027.8932E-042.0129E-032.2940E-032.1774E-042.3244E-042.0152E-048.1074E-041.0923E-04
Average2.3931E-023.9380E-032.2898E-048.3051E-048.9919E-041.3918E-048.5703E-051.1051E-043.0715E-047.3491E-05
Best2.4372E-034.2232E-062.0465E-051.4455E-043.8590E-056.9042E-059.0061E-063.9783E-051.9733E-055.4162E-07
STD3.4057E-027.7679E-033.7411E-048.8825E-041.0825E-036.1273E-051.0189E-047.0518E-053.4733E-045.0235E-05
p-value2.1075E-013.5817E-014.4154E-011.3969E-011.7838E-011.4839E-018.3690E-014.2532E-012.3136E-01NaN
h000000000NaN
Rank10957842361
Mean ranking7.6154E+006.8462E+005.7692E+005.2308E+005.2308E+003.9231E+002.7692E+002.8462E+002.5385E+001.8462E+00
Final ranking10986653421
Table 4. The results of the comparative methods and the proposed HRSA methods for F1–F13, Dim = 10.
Fun | Measure | AO | DMOA | WOA | SCA | DA | GWO | PSO | ALO | RSA | ROA | AOA | HRSA
F1Worst1.1571E-1002.1046E-097.6673E-632.6611E-092.6788E+011.1340E-472.2231E-172.2742E-079.5757E-566.8209E-115.2731E-702.0773E-108
Average2.8927E-1011.1143E-091.9168E-637.2690E-101.7523E+013.2797E-481.0342E-179.2064E-082.9793E-562.3922E-111.3183E-705.1931E-109
Best3.0073E-1165.0980E-101.1259E-724.4844E-138.6835E+003.8585E-515.8258E-201.8748E-086.6334E-601.9429E-123.2582E-1523.4580E-138
STD5.7855E-1016.8881E-103.8336E-631.2935E-098.6400E+005.3931E-481.1728E-179.3965E-084.5319E-563.0232E-112.6365E-701.0386E-108
p-value3.5592E-011.7790E-023.5592E-013.0400E-016.6787E-032.6956E-011.2823E-019.7758E-022.3659E-011.6461E-013.5592E-01NaN
h01001000000NaN
Rank210491267115831
F2Worst6.3194E-601.6949E-027.4584E-523.5524E-092.1903E+001.6706E-273.9759E-105.6065E-011.4412E-329.6848E-090.0000E+005.3364E-49
Average1.5799E-604.2476E-033.3970E-521.3483E-091.4024E+006.0495E-282.3544E-101.4063E-016.9571E-336.0154E-090.0000E+001.4410E-49
Best7.3417E-751.1089E-055.7800E-541.5327E-112.4793E-011.6624E-284.9298E-111.5065E-052.1170E-331.5222E-090.0000E+001.5588E-55
STD3.1597E-608.4673E-033.7501E-521.6440E-098.3182E-017.1418E-281.4544E-102.8002E-015.7688E-333.4094E-090.0000E+002.6047E-49
p-value3.1092E-013.5445E-013.1197E-011.5207E-011.5007E-021.4118E-011.7740E-023.5393E-015.2432E-021.2387E-023.1092E-01NaN
h00001010010NaN
Rank210381267115914
F3Worst2.5481E-1048.6720E-022.6247E+037.8583E-025.3797E+034.1665E-226.3364E-053.6192E-015.3170E-253.6758E-025.8269E-685.7223E-116
Average6.3723E-1052.1726E-029.6402E+023.7630E-022.6504E+032.2794E-221.9621E-052.3056E-011.3293E-251.4828E-021.4571E-681.4306E-116
Best7.0353E-1121.4915E-051.5954E-053.1383E-062.5278E+011.3713E-231.8828E-068.6392E-022.7157E-329.8927E-046.3905E-1081.7975E-141
STD1.2739E-1044.3330E-021.1759E+034.2007E-023.0089E+031.7380E-222.9324E-051.4879E-012.6585E-251.5408E-022.9132E-682.8611E-116
p-value3.5574E-013.5467E-011.5221E-011.2338E-011.2859E-013.9423E-022.2931E-012.1140E-023.5592E-011.0260E-013.5578E-01NaN
h00000101000NaN
Rank281191256104731
F4Worst4.3892E-612.9025E-051.6160E+016.5799E-035.9530E+002.8186E-157.4656E-049.9430E-025.3933E-204.2407E+008.1594E-183.0089E-51
| Fun | Measure | AO | DMOA | WOA | SCA | DA | GWO | PSO | ALO | RSA | ROA | AOA | HRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F4 | Average | 1.0973E-61 | 2.3631E-05 | 9.1448E+00 | 3.2700E-03 | 4.1841E+00 | 1.9128E-15 | 3.9903E-04 | 3.7199E-02 | 2.5279E-20 | 2.7561E+00 | 2.0398E-18 | 7.7646E-52 |
| | Best | 9.6316E-79 | 1.6075E-05 | 1.3178E-01 | 2.3195E-04 | 2.7084E+00 | 9.4618E-16 | 7.2129E-05 | 2.7512E-03 | 1.3331E-21 | 1.4819E+00 | 2.2242E-67 | 6.9049E-60 |
| | STD | 2.1946E-61 | 6.2330E-06 | 6.6791E+00 | 3.1005E-03 | 1.6482E+00 | 1.0353E-15 | 3.1951E-04 | 4.3576E-02 | 2.7085E-20 | 1.1532E+00 | 4.0797E-18 | 1.4890E-51 |
| | p-value | 3.3717E-01 | 2.7356E-04 | 3.3811E-02 | 7.9449E-02 | 2.2717E-03 | 1.0148E-02 | 4.6670E-02 | 1.3863E-01 | 1.1120E-01 | 3.0628E-03 | 3.5592E-01 | NaN |
| | h | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 1 | 6 | 12 | 8 | 11 | 5 | 7 | 9 | 3 | 10 | 4 | 2 |
| F5 | Worst | 1.0090E-02 | 6.8476E+02 | 7.4820E+00 | 8.7223E+00 | 7.1798E+02 | 7.1956E+00 | 5.8258E+00 | 3.7504E+02 | 5.8100E+00 | 1.1307E+01 | 8.6967E+00 | 1.5772E+00 |
| | Average | 2.9026E-03 | 1.9907E+02 | 7.1510E+00 | 8.1759E+00 | 2.6870E+02 | 6.6393E+00 | 5.2538E+00 | 1.8256E+02 | 5.5321E+00 | 5.9579E+00 | 8.4321E+00 | 8.0915E-01 |
| | Best | 2.3533E-04 | 8.2388E+00 | 6.7911E+00 | 7.8712E+00 | 7.2106E+01 | 6.1326E+00 | 4.5532E+00 | 5.7113E+00 | 5.2342E+00 | 2.0689E+00 | 8.1501E+00 | 8.5112E-02 |
| | STD | 4.7968E-03 | 3.2628E+02 | 2.8261E-01 | 3.7457E-01 | 3.0211E+02 | 5.3406E-01 | 5.3658E-01 | 2.0333E+02 | 2.3553E-01 | 3.9620E+00 | 2.2871E-01 | 6.8245E-01 |
| | p-value | 5.6073E-02 | 2.6991E-01 | 2.4973E-06 | 1.4062E-06 | 1.2652E-01 | 1.0441E-05 | 5.0597E-05 | 1.2404E-01 | 1.2291E-05 | 4.2831E-02 | 7.2168E-07 | NaN |
| | h | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | NaN |
| | Rank | 1 | 11 | 7 | 8 | 12 | 6 | 3 | 10 | 4 | 5 | 9 | 2 |
| F6 | Worst | 3.0958E-04 | 2.0121E-09 | 2.2052E-01 | 7.9699E-01 | 1.0822E+01 | 2.5028E-01 | 1.5409E-02 | 8.8196E-08 | 1.3446E-12 | 2.1400E-10 | 6.1615E-01 | 3.4780E-17 |
| | Average | 1.2661E-04 | 1.3904E-09 | 6.2350E-02 | 5.9809E-01 | 5.2641E+00 | 6.2573E-02 | 4.5459E-03 | 3.1304E-08 | 3.5418E-13 | 6.8957E-11 | 4.1272E-01 | 1.3592E-17 |
| | Best | 1.8530E-06 | 8.1527E-10 | 1.8655E-03 | 4.7048E-01 | 1.1840E-01 | 1.3983E-06 | 1.3847E-04 | 8.7919E-09 | 1.8773E-15 | 1.4982E-12 | 1.9619E-01 | 6.5669E-20 |
| | STD | 1.5073E-04 | 5.8512E-10 | 1.0587E-01 | 1.4410E-01 | 5.3834E+00 | 1.2514E-01 | 7.2704E-03 | 3.8019E-08 | 6.6091E-13 | 9.8755E-11 | 1.7518E-01 | 1.4849E-17 |
| | p-value | 2.6986E-01 | 2.5767E-01 | 3.1779E-01 | 1.7408E-04 | 9.8511E-02 | 3.9025E-01 | 2.5767E-01 | 2.5768E-01 | 2.5767E-01 | 2.5767E-01 | 3.4818E-03 | NaN |
| | h | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | NaN |
| | Rank | 6 | 4 | 8 | 11 | 12 | 9 | 7 | 5 | 2 | 3 | 10 | 1 |
| F7 | Worst | 5.8742E-03 | 1.7491E-02 | 3.2814E-03 | 2.8844E-03 | 1.3477E-01 | 5.4463E-04 | 8.4784E-03 | 1.8516E-01 | 7.7024E-04 | 4.6724E-02 | 9.2704E-04 | 2.8372E-04 |
| | Average | 3.4183E-03 | 1.0077E-02 | 2.0021E-03 | 1.6652E-03 | 6.3525E-02 | 4.1147E-04 | 6.1809E-03 | 1.0811E-01 | 4.7066E-04 | 2.6367E-02 | 4.7710E-04 | 1.3225E-04 |
| | Best | 5.0335E-04 | 1.3412E-03 | 4.7093E-04 | 6.0223E-04 | 6.4245E-03 | 2.8956E-04 | 4.3349E-03 | 7.1808E-02 | 3.4456E-04 | 1.3936E-02 | 1.3284E-04 | 1.1502E-05 |
| | STD | 2.2207E-03 | 6.9666E-03 | 1.4820E-03 | 9.9504E-04 | 5.9631E-02 | 1.3949E-04 | 1.8209E-03 | 5.1922E-02 | 2.0081E-04 | 1.4979E-02 | 3.6519E-04 | 1.1275E-04 |
| | p-value | 2.5427E-02 | 1.1841E-01 | 3.2958E-01 | 1.9972E-01 | 9.0584E-02 | 3.5451E-02 | 1.0270E-01 | 6.8887E-03 | 3.8341E-02 | 2.3070E-02 | 3.9919E-02 | NaN |
| | h | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 7 | 9 | 6 | 5 | 11 | 2 | 8 | 12 | 3 | 10 | 4 | 1 |
| F8 | Worst | −1.9843E+03 | −2.4441E+03 | −2.2903E+03 | −1.8464E+03 | −2.6001E+03 | −1.9888E+03 | −1.8626E+03 | −1.9257E+03 | −2.7610E+03 | −2.5286E+03 | −2.1284E+03 | −1.7313E+03 |
| | Average | −2.3457E+03 | −2.7747E+03 | −2.8848E+03 | −1.9811E+03 | −2.7868E+03 | −2.3539E+03 | −2.1875E+03 | −2.1323E+03 | −3.2678E+03 | −3.0607E+03 | −2.2242E+03 | −2.9708E+03 |
| | Best | −3.0047E+03 | −3.2028E+03 | −3.7732E+03 | −2.1068E+03 | −2.9924E+03 | −2.8849E+03 | −2.5174E+03 | −2.3933E+03 | −3.7146E+03 | −3.3592E+03 | −2.3744E+03 | −4.1376E+03 |
| | STD | 4.5283E+02 | 3.6950E+02 | 6.6467E+02 | 1.3411E+02 | 1.6730E+02 | 3.9855E+02 | 2.8876E+02 | 2.4258E+02 | 4.1710E+02 | 3.9285E+02 | 1.0526E+02 | 1.2856E+03 |
| | p-value | 3.9442E-01 | 7.7926E-01 | 9.0927E-01 | 1.7657E-01 | 7.8607E-01 | 3.9466E-01 | 2.7939E-01 | 2.4719E-01 | 6.7574E-01 | 8.9796E-01 | 2.9101E-01 | NaN |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
| | Rank | 8 | 6 | 4 | 12 | 5 | 7 | 10 | 11 | 1 | 2 | 9 | 3 |
| F9 | Worst | 0.0000E+00 | 4.2783E+01 | 0.0000E+00 | 4.2364E-02 | 3.8089E+01 | 4.4666E+00 | 1.2955E+01 | 5.2733E+01 | 1.9899E+00 | 1.8904E+01 | 0.0000E+00 | 0.0000E+00 |
| | Average | 0.0000E+00 | 2.5869E+01 | 0.0000E+00 | 1.3584E-02 | 2.9275E+01 | 1.1166E+00 | 8.4629E+00 | 3.3331E+01 | 4.9748E-01 | 1.2437E+01 | 0.0000E+00 | 0.0000E+00 |
| | Best | 0.0000E+00 | 1.1940E+01 | 0.0000E+00 | 1.7081E-11 | 1.9416E+01 | 0.0000E+00 | 5.9724E+00 | 2.3879E+01 | 0.0000E+00 | 6.9647E+00 | 0.0000E+00 | 0.0000E+00 |
| | STD | 0.0000E+00 | 1.4869E+01 | 0.0000E+00 | 1.9999E-02 | 7.6693E+00 | 2.2333E+00 | 3.1025E+00 | 1.3112E+01 | 9.9496E-01 | 5.8862E+00 | 0.0000E+00 | 0.0000E+00 |
| | p-value | NaN | 1.3149E-02 | NaN | 2.2319E-01 | 2.6354E-04 | 3.5592E-01 | 1.5792E-03 | 2.2563E-03 | 3.5592E-01 | 5.5259E-03 | NaN | NaN |
| | h | NaN | 1 | NaN | 0 | 1 | 0 | 1 | 1 | 0 | 1 | NaN | NaN |
| | Rank | 1 | 10 | 1 | 5 | 11 | 7 | 8 | 12 | 6 | 9 | 1 | 1 |
| F10 | Worst | 8.8818E-16 | 2.3168E+00 | 4.4409E-15 | 8.4300E-01 | 9.6530E+00 | 1.5099E-14 | 7.5055E-10 | 1.1551E+00 | 7.9936E-15 | 1.6462E+00 | 8.8818E-16 | 8.8818E-16 |
| | Average | 8.8818E-16 | 1.1568E+00 | 4.4409E-15 | 2.1075E-01 | 4.7552E+00 | 9.7700E-15 | 4.3526E-10 | 2.8891E-01 | 5.3291E-15 | 7.0034E-01 | 8.8818E-16 | 8.8818E-16 |
| | Best | 8.8818E-16 | 1.0766E-05 | 4.4409E-15 | 3.7578E-08 | 1.2042E+00 | 7.9936E-15 | 1.6850E-10 | 1.3290E-04 | 4.4409E-15 | 1.8206E-06 | 8.8818E-16 | 8.8818E-16 |
| | STD | 0.0000E+00 | 9.4585E-01 | 0.0000E+00 | 4.2150E-01 | 3.6264E+00 | 3.5527E-15 | 2.6089E-10 | 5.7749E-01 | 1.7764E-15 | 8.3317E-01 | 0.0000E+00 | 0.0000E+00 |
| | p-value | NaN | 5.0059E-02 | 0.0000E+00 | 3.5592E-01 | 3.9450E-02 | 2.4523E-03 | 1.5676E-02 | 3.5566E-01 | 2.4523E-03 | 1.4373E-01 | NaN | NaN |
| | h | NaN | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | NaN | NaN |
| | Rank | 1 | 11 | 4 | 8 | 12 | 6 | 7 | 9 | 5 | 10 | 1 | 1 |
| F11 | Worst | 0.0000E+00 | 2.9276E-01 | 3.5207E-01 | 3.2525E-01 | 1.6235E+00 | 7.9759E-02 | 3.8586E-01 | 3.3696E-01 | 3.6867E-02 | 2.6592E-01 | 4.6191E-01 | 0.0000E+00 |
| | Average | 0.0000E+00 | 2.1408E-01 | 1.1366E-01 | 1.2380E-01 | 7.0711E-01 | 2.9156E-02 | 2.0913E-01 | 2.3681E-01 | 9.2168E-03 | 1.7540E-01 | 1.1548E-01 | 0.0000E+00 |
| | Best | 0.0000E+00 | 1.5241E-01 | 0.0000E+00 | 2.0962E-02 | 2.8492E-01 | 0.0000E+00 | 3.6894E-02 | 1.0578E-01 | 0.0000E+00 | 1.1568E-01 | 0.0000E+00 | 0.0000E+00 |
| | STD | 0.0000E+00 | 7.2479E-02 | 1.6613E-01 | 1.3697E-01 | 6.1877E-01 | 3.7949E-02 | 1.7441E-01 | 9.6929E-02 | 1.8434E-02 | 6.3894E-02 | 2.3096E-01 | 0.0000E+00 |
| | p-value | NaN | 1.0464E-03 | 2.2022E-01 | 1.2064E-01 | 6.2324E-02 | 1.7530E-01 | 5.3428E-02 | 2.7485E-03 | 3.5592E-01 | 1.5288E-03 | 3.5591E-01 | NaN |
| | h | NaN | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | NaN |
| | Rank | 1 | 10 | 5 | 7 | 12 | 4 | 9 | 11 | 3 | 8 | 6 | 1 |
| F12 | Worst | 1.3494E-05 | 3.1771E-01 | 2.5782E-02 | 1.4964E-01 | 3.1991E+00 | 3.9305E-02 | 5.2225E-21 | 3.0893E+00 | 2.9389E-14 | 3.1101E-01 | 3.1390E-01 | 4.9570E-03 |
| | Average | 7.2659E-06 | 7.9520E-02 | 1.3371E-02 | 1.1590E-01 | 2.0861E+00 | 2.9329E-02 | 1.3975E-21 | 2.8577E+00 | 1.5102E-14 | 2.3402E-01 | 2.7789E-01 | 2.9149E-03 |
| | Best | 1.5855E-06 | 2.5293E-09 | 7.7458E-04 | 7.7385E-02 | 1.0996E+00 | 1.9865E-02 | 5.4669E-23 | 2.4977E+00 | 1.2207E-17 | 3.0462E-03 | 2.5509E-01 | 4.1431E-04 |
| | STD | 5.1084E-06 | 1.5879E-01 | 1.3032E-02 | 3.7574E-02 | 1.0350E+00 | 1.0893E-02 | 2.5510E-21 | 2.5268E-01 | 1.2422E-14 | 1.5398E-01 | 2.6953E-02 | 2.0243E-03 |
| | p-value | 2.8325E-02 | 3.7191E-01 | 1.6391E-01 | 9.6003E-04 | 6.9177E-03 | 3.1008E-03 | 2.8060E-02 | 4.9186E-07 | 2.8060E-02 | 2.3963E-02 | 9.1600E-07 | NaN |
| | h | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 3 | 7 | 5 | 8 | 11 | 6 | 1 | 12 | 2 | 9 | 10 | 4 |
| F13 | Worst | 4.2945E-05 | 1.0998E-02 | 1.2529E-01 | 5.1405E-01 | 1.0400E+00 | 2.9491E-05 | 1.5412E-18 | 1.0998E-02 | 1.9342E-02 | 4.3949E-02 | 9.9607E-01 | 3.3472E-12 |
| | Average | 1.7247E-05 | 2.7496E-03 | 6.6985E-02 | 4.0342E-01 | 5.9742E-01 | 1.4054E-05 | 5.9650E-19 | 2.7507E-03 | 8.9798E-03 | 1.3734E-02 | 9.3206E-01 | 9.0107E-13 |
| | Best | 5.7395E-07 | 7.7572E-10 | 1.4365E-02 | 2.9787E-01 | 2.5289E-01 | 6.3059E-06 | 3.5953E-22 | 4.1725E-07 | 1.2013E-05 | 9.5274E-10 | 8.5068E-01 | 7.2496E-16 |
| | STD | 1.8515E-05 | 5.4991E-03 | 4.7274E-02 | 1.0001E-01 | 3.3453E-01 | 1.0516E-05 | 7.3949E-19 | 5.4982E-03 | 9.9037E-03 | 2.0798E-02 | 6.0197E-02 | 1.6339E-12 |
| | p-value | 1.2029E-01 | 3.1351E-01 | 5.3160E-02 | 2.2602E-04 | 1.2572E-02 | 1.2018E-01 | 1.1971E-01 | 3.1357E-01 | 1.1971E-01 | 6.9412E-01 | 8.6395E-08 | NaN |
| | h | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | NaN |
| | Rank | 4 | 5 | 9 | 10 | 11 | 3 | 1 | 6 | 7 | 8 | 12 | 2 |
| | Mean ranking | 3.0000E+00 | 8.2308E+00 | 6.0769E+00 | 8.3077E+00 | 1.1077E+01 | 5.5385E+00 | 6.2308E+00 | 9.9231E+00 | 3.8462E+00 | 7.5385E+00 | 5.6154E+00 | 1.8462E+00 |
| | Final ranking | 2 | 9 | 6 | 10 | 12 | 4 | 7 | 11 | 3 | 8 | 5 | 1 |
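The "Mean ranking" and "Final ranking" rows above are obtained by averaging each algorithm's per-function rank and then ordering the averages (1 = best). A minimal sketch of that aggregation, using made-up ranks for four hypothetical algorithms rather than the paper's data:

```python
# Illustrative only: algorithm names and ranks below are invented, not the
# paper's results.
algorithms = ["AO", "DMOA", "WOA", "HRSA"]
per_function_ranks = [          # one row of ranks per test function
    [2, 3, 4, 1],
    [1, 4, 3, 2],
    [2, 4, 3, 1],
]

# Mean ranking: average each algorithm's rank over all functions.
n_funs = len(per_function_ranks)
mean_rank = [sum(row[j] for row in per_function_ranks) / n_funs
             for j in range(len(algorithms))]

# Final ranking: position of each algorithm when sorted by mean rank.
order = sorted(range(len(algorithms)), key=lambda j: mean_rank[j])
final_rank = [0] * len(algorithms)
for pos, j in enumerate(order):
    final_rank[j] = pos + 1

print(final_rank)   # → [2, 4, 3, 1]: HRSA first, AO second, and so on
```

(The tables break exact ties in mean rank by assigning the same final rank, as in the DMOA/ROA tie in the Dim = 100 table.)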
Table 5. The results of the comparative methods and the proposed HRSA methods for F1–F13, Dim = 100.
| Fun | Measure | AO | DMOA | WOA | SCA | DA | GWO | PSO | ALO | RSA | ROA | AOA | HRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Worst | 4.0782E-98 | 3.1718E+01 | 5.8226E-61 | 1.3866E+03 | 3.9007E+03 | 8.5638E-17 | 7.7635E-01 | 7.3039E+02 | 1.1830E-28 | 4.9992E+01 | 1.2631E-02 | 2.8449E-106 |
| | Average | 1.0196E-98 | 2.2918E+01 | 1.4557E-61 | 8.8436E+02 | 1.6626E+03 | 4.9381E-17 | 4.7025E-01 | 2.2654E+02 | 3.2097E-29 | 3.5959E+01 | 5.8299E-03 | 7.1122E-107 |
| | Best | 3.0185E-123 | 9.3735E+00 | 1.0884E-73 | 1.3827E+02 | 3.6367E+02 | 5.5387E-18 | 1.4462E-01 | 5.4808E+01 | 2.3156E-30 | 2.5600E+01 | 1.8742E-10 | 7.5850E-149 |
| | STD | 2.0391E-98 | 1.0166E+01 | 2.9112E-61 | 5.6464E+02 | 1.6147E+03 | 3.3504E-17 | 2.6084E-01 | 3.3591E+02 | 5.7483E-29 | 1.0735E+01 | 6.1114E-03 | 1.4224E-106 |
| | p-value | 3.5590E-01 | 4.0661E-03 | 3.5589E-01 | 2.0260E-02 | 8.5130E-02 | 2.5688E-02 | 1.1290E-02 | 2.2608E-01 | 3.0682E-01 | 5.3690E-04 | 1.0502E-01 | NaN |
| | h | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 2 | 8 | 3 | 11 | 12 | 5 | 7 | 10 | 4 | 9 | 6 | 1 |
| F2 | Worst | 3.7433E-50 | 1.0812E+02 | 5.4876E-47 | 3.8977E-01 | 1.8393E+01 | 2.7882E-10 | 5.2010E+00 | 2.0965E+02 | 5.2108E-18 | 2.2379E+00 | 2.6869E-32 | 1.5364E-60 |
| | Average | 9.3863E-51 | 4.1687E+01 | 1.4222E-47 | 2.0333E-01 | 1.2972E+01 | 1.8202E-10 | 3.3373E+00 | 1.0497E+02 | 3.5046E-18 | 1.5655E+00 | 6.7172E-33 | 3.8427E-61 |
| | Best | 1.2613E-57 | 1.1616E+01 | 1.6943E-51 | 4.6794E-02 | 8.4600E+00 | 6.8138E-11 | 1.2963E+00 | 2.3419E+01 | 1.5445E-18 | 1.0772E+00 | 6.7796E-60 | 1.3859E-75 |
| | STD | 1.8698E-50 | 4.4775E+01 | 2.7117E-47 | 1.5993E-01 | 4.1353E+00 | 9.3938E-11 | 1.6588E+00 | 9.4076E+01 | 1.5059E-18 | 5.0391E-01 | 1.3434E-32 | 7.6807E-61 |
| | p-value | 3.5413E-01 | 1.1190E-01 | 3.3491E-01 | 4.3918E-02 | 7.6223E-04 | 8.2145E-03 | 6.9291E-03 | 6.7126E-02 | 3.4872E-03 | 8.0222E-04 | 3.5592E-01 | NaN |
| | h | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | NaN |
| | Rank | 2 | 11 | 3 | 7 | 10 | 6 | 9 | 12 | 5 | 8 | 4 | 1 |
| F3 | Worst | 2.36803E-93 | 16951.21302 | 420591.1857 | 59991.69346 | 110891.9087 | 3.600143592 | 3296.378763 | 60132.02142 | 0.031346296 | 9792.661794 | 0.314587915 | 2.4468E-111 |
| | Average | 5.92006E-94 | 14078.94053 | 259808.0922 | 53400.71341 | 78018.00548 | 1.18948309 | 2064.355479 | 37864.25151 | 0.00819798 | 7542.713664 | 0.160888346 | 6.1182E-112 |
| | Best | 3.6969E-112 | 9664.548042 | 164862.8274 | 1214.979624 | 9573.50799 | 0.130123735 | 1085.580554 | 18446.14994 | 1.16188E-05 | 5634.887889 | 0.082118372 | 8.7014E-149 |
| | STD | 1.18401E-93 | 3249.194475 | 111450.2066 | 8824.145524 | 25173.64 | 1.627422681 | 914.7006548 | 17791.76731 | 0.015444974 | 2050.038886 | 0.108252379 | 1.2233E-111 |
| | p-value | 0.355917684 | 0.000130212 | 0.003458995 | 1.93238E-05 | 0.000812637 | 0.194109297 | 0.004044426 | 0.005342487 | 0.329279344 | 0.000322532 | 0.024877921 | NaN |
| | h | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | NaN |
| | Rank | 2 | 8 | 12 | 10 | 11 | 5 | 6 | 9 | 3 | 7 | 4 | 1 |
| F4 | Worst | 6.8610E-60 | 3.1812E+01 | 9.0999E+01 | 7.4864E+01 | 4.3456E+01 | 5.5896E-03 | 6.5534E+00 | 3.6673E+01 | 8.9361E-06 | 4.8080E+01 | 8.2925E-02 | 1.7208E-54 |
| | Average | 2.5424E-60 | 2.5308E+01 | 7.0297E+01 | 7.1699E+01 | 3.1073E+01 | 2.8448E-03 | 4.4164E+00 | 3.2017E+01 | 3.8129E-06 | 3.7531E+01 | 5.8325E-02 | 4.3063E-55 |
| | Best | 1.0303E-72 | 1.8835E+01 | 1.6764E+01 | 6.5484E+01 | 1.1452E+01 | 7.6601E-04 | 2.8777E+00 | 2.8534E+01 | 1.4312E-06 | 3.3232E+01 | 4.3176E-02 | 5.7683E-63 |
| | STD | 3.2744E-60 | 5.5607E+00 | 3.5899E+01 | 4.2435E+00 | 1.5349E+01 | 2.0889E-03 | 1.6731E+00 | 3.7908E+00 | 3.4653E-06 | 7.0710E+00 | 1.7231E-02 | 8.6015E-55 |
| | p-value | 3.5534E-01 | 9.8768E-05 | 7.8348E-03 | 4.4709E-08 | 6.7336E-03 | 3.4467E-02 | 1.8666E-03 | 2.7512E-06 | 7.0044E-02 | 4.1159E-05 | 5.0739E-04 | NaN |
| | h | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | NaN |
| | Rank | 1 | 7 | 11 | 12 | 8 | 4 | 6 | 9 | 3 | 10 | 5 | 2 |
| F5 | Worst | 1.6654E-02 | 4.2719E+04 | 4.8760E+01 | 2.2741E+07 | 4.0217E+05 | 4.8697E+01 | 1.1119E+03 | 3.6622E+04 | 4.6787E+01 | 2.1306E+03 | 4.8918E+01 | 4.5067E+01 |
| | Average | 6.2950E-03 | 1.4282E+04 | 4.8536E+01 | 1.4207E+07 | 1.4645E+05 | 4.8074E+01 | 6.9694E+02 | 2.6873E+04 | 4.6359E+01 | 1.1988E+03 | 4.8877E+01 | 1.9759E+01 |
| | Best | 1.8375E-04 | 9.0197E+02 | 4.8283E+01 | 1.2060E+06 | 4.5968E+04 | 4.7080E+01 | 4.5280E+02 | 1.7436E+04 | 4.5956E+01 | 6.4804E+02 | 4.8824E+01 | 3.4406E-01 |
| | STD | 7.4526E-03 | 1.9583E+04 | 2.1777E-01 | 9.1797E+06 | 1.7082E+05 | 7.5301E-01 | 2.9226E+02 | 8.5652E+03 | 3.8440E-01 | 6.4881E+02 | 4.0631E-02 | 1.8644E+01 |
| | p-value | 7.8400E-02 | 1.9548E-01 | 2.1475E-02 | 2.1243E-02 | 1.3727E-01 | 2.2949E-02 | 3.5979E-03 | 7.6452E-04 | 2.9069E-02 | 1.0925E-02 | 2.0489E-02 | NaN |
| | h | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 1 | 9 | 5 | 12 | 11 | 4 | 7 | 10 | 3 | 8 | 6 | 2 |
| F6 | Worst | 7.4586E-03 | 4.0782E+01 | 2.8398E+00 | 4.1424E+03 | 2.1428E+03 | 4.6882E+00 | 1.5872E+00 | 3.2536E+02 | 5.2899E-01 | 4.0330E+01 | 8.8853E+00 | 5.0952E-02 |
| | Average | 2.0732E-03 | 2.4752E+01 | 2.5240E+00 | 1.7732E+03 | 1.5093E+03 | 3.5812E+00 | 7.6906E-01 | 2.1992E+02 | 4.3057E-01 | 2.8780E+01 | 8.5206E+00 | 2.1823E-02 |
| | Best | 8.0662E-05 | 1.5665E+01 | 2.1190E+00 | 3.7947E+02 | 6.3968E+02 | 2.6152E+00 | 1.8089E-01 | 1.0497E+02 | 2.6481E-01 | 1.2823E+01 | 8.1968E+00 | 8.4905E-04 |
| | STD | 3.5956E-03 | 1.1063E+01 | 3.2233E-01 | 1.7914E+03 | 6.3437E+02 | 9.3390E-01 | 6.0022E-01 | 9.0506E+01 | 1.2080E-01 | 1.2291E+01 | 3.2283E-01 | 2.4930E-02 |
| | p-value | 1.6789E-01 | 4.2335E-03 | 4.5982E-06 | 9.5071E-02 | 3.1321E-03 | 2.6630E-04 | 4.7309E-02 | 2.8246E-03 | 5.6882E-04 | 3.3981E-03 | 3.2074E-09 | NaN |
| | h | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 1 | 8 | 5 | 12 | 11 | 6 | 4 | 10 | 3 | 9 | 7 | 2 |
| F7 | Worst | 8.2152E-04 | 7.8054E-01 | 6.5132E-03 | 2.8601E+00 | 3.8227E-01 | 8.1132E-03 | 1.2213E+01 | 1.9458E+00 | 3.5460E-03 | 1.3288E+00 | 4.7987E-03 | 1.1384E-04 |
| | Average | 5.5749E-04 | 6.9759E-01 | 3.3080E-03 | 2.6546E+00 | 2.5167E-01 | 4.0133E-03 | 7.1154E+00 | 1.3356E+00 | 2.4778E-03 | 9.4944E-01 | 1.8265E-03 | 7.3664E-05 |
| | Best | 2.2004E-04 | 5.5688E-01 | 1.2007E-03 | 2.4594E+00 | 1.3103E-01 | 1.6145E-03 | 1.7505E+00 | 7.0578E-01 | 1.4960E-03 | 6.5989E-01 | 4.8190E-04 | 2.4998E-05 |
| | STD | 2.9741E-04 | 9.7832E-02 | 2.5103E-03 | 1.7191E-01 | 1.1463E-01 | 2.9558E-03 | 4.4619E+00 | 5.0639E-01 | 8.4777E-04 | 3.2501E-01 | 1.9976E-03 | 4.0096E-05 |
| | p-value | 2.5556E-01 | 7.5583E-06 | 3.9132E-01 | 7.6873E-08 | 4.7787E-03 | 2.6615E-01 | 1.8871E-02 | 1.8873E-03 | 5.7030E-01 | 1.1198E-03 | 1.2985E-01 | NaN |
| | h | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | NaN |
| | Rank | 2 | 8 | 5 | 11 | 7 | 6 | 12 | 10 | 4 | 9 | 3 | 1 |
| F8 | Worst | −5.4374E+03 | −1.1297E+04 | −3.1741E+03 | −4.5531E+03 | −6.7763E+03 | −6.5491E+03 | −4.5026E+03 | −9.0295E+03 | −1.3621E+04 | −1.0546E+04 | −4.6339E+03 | −1.3186E+04 |
| | Average | −5.8281E+03 | −1.2227E+04 | −6.8610E+03 | −4.9929E+03 | −7.9585E+03 | −8.9655E+03 | −5.4962E+03 | −9.0590E+03 | −1.4362E+04 | −1.2700E+04 | −5.6555E+03 | −1.7042E+04 |
| | Best | −6.0837E+03 | −1.3238E+04 | −1.0029E+04 | −5.2303E+03 | −9.1043E+03 | −1.0478E+04 | −6.0557E+03 | −9.1475E+03 | −1.5873E+04 | −1.4023E+04 | −6.5546E+03 | −2.0457E+04 |
| | STD | 2.9576E+02 | 9.1368E+02 | 3.1291E+03 | 3.1477E+02 | 9.5197E+02 | 1.7889E+03 | 7.2763E+02 | 5.9039E+01 | 1.0288E+03 | 1.5246E+03 | 7.9497E+02 | 3.7487E+03 |
| | p-value | 5.3540E-01 | 1.6567E-02 | 5.8778E-03 | 2.7973E-01 | 5.2715E-01 | 2.8720E-01 | 4.2813E-01 | 2.0974E-01 | 3.8734E-03 | 1.5321E-02 | 4.8344E-01 | NaN |
| | h | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | NaN |
| | Rank | 9 | 4 | 8 | 12 | 7 | 6 | 11 | 5 | 2 | 3 | 10 | 1 |
| F9 | Worst | 0.0000E+00 | 1.7042E+02 | 0.0000E+00 | 2.4160E+02 | 3.2010E+02 | 1.0984E+01 | 2.4610E+02 | 1.6151E+02 | 0.0000E+00 | 1.6151E+02 | 0.0000E+00 | 0.0000E+00 |
| | Average | 0.0000E+00 | 1.3938E+02 | 0.0000E+00 | 1.5839E+02 | 2.4885E+02 | 4.4390E+00 | 1.8299E+02 | 1.4673E+02 | 0.0000E+00 | 1.2378E+02 | 0.0000E+00 | 0.0000E+00 |
| | Best | 0.0000E+00 | 1.1625E+02 | 0.0000E+00 | 7.4822E+01 | 1.2753E+02 | 1.3074E-12 | 1.4064E+02 | 1.2878E+02 | 0.0000E+00 | 9.6050E+01 | 0.0000E+00 | 0.0000E+00 |
| | STD | 0.0000E+00 | 2.2879E+01 | 0.0000E+00 | 6.9738E+01 | 8.5115E+01 | 5.4063E+00 | 4.6890E+01 | 1.4200E+01 | 0.0000E+00 | 2.7639E+01 | 0.0000E+00 | 0.0000E+00 |
| | p-value | NaN | 1.8591E-05 | NaN | 3.9229E-03 | 1.1037E-03 | 1.5167E-01 | 2.3326E-04 | 8.3516E-07 | NaN | 1.0816E-04 | NaN | NaN |
| | h | NaN | 1 | NaN | 1 | 1 | 0 | 1 | 1 | NaN | 1 | NaN | NaN |
| | Rank | 1 | 8 | 1 | 10 | 12 | 6 | 11 | 9 | 1 | 7 | 1 | 1 |
| F10 | Worst | 8.8818E-16 | 7.2668E+00 | 7.9936E-15 | 2.0531E+01 | 8.9559E+00 | 4.3052E-09 | 2.2928E+00 | 1.5690E+01 | 3.2863E-14 | 1.8274E+01 | 8.9706E-14 | 8.8818E-16 |
| | Average | 8.8818E-16 | 6.3388E+00 | 2.6645E-15 | 2.0518E+01 | 7.2218E+00 | 1.7663E-09 | 1.9853E+00 | 1.3964E+01 | 2.9310E-14 | 1.3315E+01 | 3.0198E-14 | 8.8818E-16 |
| | Best | 8.8818E-16 | 5.0825E+00 | 8.8818E-16 | 2.0502E+01 | 2.5486E+00 | 6.3157E-10 | 1.5281E+00 | 1.2007E+01 | 2.2204E-14 | 7.0599E+00 | 8.8818E-16 | 8.8818E-16 |
| | STD | 0.0000E+00 | 1.0504E+00 | 3.5527E-15 | 1.3307E-02 | 3.1222E+00 | 1.7071E-09 | 3.5009E-01 | 1.5522E+00 | 5.0243E-15 | 5.5342E+00 | 4.1873E-14 | 0.0000E+00 |
| | p-value | NaN | 1.9647E-05 | 3.5592E-01 | 7.8486E-20 | 3.5922E-03 | 8.3967E-02 | 2.8134E-05 | 1.8957E-06 | 2.8538E-05 | 2.9642E-03 | 2.1106E-01 | NaN |
| | h | NaN | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | NaN |
| | Rank | 1 | 8 | 3 | 12 | 9 | 6 | 7 | 11 | 4 | 10 | 5 | 1 |
| F11 | Worst | 0.0000E+00 | 1.4645E+00 | 0.0000E+00 | 2.0234E+01 | 3.6048E+01 | 2.5535E-15 | 3.9694E-02 | 1.2654E+01 | 1.9767E-02 | 3.9317E+00 | 1.9048E+02 | 0.0000E+00 |
| | Average | 0.0000E+00 | 1.3439E+00 | 0.0000E+00 | 9.2396E+00 | 2.3041E+01 | 1.7764E-15 | 3.2452E-02 | 4.8686E+00 | 4.9418E-03 | 3.2721E+00 | 1.4421E+02 | 0.0000E+00 |
| | Best | 0.0000E+00 | 1.2505E+00 | 0.0000E+00 | 1.0859E+00 | 1.4278E+01 | 8.8818E-16 | 1.9305E-02 | 1.4139E+00 | 0.0000E+00 | 2.4109E+00 | 6.4878E+01 | 0.0000E+00 |
| | STD | 0.0000E+00 | 8.8867E-02 | 0.0000E+00 | 7.9891E+00 | 9.2263E+00 | 8.4552E-16 | 9.0096E-03 | 5.2453E+00 | 9.8836E-03 | 7.4831E-01 | 5.5403E+01 | 0.0000E+00 |
| | p-value | NaN | 8.6692E-08 | NaN | 6.0017E-02 | 2.4653E-03 | 5.6744E-03 | 3.6224E-04 | 1.1279E-01 | 3.5592E-01 | 1.2373E-04 | 2.0035E-03 | NaN |
| | h | NaN | 1 | NaN | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | NaN |
| | Rank | 1 | 7 | 1 | 10 | 11 | 4 | 6 | 9 | 5 | 8 | 12 | 1 |
| F12 | Worst | 1.9117E-05 | 2.5588E+01 | 1.4664E-01 | 4.0491E+07 | 6.6827E+02 | 2.8083E-01 | 3.5885E-01 | 3.4507E+01 | 1.1554E-02 | 1.7816E+01 | 9.6573E-01 | 8.0067E-03 |
| | Average | 8.4702E-06 | 2.2619E+01 | 8.4062E-02 | 2.0830E+07 | 1.8766E+02 | 1.9121E-01 | 1.6972E-01 | 2.9519E+01 | 7.4900E-03 | 1.1387E+01 | 9.2738E-01 | 3.8133E-03 |
| | Best | 3.1725E-07 | 1.9885E+01 | 4.8423E-02 | 1.6156E+06 | 8.4537E+00 | 1.1625E-01 | 8.8199E-02 | 2.4029E+01 | 3.5678E-03 | 4.9788E+00 | 9.0628E-01 | 2.6674E-04 |
| | STD | 8.4354E-06 | 2.9640E+00 | 4.3658E-02 | 1.5959E+07 | 3.2131E+02 | 7.4155E-02 | 1.2719E-01 | 4.2896E+00 | 4.1085E-03 | 6.5611E+00 | 2.6314E-02 | 3.4784E-03 |
| | p-value | 7.1305E-02 | 4.9996E-06 | 1.0521E-02 | 4.0099E-02 | 2.8709E-01 | 2.3369E-03 | 4.0232E-02 | 9.1566E-06 | 2.2093E-01 | 1.3305E-02 | 5.9237E-10 | NaN |
| | h | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | NaN |
| | Rank | 1 | 9 | 4 | 12 | 11 | 6 | 5 | 10 | 3 | 8 | 7 | 2 |
| F13 | Worst | 4.0331E-05 | 1.0818E+02 | 2.0088E+00 | 1.6061E+08 | 3.3433E+05 | 2.7364E+00 | 1.5053E+00 | 4.8113E+02 | 2.0736E+00 | 8.8261E+01 | 5.0058E+00 | 1.1317E-01 |
| | Average | 1.9831E-05 | 9.1281E+01 | 1.6522E+00 | 7.5789E+07 | 1.2868E+05 | 2.4842E+00 | 9.7940E-01 | 2.3566E+02 | 1.5259E+00 | 7.5221E+01 | 4.9380E+00 | 4.5541E-02 |
| | Best | 1.0784E-06 | 7.4225E+01 | 1.3225E+00 | 1.5273E+07 | 2.9743E+03 | 2.3287E+00 | 4.5078E-01 | 1.2480E+02 | 1.2655E+00 | 6.0365E+01 | 4.8917E+00 | 6.8088E-05 |
| | STD | 2.1054E-05 | 1.3883E+01 | 3.6834E-01 | 6.1269E+07 | 1.4529E+05 | 1.8175E-01 | 5.9857E-01 | 1.6546E+02 | 3.7101E-01 | 1.4243E+01 | 5.0444E-02 | 5.1091E-02 |
| | p-value | 1.2504E-01 | 1.1971E-05 | 1.3233E-04 | 4.8197E-02 | 1.2689E-01 | 2.2178E-07 | 2.0875E-02 | 2.9258E-02 | 2.1731E-04 | 4.2502E-05 | 1.0526E-11 | NaN |
| | h | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 1 | 9 | 5 | 12 | 11 | 6 | 3 | 10 | 4 | 8 | 7 | 2 |
| | Mean ranking | 1.9231E+00 | 8.0000E+00 | 5.0769E+00 | 1.1000E+01 | 1.0077E+01 | 5.3846E+00 | 7.2308E+00 | 9.5385E+00 | 3.3846E+00 | 8.0000E+00 | 5.9231E+00 | 1.3846E+00 |
| | Final ranking | 2 | 8 | 4 | 12 | 11 | 5 | 7 | 10 | 3 | 8 | 6 | 1 |
Table 6. The results of the comparative methods and the proposed HRSA methods for fixed dimensional functions (F14–F23).
| Fun | Measure | AO | DMOA | WOA | SCA | DA | GWO | PSO | ALO | RSA | ROA | AOA | HRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F14 | Worst | 1.2671E+01 | 9.9800E-01 | 1.0763E+01 | 2.9821E+00 | 1.9920E+00 | 1.2671E+01 | 8.8408E+00 | 6.9033E+00 | 9.9800E-01 | 1.2671E+01 | 1.2671E+01 | 9.9800E-01 |
| | Average | 4.9092E+00 | 9.9800E-01 | 4.6720E+00 | 1.5058E+00 | 1.2465E+00 | 4.6607E+00 | 6.3958E+00 | 3.4654E+00 | 9.9800E-01 | 1.0248E+01 | 1.0010E+01 | 9.9800E-01 |
| | Best | 1.9920E+00 | 9.9800E-01 | 9.9800E-01 | 9.9846E-01 | 9.9800E-01 | 9.9800E-01 | 9.9800E-01 | 9.9800E-01 | 9.9800E-01 | 2.9821E+00 | 2.9821E+00 | 9.9800E-01 |
| | STD | 5.1952E+00 | 1.2820E-16 | 4.6790E+00 | 9.8443E-01 | 4.9701E-01 | 5.4010E+00 | 3.7126E+00 | 2.6033E+00 | 2.2204E-16 | 4.8442E+00 | 4.7070E+00 | 1.2820E-16 |
| | p-value | 1.8345E-01 | 8.7691E-03 | 1.4881E-01 | 1.2260E-02 | 1.0122E-02 | 1.7440E-01 | 2.5362E-01 | 4.8667E-02 | 8.7691E-03 | 8.7691E-03 | 9.4612E-01 | NaN |
| | h | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | NaN |
| | Rank | 9 | 3 | 8 | 5 | 4 | 7 | 10 | 6 | 2 | 12 | 11 | 1 |
| F15 | Worst | 1.6162E-03 | 2.0364E-02 | 8.5450E-04 | 1.5480E-03 | 2.0363E-02 | 2.0363E-02 | 1.0751E-03 | 1.5316E-03 | 1.6133E-03 | 1.2232E-03 | 4.8245E-02 | 5.4450E-04 |
| | Average | 1.1160E-03 | 5.6279E-03 | 5.3753E-04 | 1.1086E-03 | 6.6307E-03 | 5.4880E-03 | 9.4454E-04 | 1.1987E-03 | 7.1362E-04 | 1.0923E-03 | 2.4085E-02 | 4.1911E-04 |
| | Best | 6.9512E-04 | 5.3740E-04 | 3.1440E-04 | 5.5412E-04 | 1.6554E-03 | 3.2074E-04 | 7.2990E-04 | 7.6864E-04 | 3.2839E-04 | 6.9969E-04 | 4.8440E-04 | 3.2524E-04 |
| | STD | 3.8035E-04 | 9.8253E-03 | 2.5721E-04 | 4.9369E-04 | 9.1594E-03 | 9.9180E-03 | 1.4908E-04 | 3.1653E-04 | 6.0669E-04 | 2.6174E-04 | 2.1027E-02 | 9.1516E-05 |
| | p-value | 1.1888E-02 | 3.9415E-01 | 4.5309E-02 | 9.8187E-01 | 2.7425E-01 | 4.1222E-01 | 4.3345E-01 | 7.4953E-01 | 3.0403E-01 | 9.2164E-01 | 7.1625E-02 | NaN |
| | h | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
| | Rank | 7 | 10 | 2 | 6 | 11 | 9 | 4 | 8 | 3 | 5 | 12 | 1 |
| F16 | Worst | −1.0306E+00 | −1.0316E+00 | −1.0316E+00 | −1.0314E+00 | −1.0316E+00 | −1.0316E+00 | −1.0167E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 |
| | Average | −1.0313E+00 | −1.0316E+00 | −1.0316E+00 | −1.0315E+00 | −1.0316E+00 | −1.0316E+00 | −1.0231E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 |
| | Best | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0301E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 |
| | STD | 4.4362E-04 | 7.4564E-15 | 9.7513E-09 | 9.0048E-05 | 1.5128E-05 | 4.4567E-08 | 6.6134E-03 | 1.8769E-13 | 1.2820E-16 | 0.0000E+00 | 1.5246E-07 | 1.2820E-16 |
| | p-value | 4.8620E-02 | 4.2224E-02 | 4.2224E-02 | 4.4387E-02 | 4.2563E-02 | 4.2225E-02 | 4.2224E-02 | 4.2224E-02 | 4.2224E-02 | 4.2224E-02 | 4.2230E-02 | NaN |
| | h | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 11 | 4 | 6 | 10 | 9 | 7 | 12 | 5 | 3 | 1 | 8 | 1 |
| F17 | Worst | 3.9953E-01 | 3.9789E-01 | 3.9791E-01 | 4.0439E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 1.0787E+00 | 3.9789E-01 |
| | Average | 3.9852E-01 | 3.9789E-01 | 3.9790E-01 | 4.0041E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 6.0895E-01 | 3.9789E-01 |
| | Best | 3.9792E-01 | 3.9789E-01 | 3.9789E-01 | 3.9796E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 4.3913E-01 | 3.9789E-01 |
| | STD | 7.2792E-04 | 5.0805E-14 | 1.0205E-05 | 2.7695E-03 | 2.6877E-08 | 9.9580E-07 | 0.0000E+00 | 4.3341E-13 | 0.0000E+00 | 1.7764E-15 | 3.1333E-01 | 0.0000E+00 |
| | p-value | 3.2798E-01 | 2.2294E-01 | 2.2422E-01 | 8.4769E-01 | 2.2294E-01 | 2.2324E-01 | 2.2294E-01 | 2.2294E-01 | 2.2294E-01 | 2.2294E-01 | 2.3258E-01 | NaN |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
| | Rank | 10 | 5 | 9 | 11 | 7 | 8 | 1 | 6 | 1 | 4 | 12 | 1 |
| F18 | Worst | 3.0717E+00 | 3.0000E+00 | 3.0022E+00 | 3.0003E+00 | 3.0000E+00 | 8.4000E+01 | 3.0000E+00 | 3.0000E+00 | 3.0089E+01 | 3.0000E+00 | 9.3087E+01 | 3.0000E+00 |
| | Average | 3.0363E+00 | 3.0000E+00 | 3.0007E+00 | 3.0001E+00 | 3.0000E+00 | 2.3250E+01 | 3.0000E+00 | 3.0000E+00 | 1.0151E+01 | 3.0000E+00 | 4.8067E+01 | 3.0000E+00 |
| | Best | 3.0066E+00 | 3.0000E+00 | 3.0001E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 | 3.1441E+00 | 3.0000E+00 | 3.0000E+00 | 3.0000E+00 |
| | STD | 3.3175E-02 | 3.5171E-13 | 9.9674E-04 | 1.3095E-04 | 1.5141E-10 | 4.0500E+01 | 2.8320E-15 | 5.9994E-13 | 1.3294E+01 | 5.1279E-16 | 5.1876E+01 | 1.7578E-15 |
| | p-value | 3.2559E-01 | 3.2333E-01 | 3.2338E-01 | 3.2334E-01 | 3.2333E-01 | 5.6139E-01 | 3.2333E-01 | 3.2333E-01 | 3.2333E-01 | 3.2333E-01 | 2.0653E-01 | NaN |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
| | Rank | 9 | 4 | 8 | 7 | 6 | 11 | 2 | 5 | 10 | 1 | 12 | 3 |
| F19 | Worst | −3.85522E+00 | −3.85628E+00 | −3.7519E+00 | −3.85491E+00 | −3.85626E+00 | −3.85556E+00 | −3.85628E+00 | −3.85628E+00 | −3.85628E+00 | −3.3798E+00 | −3.85415E+00 | −3.85628E+00 |
| | Average | −3.85578E+00 | −3.85628E+00 | −3.85251E+00 | −3.85520E+00 | −3.85627E+00 | −3.85589E+00 | −3.85628E+00 | −3.85628E+00 | −3.85628E+00 | −3.5949E+00 | −3.85450E+00 | −3.85628E+00 |
| | Best | −3.85627E+00 | −3.85628E+00 | −3.85555E+00 | −3.85547E+00 | −3.85628E+00 | −3.85627E+00 | −3.85628E+00 | −3.85628E+00 | −3.85628E+00 | −3.7898E+00 | −3.85523E+00 | −3.85628E+00 |
| | STD | 4.3471E-03 | 1.2362E-09 | 4.9029E-02 | 2.8013E-03 | 7.9323E-05 | 3.6209E-03 | 0.0000E+00 | 5.3811E-13 | 3.6260E-16 | 1.8733E-01 | 4.9600E-03 | 0.0000E+00 |
| | p-value | 3.0902E-02 | 2.8796E-02 | 5.4899E-02 | 3.3516E-02 | 2.8824E-02 | 3.0430E-02 | 2.8796E-02 | 2.8796E-02 | 2.8796E-02 | 2.8796E-02 | 3.7043E-02 | NaN |
| | h | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 8 | 5 | 11 | 9 | 6 | 7 | 1 | 4 | 1 | 12 | 10 | 1 |
| F20 | Worst | −2.9938E+00 | −3.1768E+00 | −2.3872E+00 | −1.6547E+00 | −3.1527E+00 | −3.0868E+00 | −3.2031E+00 | −3.2013E+00 | −3.2031E+00 | −3.2031E+00 | −2.7968E+00 | −3.1517E+00 |
| | Average | −3.0975E+00 | −3.1830E+00 | −2.6300E+00 | −2.4959E+00 | −3.2132E+00 | −3.2632E+00 | −3.2625E+00 | −3.2617E+00 | −3.2625E+00 | −3.2923E+00 | −2.9562E+00 | −3.2723E+00 |
| | Best | −3.1846E+00 | −3.1954E+00 | −2.7676E+00 | −3.1119E+00 | −3.3191E+00 | −3.3220E+00 | −3.3220E+00 | −3.3220E+00 | −3.3220E+00 | −3.3220E+00 | −3.0427E+00 | −3.3179E+00 |
| | STD | 9.9666E-02 | 8.5867E-03 | 1.7319E-01 | 6.8763E-01 | 7.3171E-02 | 1.1761E-01 | 6.8643E-02 | 6.9618E-02 | 6.8643E-02 | 5.9446E-02 | 1.0898E-01 | 8.0742E-02 |
| | p-value | 3.4017E-03 | 6.9842E-04 | 5.2693E-04 | 7.1830E-01 | 8.0921E-04 | 9.2451E-04 | 4.9920E-04 | 5.0808E-04 | 4.9920E-04 | 3.5429E-04 | 1.8894E-02 | NaN |
| | h | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | NaN |
| | Rank | 9 | 8 | 11 | 12 | 7 | 3 | 4 | 6 | 5 | 1 | 10 | 2 |
| F21 | Worst | −1.0044E+01 | −2.6829E+00 | −5.0520E+00 | −8.7837E-01 | −2.6304E+00 | −4.9130E+00 | −2.6829E+00 | −5.0552E+00 | −2.6305E+00 | −2.6829E+00 | −1.5765E+00 | −1.0149E+01 |
| | Average | −1.0120E+01 | −6.4180E+00 | −6.3064E+00 | −2.7683E+00 | −4.4716E+00 | −6.2894E+00 | −5.1435E+00 | −7.6156E+00 | −8.2722E+00 | −6.4180E+00 | −2.9887E+00 | −1.0151E+01 |
| | Best | −1.0149E+01 | −1.0153E+01 | −1.0065E+01 | −4.8735E+00 | −5.1008E+00 | −1.0143E+01 | −1.0153E+01 | −1.0153E+01 | −1.0153E+01 | −1.0153E+01 | −4.5023E+00 | −1.0152E+01 |
| | STD | 5.1114E-02 | 4.3130E+00 | 2.5056E+00 | 2.1882E+00 | 1.2277E+00 | 2.5702E+00 | 3.5220E+00 | 2.9302E+00 | 3.7612E+00 | 4.3130E+00 | 1.2935E+00 | 1.4880E-03 |
| | p-value | 2.4617E-02 | 9.6079E-01 | 9.9274E-01 | 8.2021E-02 | 2.4900E-01 | 2.3863E-02 | 6.1801E-01 | 5.2156E-01 | 4.1747E-01 | 9.6079E-01 | 6.1581E-02 | NaN |
| | h | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN |
| | Rank | 2 | 6 | 7 | 12 | 10 | 8 | 9 | 4 | 3 | 5 | 11 | 1 |
| F22 | Worst | −5.0378E+00 | −2.7519E+00 | −4.9987E+00 | −5.2107E-01 | −1.0188E+01 | −1.0400E+01 | −3.7243E+00 | −2.7519E+00 | −2.7659E+00 | −2.7659E+00 | −1.2534E+00 | −1.0375E+01 |
| | Average | −6.9590E+00 | −6.5774E+00 | −5.0645E+00 | −1.4625E+00 | −1.0307E+01 | −1.0401E+01 | −6.0962E+00 | −5.8531E+00 | −8.4937E+00 | −6.8240E+00 | −2.5652E+00 | −1.0391E+01 |
| | Best | −1.0056E+01 | −1.0403E+01 | −5.0875E+00 | −3.5117E+00 | −1.0398E+01 | −1.0402E+01 | −1.0403E+01 | −1.0403E+01 | −1.0403E+01 | −1.0403E+01 | −4.5544E+00 | −1.0401E+01 |
| | STD | 2.4015E+00 | 4.4173E+00 | 4.3892E-02 | 1.3783E+00 | 9.4078E-02 | 9.5577E-04 | 2.9465E+00 | 3.2335E+00 | 3.8185E+00 | 4.1511E+00 | 1.4088E+00 | 1.1408E-02 |
| | p-value | 2.8864E-02 | 8.8433E-01 | 1.6576E-01 | 7.3654E-03 | 3.1727E-02 | 2.8553E-02 | 6.6582E-01 | 6.0274E-01 | 5.2161E-01 | 9.5693E-01 | 1.9658E-02 | NaN |
| | h | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | NaN |
| | Rank | 5 | 7 | 10 | 12 | 3 | 1 | 8 | 9 | 4 | 6 | 11 | 2 |
| F23 | Worst | −1.0512E+01 | −4.8806E+00 | −1.8585E+00 | −9.4646E-01 | −2.4217E+00 | −2.4217E+00 | −2.8711E+00 | −2.8066E+00 | −2.4217E+00 | −3.85354E+00 | −1.2013E+00 | −1.0536E+01 |
| | Average | −1.0523E+01 | −6.9426E+00 | −4.9871E+00 | −3.2865E+00 | −4.7987E+00 | −8.5059E+00 | −8.6201E+00 | −8.6040E+00 | −8.5077E+00 | −7.1859E+00 | −4.2224E+00 | −1.0536E+01 |
| | Best | −1.0535E+01 | −9.3471E+00 | −1.0536E+01 | −4.6174E+00 | −1.0528E+01 | −1.0535E+01 | −1.0536E+01 | −1.0536E+01 | −1.0536E+01 | −1.0536E+01 | −7.3080E+00 | −1.0536E+01 |
| | STD | 1.0420E-02 | 2.2857E+00 | 3.9644E+00 | 1.6939E+00 | 3.8764E+00 | 4.0562E+00 | 3.8326E+00 | 3.8649E+00 | 4.0573E+00 | 3.8688E+00 | 2.5257E+00 | 4.0454E-11 |
| | p-value | 2.0261E-02 | 1.9951E-02 | 4.2554E-01 | 4.2320E-02 | 3.7748E-01 | 5.2688E-01 | 4.8060E-01 | 4.8724E-01 | 5.2649E-01 | 9.1729E-01 | 1.6135E-01 | NaN |
| | h | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
| | Rank | 2 | 8 | 9 | 12 | 10 | 6 | 3 | 4 | 5 | 7 | 11 | 1 |
| | Mean ranking | 7.2000E+00 | 6.0000E+00 | 8.1000E+00 | 9.6000E+00 | 7.3000E+00 | 6.7000E+00 | 5.4000E+00 | 5.7000E+00 | 3.7000E+00 | 5.4000E+00 | 1.0800E+01 | 1.4000E+00 |
| | Final ranking | 8 | 6 | 10 | 11 | 9 | 7 | 3 | 5 | 2 | 3 | 12 | 1 |
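Throughout these tables, each comparison column carries a p-value and a binary flag h (h = 1 when the algorithm's results differ significantly from HRSA's at the 5% level; NaN where the test is undefined, e.g. HRSA against itself or two identical samples). The excerpt does not state which test produced these values; assuming the common choice in this literature, a two-sided Wilcoxon rank-sum test with the normal approximation, the computation can be sketched as:

```python
import math

def ranksum_test(x, y, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test (normal approximation).
    Returns (p_value, h) with h = 1 when p_value < alpha.
    An illustrative sketch, not the authors' exact implementation."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)          # rank of each original element
    k = 0
    while k < len(pooled):
        j = k
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[k][0]:
            j += 1                       # extend over a run of tied values
        avg = (k + j) / 2 + 1            # average of the 1-based ranks k+1..j+1
        for t in range(k, j + 1):
            ranks[pooled[t][1]] = avg
        k = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2          # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2)) # two-sided tail probability
    return p, int(p < alpha)

# Two clearly separated samples give a small p-value and h = 1.
p, h = ranksum_test(list(range(1, 11)), list(range(101, 111)))
```

With identical samples the statistic sits exactly at its null mean, so p = 1 and h = 0, which mirrors the 1.0000E+00 entries in the clustering table below.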
Table 7. UCI benchmark datasets.
| Dataset | Features No. | Instances No. | Classes No. |
|---|---|---|---|
| Cancer | 9 | 683 | 2 |
| CMC | 10 | 1473 | 3 |
| Glass | 9 | 214 | 7 |
| Iris | 4 | 150 | 3 |
| Seeds | 7 | 210 | 3 |
| Heart | 13 | 270 | 2 |
| Vowels | 6 | 871 | 3 |
| Wine | 13 | 178 | 3 |
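In the clustering experiments reported below, each candidate solution encodes a set of cluster centroids for a dataset (with the number of clusters fixed to the Classes column above), and solutions are scored by an intra-cluster distance criterion. The paper's exact objective is defined earlier in the article; a generic sketch of the usual sum-of-distances-to-nearest-centroid measure, on synthetic Iris-shaped data (150 × 4) rather than the real UCI files, looks like this:

```python
import math
import random

def intra_cluster_distance(points, centroids):
    """Sum, over all points, of the Euclidean distance to the nearest
    centroid. A common clustering objective; whether it matches the
    paper's exact metric is an assumption."""
    return sum(min(math.dist(p, c) for c in centroids) for p in points)

# Toy usage: k = 3 clusters in 4 dimensions (Iris-like shape: 150 x 4).
random.seed(0)
data = [[random.random() for _ in range(4)] for _ in range(150)]
centroids = random.sample(data, 3)   # one candidate solution, e.g. one HRSA agent
score = intra_cluster_distance(data, centroids)
```

Lower scores are better, which is why the smaller Worst/Average/Best entries in the table below indicate stronger clustering performance.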
Table 8. The comparative results of HRSA and other clustering methods using eight datasets.
| Dataset | Metric | AO | PSO | GWO | AVOA | WOA | RSA | ROA | AOA | HRSA |
|---|---|---|---|---|---|---|---|---|---|---|
| Cancer | Worst | 2.9098E+03 | 1.6727E+03 | 2.9680E+03 | 3.4670E+03 | 3.5238E+03 | 3.5050E+03 | 3.5050E+03 | 3.5050E+03 | 7.1703E+02 |
| | Average | 2.7388E+03 | 1.2148E+03 | 2.6393E+03 | 3.3097E+03 | 3.4419E+03 | 3.2415E+03 | 3.2415E+03 | 3.2415E+03 | 4.9848E+02 |
| | Best | 2.6050E+03 | 6.7650E+02 | 2.3894E+03 | 3.0439E+03 | 3.3555E+03 | 2.7880E+03 | 2.7880E+03 | 2.7880E+03 | 2.5294E+02 |
| | STD | 1.2285E+02 | 4.1186E+02 | 2.7025E+02 | 2.1505E+02 | 7.5799E+01 | 3.0883E+02 | 3.0883E+02 | 3.0883E+02 | 1.9627E+02 |
| | p-value | 9.6175E-03 | 2.1793E-05 | 1.1166E-02 | 6.9577E-01 | 1.9649E-01 | 1.0000E+00 | 1.6244E-07 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 2 | 3 | 8 | 9 | 5 | 5 | 5 | 1 |
| CMC | Worst | 3.3274E+02 | 1.0104E+02 | 3.1034E+02 | 3.3504E+02 | 3.3443E+02 | 3.3470E+02 | 3.3470E+02 | 3.3470E+02 | 8.1049E+01 |
| | Average | 3.2908E+02 | 8.4904E+01 | 3.0460E+02 | 3.3428E+02 | 3.3407E+02 | 3.3332E+02 | 3.3332E+02 | 3.3332E+02 | 6.7316E+01 |
| | Best | 3.2649E+02 | 6.0653E+01 | 3.0189E+02 | 3.3292E+02 | 3.3340E+02 | 3.3199E+02 | 3.3199E+02 | 3.3199E+02 | 5.0947E+01 |
| | STD | 2.3268E+00 | 1.4951E+01 | 3.2968E+00 | 8.1148E-01 | 4.6832E-01 | 1.1409E+00 | 1.1409E+00 | 1.1409E+00 | 1.3054E+01 |
| | p-value | 6.3870E-03 | 3.0919E-10 | 7.7964E-08 | 1.6589E-01 | 2.1389E-01 | 1.0000E+00 | 6.1264E-11 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 2 | 3 | 9 | 8 | 5 | 5 | 5 | 1 |
| Glass | Worst | 3.1433E+01 | 1.6129E+01 | 2.9980E+01 | 3.5149E+01 | 3.4978E+01 | 3.5062E+01 | 4.3268E+00 | 3.5062E+01 | 3.5062E+01 |
| | Average | 3.0876E+01 | 1.2067E+01 | 2.9199E+01 | 3.4583E+01 | 3.3710E+01 | 3.4938E+01 | 1.3341E+00 | 3.4938E+01 | 3.4938E+01 |
| | Best | 3.0338E+01 | 9.9866E+00 | 2.7810E+01 | 3.4088E+01 | 3.2553E+01 | 3.4822E+01 | 0.0000E+00 | 3.4822E+01 | 3.4822E+01 |
| | STD | 5.0092E-01 | 2.3606E+00 | 8.8269E-01 | 4.4374E-01 | 9.5919E-01 | 1.1264E-01 | 1.7931E+00 | 1.1264E-01 | 1.1264E-01 |
| | p-value | 1.0662E-07 | 2.1913E-08 | 5.2248E-07 | 1.2103E-01 | 2.1695E-02 | 1.0000E+00 | 1.1771E-10 | 1.0000E+00 | 1.0000E+00 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 2 | 3 | 6 | 5 | 7 | 1 | 7 | NaN |
| Iris | Worst | 2.1833E+01 | 6.5278E+00 | 2.1616E+01 | 2.4896E+01 | 2.4487E+01 | 2.4388E+01 | 2.4388E+01 | 2.4388E+01 | 5.5340E+00 |
| | Average | 2.0176E+01 | 5.5161E+00 | 1.6522E+01 | 2.4451E+01 | 2.3746E+01 | 2.3636E+01 | 2.3636E+01 | 2.3636E+01 | 2.8836E+00 |
| | Best | 1.8980E+01 | 4.1415E+00 | 1.3271E+01 | 2.4156E+01 | 2.3303E+01 | 2.3056E+01 | 2.3056E+01 | 2.3056E+01 | 1.5958E+00 |
| | STD | 1.1616E+00 | 9.9876E-01 | 3.0780E+00 | 2.9205E-01 | 5.3502E-01 | 6.2257E-01 | 6.2257E-01 | 6.2257E-01 | 1.7265E+00 |
| | p-value | 3.7448E-04 | 5.5402E-10 | 9.7090E-04 | 2.9211E-02 | 7.7219E-01 | 1.0000E+00 | 6.4135E-09 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 2 | 3 | 9 | 8 | 5 | 5 | 5 | 1 |
| Seeds | Worst | 4.0905E+01 | 1.9643E+01 | 3.9253E+01 | 5.0452E+01 | 5.0731E+01 | 5.0011E+01 | 5.0011E+01 | 5.0011E+01 | 1.0401E+01 |
| | Average | 3.9847E+01 | 1.3700E+01 | 3.7736E+01 | 4.9855E+01 | 4.9928E+01 | 4.8814E+01 | 4.8814E+01 | 4.8814E+01 | 8.5169E+00 |
| | Best | 3.8646E+01 | 5.7750E+00 | 3.6119E+01 | 4.9054E+01 | 4.7994E+01 | 4.7821E+01 | 4.7821E+01 | 4.7821E+01 | 6.6314E+00 |
| | STD | 8.7573E-01 | 5.2011E+00 | 1.1579E+00 | 5.2647E-01 | 1.1100E+00 | 9.0900E-01 | 9.0900E-01 | 9.0900E-01 | 1.3364E+00 |
| | p-value | 2.4678E-07 | 4.1207E-07 | 1.5759E-07 | 5.7550E-02 | 1.2069E-01 | 1.0000E+00 | 1.1892E-11 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 2 | 3 | 8 | 9 | 5 | 5 | 5 | 1 |
| Statlog (Heart) | Worst | 1.4705E+03 | 5.3201E+02 | 1.2471E+03 | 1.6629E+03 | 1.6207E+03 | 1.6982E+03 | 1.6982E+03 | 1.6982E+03 | 6.6389E+01 |
| | Average | 1.4064E+03 | 2.5262E+02 | 1.0091E+03 | 1.5678E+03 | 1.5659E+03 | 1.6325E+03 | 1.6325E+03 | 1.6325E+03 | 2.7628E+01 |
| | Best | 1.2522E+03 | 0.0000E+00 | 8.0036E+02 | 1.4391E+03 | 1.4899E+03 | 1.5087E+03 | 1.5087E+03 | 1.5087E+03 | 0.0000E+00 |
| | STD | 8.8824E+01 | 2.1268E+02 | 1.5920E+02 | 8.0923E+01 | 5.2544E+01 | 7.4898E+01 | 7.4898E+01 | 7.4898E+01 | 3.1474E+01 |
| | p-value | 2.4418E-03 | 7.8363E-07 | 4.6791E-05 | 2.2569E-01 | 1.4195E-01 | 1.0000E+00 | 7.6147E-11 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 2 | 3 | 6 | 5 | 7 | 7 | 7 | 1 |
| Vowels | Worst | 1.3952E+02 | 1.5133E+01 | 1.3594E+02 | 1.5347E+02 | 1.5231E+02 | 1.5336E+02 | 1.5336E+02 | 1.5336E+02 | 1.8987E+01 |
| | Average | 1.3832E+02 | 1.0040E+01 | 1.3417E+02 | 1.5304E+02 | 1.5175E+02 | 1.5287E+02 | 1.5287E+02 | 1.5287E+02 | 1.5031E+01 |
| | Best | 1.3666E+02 | 0.0000E+00 | 1.3111E+02 | 1.5275E+02 | 1.5121E+02 | 1.5235E+02 | 1.5235E+02 | 1.5235E+02 | 9.6841E+00 |
| | STD | 1.2751E+00 | 6.2258E+00 | 1.8819E+00 | 2.8123E-01 | 4.5754E-01 | 3.9920E-01 | 3.9920E-01 | 3.9920E-01 | 4.2281E+00 |
| | p-value | 8.6484E-09 | 2.3486E-11 | 2.1235E-08 | 4.4898E-01 | 3.4024E-03 | 1.0000E+00 | 1.4476E-12 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 1 | 3 | 9 | 5 | 6 | 6 | 6 | 2 |
| Wine | Worst | 3.4751E+03 | 3.9940E+03 | 2.9327E+03 | 4.0307E+03 | 3.9606E+03 | 3.9940E+03 | 3.9940E+03 | 3.9940E+03 | 1.6141E+03 |
| | Average | 3.2725E+03 | 3.9170E+03 | 2.5840E+03 | 3.9325E+03 | 3.8541E+03 | 3.9170E+03 | 3.9170E+03 | 3.9170E+03 | 1.2385E+03 |
| | Best | 3.1515E+03 | 3.8401E+03 | 2.3275E+03 | 3.7908E+03 | 3.7249E+03 | 3.8401E+03 | 3.8401E+03 | 3.8401E+03 | 8.8523E+02 |
| | STD | 1.3674E+02 | 6.0335E+01 | 2.3210E+02 | 9.2329E+01 | 1.0214E+02 | 6.0335E+01 | 6.0335E+01 | 6.0335E+01 | 2.9581E+02 |
| | p-value | 1.1140E-05 | 4.3418E-08 | 1.6402E-06 | 7.6057E-01 | 2.6979E-01 | 1.0000E+00 | 1.2556E-10 | 1.0000E+00 | NaN |
| | h | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | NaN |
| | Rank | 4 | 6 | 3 | 10 | 5 | 6 | 6 | 6 | 1 |
| | Mean ranking | 4.0000E+00 | 2.3750E+00 | 3.0000E+00 | 8.1250E+00 | 6.7500E+00 | 5.7500E+00 | 5.0000E+00 | 5.7500E+00 | 1.1429E+00 |
| | Final ranking | 4 | 2 | 3 | 9 | 8 | 6 | 5 | 6 | 1 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
