Article

A Self-Adaptive Cuckoo Search Algorithm Using a Machine Learning Technique

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
2 Dirección de Tecnologías de Información y Comunicación, Universidad de Valparaíso, Valparaíso 2361864, Chile
3 Escuela de Ingeniería Informática, Universidad de Valparaíso, Valparaíso 2362905, Chile
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(16), 1840; https://doi.org/10.3390/math9161840
Submission received: 27 June 2021 / Revised: 28 July 2021 / Accepted: 30 July 2021 / Published: 4 August 2021
(This article belongs to the Special Issue Mathematical Methods for Operations Research Problems)

Abstract: Metaheuristics are intelligent problem solvers that have proven very efficient in tackling large optimization problems for more than two decades. However, their main drawback is the problem-dependent and complex parameter setting needed to reach good results. This paper presents a new cuckoo search algorithm able to self-adapt its configuration, particularly its population size and its abandon probability. The self-tuning process is governed by machine learning: cluster analysis is employed to autonomously and properly compute the number of agents needed at each step of the solving process. The goal is to efficiently explore the space of possible solutions while alleviating the human effort of parameter configuration. We illustrate interesting experimental results on the well-known set covering problem, where the proposed approach competes against various state-of-the-art algorithms, achieving better results in one single run than 20 different configurations. In addition, the results obtained are compared with those of similar hybrid bio-inspired algorithms, further supporting the proposal.

1. Introduction

Recent studies of bio-inspired procedures for solving complex optimization problems have shown that finding good results and the best performance are laborious tasks, so an off-line parameter adjustment of the metaheuristic is usually required [1,2,3,4,5]. This adjustment is considered an optimization problem in itself, and several studies propose solutions for it, but these always depend on static terms [6]. Many of these studies use mathematical formulas to change the value of each parameter during execution. In this context, parameters such as the population size of a metaheuristic are initially set without considering their variation or behavior. We propose using a machine learning (ML) technique that analyzes the population and determines the values of the number of solutions and of the abandon probability.
As we can see in [7], machine learning and optimization are two rapidly expanding topics of artificial intelligence with a wide range of computer science applications. Due to the rapid progress in the performance of computing and communication techniques, these two research areas have proliferated and drawn widespread attention in a wide variety of applications [8]. For example, in [9], the authors present different clustering techniques to evaluate credit risk in a given European population. Although both fields belong to different communities, they are fundamentally based on artificial intelligence, and techniques from ML and optimization frequently interact to improve each other's learning and/or search capabilities.
On the other hand, advances in operations research and computer science have brought forward new solution approaches in optimization theory, such as heuristics and metaheuristics. While the former are experience-based procedures that usually provide good solutions in short computing times, metaheuristics are general templates that can easily be tailored to address a wide range of problems. They have been shown to provide near-optimal solutions in reasonable computing times to problems for which traditional methods are not applicable [10]. Moreover, as we can see in [11], the tendency to use hybrid methods to solve recent types of problems, such as those brought by COVID-19, has lately proven effective. Within these two main topics, we are interested in exploring the integration of ML into metaheuristics in order to enhance the characteristics or attributes of those algorithms: the solutions, the performance, or the time needed to obtain results. To this end, we propose using an unsupervised machine learning technique that learns from the search space of the metaheuristic, exploiting its characteristics to enhance the metaheuristic parameters in an online way. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a technique that gathers these characteristics. We propose associating its noise cluster with the abandon probability, and the result of clustering the solutions with the number of nests of the cuckoo search algorithm (CSA). There are studies that propose hybrids to enhance the CSA with an ML technique: in [12], the authors present a k-means technique to determine the discrete parameters that improve the CSA; in [13], the contribution is a hybrid method with a sine cosine algorithm to enhance the search space used by the CSA; a different but similar case is [14], which presents CuckooVina, a combination of cuckoo search and differential evolution for a conformational search method. Thus, some studies propose hybridizations to enhance the CSA, but none of them provides an ML technique that enriches the self-adaptation capabilities of the CSA in the way we present.
To illustrate and validate our approach, we apply an improved cuckoo search algorithm with self-adaptive capability (SACSDBSCAN). This approach was tested on the set covering problem, whose goal is to cover a range of needs at the lowest cost and which is widely used to demonstrate research.
The remainder of the paper is structured as follows: in Section 2, we review related work on parameter setting and on the integration of machine learning into metaheuristics. In Section 3, we present the original CSA and the parameters that need to be set to run it; we also introduce the DBSCAN algorithm, its characteristic operation, and how we propose to exploit it in metaheuristics. Then, in Section 4, we present the integration of DBSCAN into the CSA and how parameter values are configured during the execution of our approach. Section 5 shows the experimental results and discussion. Finally, we conclude and suggest some lines of future research in Section 6.

2. Related Work

As previously mentioned, the parameter adjustment of metaheuristics is a complex task and is considered an optimization problem in itself; in many cases, it relies on trial-and-error testing to find a good combination of parameters. In this scenario, several studies take different routes, many of them using a mathematical formula to vary the values of the parameters of the CSA.
For example, to determine the step size $\alpha$ and the $P_a$ parameter of the cuckoo search, Ref. [15] uses a mathematical formula that depends on the range between the minimal and maximal values of those parameters and on the number of iterations of the algorithm; the aim is to cover a larger target area, including its neighboring areas. In [16], the authors propose a self-adaptive step size that varies the $\alpha$ parameter according to the fitness at each iteration, applying a mathematical formula to obtain the value to set. In addition, Ref. [17] varies the $\alpha$ and $P_a$ values: $\alpha$ changes its value according to the algorithm iterations, while $P_a$ varies randomly depending on the dimension map of the problem. A similar study is [18], where Q-learning is used to set the step size of the cuckoo search algorithm in a self-adaptive way.
Another case of an adaptive CSA is [19], where the authors vary the step size and the abandon probability considering their historical values, including their best and worst values, to determine the new one. Regarding population variation, in [20] the authors divide the cuckoo population into subpopulations, analyzing them to improve the results.
In another case, the authors of [21] use a hybrid algorithm to set the values of its parameters and to reduce the population: when the diversity of the population decreases, the population is reduced, allowing the algorithm to diversify in subsequent iterations.
As we can see, the techniques cited up to this point are used to determine a static value for the metaheuristic parameters, which is then used as the initial value. Varying the value of the metaheuristic parameters during execution, however, is seen only minimally in the literature.
In other respects, machine learning has also been applied to metaheuristics to compact the heuristic space through a clustering procedure [22] and artificial neural networks [23]. These manuscripts use forecasting and classification. In depth, the integration of machine learning with metaheuristics has been proposed to forecast different classes of problems, such as electric load [24], economic recessions [25], the optimization of the low-carbon flexible job shop scheduling problem [26], and other industrial problems [27]. As mentioned, several groups of datasets can be classified by employing machine learning; in this context, we can find image classification [28], electronic nose classification of bacterial foodborne pathogens [29], and urban management [30], to mention some recent studies. Finally, in [31] the authors use the DBSCAN algorithm as a binarization strategy to transform a continuous search space into a binary one.
Although machine learning has a strong presence in the metaheuristic discipline, its use for setting parameter values is quite limited, and varying the parameter values of a metaheuristic in an online way is an area yet to be explored in depth. We believe that self-adaptation through DBSCAN offers ample scope for exploration beyond the enhancement of metaheuristics: using a machine learning technique such as DBSCAN, with its characteristics, we can analyze the search and, thanks to that analysis, change the values of the parameters. This is the aim pursued in this work.

3. Theoretical Background

3.1. Cuckoo Search Algorithm

Metaheuristics belong to a class of approximate methods. They are higher-level, general-purpose algorithms that can be applied to a wide range of different problems. They have been widely used to solve large-scale combinatorial optimization problems within acceptable computational time, but they do not guarantee optimal solutions [32].
There are many metaheuristics; a full catalog of nature-inspired algorithms can be found in [33]. One of them is the CSA, which has several study cases. The CSA [34] is inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of other bird species. The rules of the CSA are described below:
  • Each cuckoo lays an egg at a time and drops it into a randomly selected nest.
  • The best nests with high-quality eggs will be carried over to the next generations.
  • The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability $P_a \in [0, 1]$. In this case, a new random solution is generated.
Every new generation is determined by a Lévy flight [34], given by Equation (1):
$$x_i^d(t+1) = x_i^d(t) + \alpha \oplus \mathrm{Levy}(\beta), \qquad i \in \{1, \dots, n\},\; d \in \{1, \dots, m\} \tag{1}$$
where $x_i^d(t)$ is element $d$ of solution $i$ at iteration $t$, and $x_i^d(t+1)$ is the corresponding element at iteration $t+1$. $\alpha > 0$ is the step size, which should be related to the scale of the problem of interest and bounded by the upper ($U_b$) and lower ($L_b$) bounds that the problem needs to determine; in this scenario, values between 0 and 1. The step length follows the Lévy distribution
$$\mathrm{Levy} \sim u = t^{-\beta}, \qquad (0 < \beta < 3) \tag{2}$$
The Lévy flight represents a random walk while the random step length is drawn from a Lévy distribution which has an infinite variance with an infinite mean.
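For illustration, the following Python sketch draws Lévy-distributed steps using Mantegna's algorithm, a common way to implement Equation (2). The paper's own implementation is in Java and is not reproduced here, so the function names and the clipping to the problem bounds are our assumptions.

```python
import numpy as np
from scipy.special import gamma

def levy_step(beta: float, size: int) -> np.ndarray:
    """Draw Lévy-distributed step lengths via Mantegna's algorithm."""
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size)  # numerator sample
    v = np.random.normal(0.0, 1.0, size)      # denominator sample
    return u / np.abs(v) ** (1 / beta)

def cuckoo_move(x, alpha, beta, lb, ub):
    """One Lévy-flight move of a solution x, clipped to [lb, ub] (Equation (1))."""
    x_new = x + alpha * levy_step(beta, x.size)
    return np.clip(x_new, lb, ub)
```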

3.2. Density-Based Spatial Clustering of Applications with Noise

Density-Based Spatial Clustering of Applications with Noise [35] is a popular data-grouping algorithm that uses automated analysis techniques to group similar information together. It can find clusters of any shape in a data collection containing noise and outliers. Clusters are dense sections of the data space separated by regions with a lower density of points.
The goal is to identify dense areas, which can be determined by the number of objects in close proximity to a given location. It is crucial to understand that DBSCAN requires two parameters to work:
  • Epsilon ($\epsilon$): determines how close points must be to each other to be considered part of the same cluster.
  • Minimum points (MinPts): the minimal number of points required to form a dense region.
The underlying premise is that, for each given group, there must be at least a certain number of points in the vicinity of a particular radius. The $\epsilon$ parameter determines the neighborhood radius around a given point: every point $x$ in the data set is tagged as a core point if its number of neighbors is greater than or equal to MinPts; $x$ is a border point if the number of neighbors is less than MinPts but it is reachable from a core point. Finally, if a point is neither a core nor a border point, it is referred to as a noise point or outlier.
This technique proves helpful in metaheuristics when users do not know much about the data being processed. See the pseudo-code in Algorithms 1 and 2 to understand how this strategy works.
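As an illustration of how clustering a population of solutions looks in practice, the following sketch uses scikit-learn's DBSCAN; this library choice is an assumption for illustration, since the paper implements the algorithm itself in Java.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy population: each row is a candidate solution (2-D here for visualization).
population = np.random.rand(40, 2)

model = DBSCAN(eps=0.15, min_samples=3).fit(population)
labels = model.labels_               # cluster index per solution; -1 marks noise

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
noise_ratio = np.mean(labels == -1)  # fraction of solutions tagged as noise
print(f"{n_clusters} clusters, {noise_ratio:.0%} noise")
```

The noise ratio computed here is precisely the quantity that Section 4.2.1 maps to the abandon probability $P_a$.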
The computed groups are shown on the right side of Figure 1. Each iteration of the process adjusts the positions until the algorithm converges. It is important to mention that the noise points are those that do not belong to any cluster when the algorithm iterates; we can visualize them as the white points around the clusters identified in the graph (red, yellow, blue, and green). All these noise points are grouped into a single cluster so that, in this way, every point belongs to a specific cluster.
Algorithm 1: Cuckoo Search pseudo-code.
Algorithm 2: DBSCAN pseudo-code.

4. Proposed Approach: Integrating DBSCAN in CSA

In this section, we describe how DBSCAN was integrated into the CSA, in addition to all the key elements needed for this technique to work. We add a short block of code at the end of each iteration that decides whether a parameter control intervention is appropriate and, if so, uses the DBSCAN algorithm to calculate the new parameter values (see Algorithm 3).
Algorithm 3: Integration CSA with DBSCAN.
In the following sections, we explain the topics needed to understand our approach. Section 4.1 explains under which criteria DBSCAN intervenes in the metaheuristic to analyze the search space and cluster the solutions. Then, Section 4.2 indicates how the noise cluster is fundamental to determining the abandon probability.

4.1. Free Execution Parameter

We decided to include a variable that controls the moment at which the parameter values of the CSA are intervened, so that the metaheuristic maintains its independence between interventions and its specific behavior; in this case, we consider it prudent to set it to one hundred iterations of free running. When this number of iterations is reached, the algorithm performs a procedure to update the CSA parameter values, as described in line 39 of Algorithm 3.

4.2. Online Parameter Setting

As mentioned in the previous section, the update of the parameter values of the CSA occurs when the free execution parameter has reached its limit; then the DBSCAN algorithm is run and the generated clusters are used to infer the parameter values of the metaheuristic. How $P_a$ and the number of nests are associated with the clusters is detailed in the following sections.

4.2.1. Nest Abandon Probability

To set the value of the nest abandon probability, we use the number of noise points obtained from the DBSCAN execution. That value indicates the points that are excluded from every cluster, so we can use the number of noise points to make the metaheuristic consider those probabilities when exploring new points of the search space. To make this possible, we map the percentage of noise points to the abandon probability value. To let the metaheuristic keep its normal execution, we bound this value between 10% and 40%: if the noise points exceed 40%, then $P_a$ is set to 0.4; likewise, if the noise points are below 10%, $P_a$ is set to 0.1. In all other cases, $P_a$ is set to the percentage of noise points.
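A minimal sketch of this mapping, assuming DBSCAN labels noise points with -1 (as scikit-learn does); the function name is ours:

```python
def abandon_probability(labels) -> float:
    """Map the DBSCAN noise percentage to Pa, clamped to [0.1, 0.4]."""
    noise_fraction = sum(1 for l in labels if l == -1) / len(labels)
    return min(max(noise_fraction, 0.1), 0.4)

# e.g., 12 noise points among 40 solutions -> Pa = 0.3
print(abandon_probability([-1] * 12 + [0] * 28))  # 0.3
```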

4.2.2. Number of Nests

To determine when to vary the number of nests in the CSA, our approach keeps the last best fitness value in memory to compare it with the new candidate solutions. If the best fitness value has not varied by the fourth intervention of DBSCAN, we consider it necessary to increase the number of nests in order to amplify the search; in this case, the number of nests increases by five each time this scenario occurs. On the other hand, if the global best value improves four times consecutively, then the number of nests decreases, eliminating the five worst.
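A sketch of this population control rule under our own naming and bookkeeping assumptions (minimization, and one entry of best_history recorded per DBSCAN intervention):

```python
import numpy as np

def adapt_nest_count(nests, fitness, best_history, step=5):
    """Population control sketch: grow after stagnation, shrink after
    a streak of improvements (minimization assumed)."""
    if len(best_history) >= 4 and len(set(best_history[-4:])) == 1:
        # Global best unchanged over the last four interventions: add nests.
        new = np.random.rand(step, nests.shape[1])
        return np.vstack([nests, new])
    if len(best_history) >= 5:
        last5 = best_history[-5:]
        if all(b < a for a, b in zip(last5, last5[1:])):
            # Four consecutive improvements: drop the `step` worst nests.
            keep = np.argsort(fitness)[: len(nests) - step]
            return nests[keep]
    return nests
```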

4.3. Exploration Influence

During the tests, we observed that the CSA solution space converges to a single large cluster containing all possible solutions and that, in some instances, certain clusters have values compressed very close to each other. We deem it entirely appropriate in this scenario to force the CSA to explore the search space. For this, half of the cluster points are replaced with new random ones, allowing the metaheuristic to diversify the search. The replacement criterion follows Equation (3):
$$ExpInfl = \frac{\sum_{ClusterSol=1}^{n} Fitness_{ClusterSol}}{n} \tag{3}$$
where $ClusterSol$ ranges over the solutions of the cluster under evaluation. $ExpInfl$ is compared with the fitness of the best solution of the global population: if the absolute value of the difference between both is larger than one, then we renew half of the points of that cluster.
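A sketch of this exploration mechanism, with names and bounds as illustrative assumptions:

```python
import numpy as np

def renew_compressed_clusters(population, fitness, labels, global_best, lb=0.0, ub=1.0):
    """Renew half of each cluster whose mean fitness (ExpInfl, Equation (3))
    differs from the global best by more than one."""
    labels = np.asarray(labels)
    for c in set(labels.tolist()) - {-1}:        # -1 is DBSCAN's noise cluster
        idx = np.flatnonzero(labels == c)
        exp_infl = fitness[idx].mean()
        if abs(exp_infl - global_best) > 1:      # replacement criterion
            half = np.random.choice(idx, size=len(idx) // 2, replace=False)
            population[half] = np.random.uniform(lb, ub, (len(half), population.shape[1]))
    return population
```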

4.4. Set Covering Problem

Many studies use the set covering problem (SCP) to represent real scenarios, as we can see in airline crew management [36], in the optimization of the location of emergency facilities [37], in manufacturing [38], and in production [39]. The SCP is a classical combinatorial optimization problem belonging to the NP-hard class [40] and is formally defined as follows: let $A = (a_{ij})$ be a binary matrix with $M$ rows ($i \in I = \{1, \dots, M\}$) and $N$ columns ($j \in J = \{1, \dots, N\}$), and let $C = (c_j)$ be a vector representing the cost of each column $j$, assuming that $c_j > 0$, $j \in \{1, \dots, N\}$. We say that a column $j$ covers a row $i$ if $a_{ij} = 1$. Therefore:
$$a_{ij} = \begin{cases} 1 & \text{if row } i \text{ can be covered by column } j \\ 0 & \text{otherwise} \end{cases} \tag{4}$$
The SCP entails identifying a group of resources that can address a set of needs at the least cost. In matrix form, a feasible solution corresponds to a subset of columns, and the needs are associated with rows and regarded as constraints. The goal of the challenge is to find the columns that best cover all of the rows.
The set covering problem identifies a low-cost subset $S$ of columns that covers each row with at least one column from $S$. The SCP can be expressed as the following integer program:
$$\begin{aligned} \text{minimize} \quad & \sum_{j=1}^{N} c_j x_j \\ \text{subject to:} \quad & \sum_{j=1}^{N} a_{ij} x_j \geq 1 \quad \forall i \in I \\ & x_j \in \{0, 1\} \quad \forall j \in J \end{aligned} \tag{5}$$
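For concreteness, here is a small evaluation of this model on a toy instance (not one of the benchmark instances); the function name is ours:

```python
import numpy as np

def scp_objective(x, A, c):
    """Cost of a 0/1 column-selection vector x, or None if some row is uncovered."""
    covered = A @ x >= 1          # one boolean per row (the constraints of Equation (5))
    return float(c @ x) if covered.all() else None

# Tiny instance: 3 rows, 4 columns.
A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
c = np.array([2, 3, 4, 1])
print(scp_objective(np.array([0, 0, 1, 1]), A, c))  # 5.0: columns 3 and 4 cover all rows
```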
Instances: We use 65 instances from Beasley's OR-Library, arranged into 11 groups, to evaluate the algorithm's performance when solving the SCP. Table 1 presents the following details for each instance group: the number of rows $M$, the number of columns $N$, the cost range, and the density (percentage of non-zeroes in the matrix).
Reducing the instance size of the SCP: In [41], different pre-processing approaches were proposed to reduce the size of the SCP, with column domination and column inclusion being the most effective. These methods are used to accelerate the processing of the algorithm.
Column Domination is the process of removing unnecessary columns from a problem in such a way that the final solution is unaffected.
Steps:
  • All the columns are ordered according to their cost in ascending order.
  • If there are equal cost columns, these are sorted in descending order by the number of rows that the column j covers.
  • Check whether the rows of column $j$ can be covered by a set of other columns whose total cost is less than $c_j$ (the cost of column $j$).
  • If so, column $j$ is said to be dominated and can be eliminated from the problem.
Column inclusion: once the domination process has finished, the inclusion process is performed. If a row is covered by only one column, there is no alternative column to cover that row, which implies that this column must be included in the optimal solution. The result of this whole process is incorporated into the instance data so that new solutions satisfy the constraints.
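A compact sketch of the column inclusion step described above (the domination step is omitted for brevity; the helper name is ours):

```python
import numpy as np

def forced_columns(A):
    """Column inclusion: return columns that are the only cover of some row."""
    forced = set()
    for i in range(A.shape[0]):
        cols = np.flatnonzero(A[i])   # columns covering row i
        if len(cols) == 1:            # a single candidate: it must be selected
            forced.add(int(cols[0]))
    return forced

A = np.array([[1, 0, 1],
              [0, 1, 0],     # row 1 is covered only by column 1
              [1, 1, 0]])
print(forced_columns(A))     # {1}
```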

5. Experimental Results

To evaluate the performance of our proposal, we tested the instances of the SCP [42], comparing the original CS under different configurations against SACSDBSCAN.

5.1. Methodology

To adequately evaluate the performance of metaheuristics, a performance analysis is required [43]. For this work, we compare the best solution supplied by the CSA with the best-known result of the benchmark retrieved from the OR-Library [44]. Figure 2 depicts the procedure involved in a thorough examination of the enhanced metaheuristic. We create objectives and recommendations for the experimental design to show that the proposed approach is a viable alternative for determining metaheuristic parameters. Then, as a vital indicator for assessing future results, we evaluate the best value. We use ordinal analysis and statistical testing to evaluate whether a strategy is significantly better in this circumstance. Lastly, we detail the hardware and software used to replicate the computational experiments, and we present all of the results in tables and graphs.
As a result, we conduct a contrast statistical test for each case, using the Kolmogorov–Smirnov–Lilliefors procedure [45] to assess sample independence and the Mann–Whitney–Wilcoxon test [46] to statistically evaluate the data; Figure 3 describes the organization of these tests.
The Kolmogorov–Smirnov–Lilliefors test allows us to assess sample independence by calculating the $Z_{min}$ or $Z_{max}$ (depending on whether the task is minimization or maximization) obtained from each instance's 31 executions.
The relative percentage deviation (RPD) is used to assess the results. The RPD value computes the difference between the objective value $Z_{min}$ and the minimal best-known value $Z_{opt}$ for each instance in our experiment, and it is determined by Equation (6):
$$RPD = \frac{Z_{min} - Z_{opt}}{Z_{opt}} \tag{6}$$
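For example, in code:

```python
def rpd(z_min: float, z_opt: float) -> float:
    """Relative percentage deviation (Equation (6))."""
    return (z_min - z_opt) / z_opt

print(round(rpd(70, 66), 2))  # instance D.2: RPD = 0.06
```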

5.2. Set Covering Problem Results

Infrastructure: Java 1.8 was used to implement SACSDBSCAN. The personal computer used has the following specifications: macOS with a 2.7 GHz Intel Core i7 CPU and 16 GB of RAM.
Setup variables: The configuration for our suggested approach is shown in Table 2 below.
Sixty-five SCP instances were considered, each of which was run 31 times; the results are presented in Table 3 and Table 4.
Overview: The algorithms are ranked by the $Z_{min}$ achieved. The instances for which each algorithm reached $Z_{min}$ are also displayed.
  • Score
    • SACSDBSCAN got 38/65 $Z_{min}$.
    • CS got 10/65 $Z_{min}$.
  • Both algorithms reached $Z_{min}$ in the same instances: 5.4, 6.4, A.4, B.2, B.3, B.4, B.5, D.3, D.5, NRE.1, NRE.2, NRE.3, NRF.1, NRF.3, NRF.4, NRF.5, NRH.5.
As can be seen in the results of the algorithms solving the SCP, we compare the distribution of the samples of each instance using violin plots, which allow us to observe the entire distribution of the data. To summarize all the instances, we present and discuss the most difficult instance of each group below (4.10, 5.10, 6.5, A.5, B.5, C.5, D.5, NRE.5, NRF.5, NRG.5, and NRH.5):
In [47], the authors display the results obtained, the detailed information, and the configuration that they use. The information is organized as follows: MIN, the minimum value reached; MAX, the maximum value reached; AVG, the average value; BKS, the best-known solution; RPD, determined by Equation (6); and lastly, the average of the fitness obtained.
The first method was the standard cuckoo search algorithm with various settings, and the second was SACSDBSCAN, as previously indicated. Table 5, Table 6, Table 7 and Table 8 show the behavior of our proposed algorithm versus the original algorithm. The best results are highlighted with underlining and maroon color. For example, in instance 4.1, the best solution reached by our proposal outperforms the classical CS algorithm. The same strategy is used in all comparisons.
The distribution of the data in all instances (Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14) shows that the performance of our proposal is better than that of the traditional cuckoo search optimizer, concentrating the largest share of results on the optimal values, while those of the original CS are visibly distant. For example, in instance B.5, we can see that the distribution is better in our proposal; at the same time, it reflects the tendency to move the results toward the best values across executions, placing the center of the distribution in the best-value quartile. Another instance showing this behavior is C.5, where our proposal again generates a large number of optimal results.
The scenario in D.5 and NRE.5 is similar: the behavior of SACSDBSCAN shows that it reaches better results than CS. In instance NRF.5, both algorithms exhibit a similar shape, reflecting the nature of the sample data; even in this scenario, our proposal obtains better solutions. Instance NRG.5 is the only scenario in which the result of SACSDBSCAN remains six points away from $Z_{opt}$. Finally, for instance NRH.5, the behavior of SACSDBSCAN is again superior to that of CS.

5.3. Statistical Test

As previously stated, we offer the following hypotheses in order to determine independence:
- $H_0$: states that $Z_{min}/Z_{max}$ follows a normal distribution.
- $H_1$: states the opposite.
The tests performed yielded $p$-values lower than 0.05; therefore, $H_0$ cannot be assumed. Now that we know that the samples are independent and cannot be assumed to follow a normal distribution, it is not feasible to use the central limit theorem. Therefore, to evaluate the heterogeneity of the samples, we use a non-parametric evaluation, the Mann–Whitney–Wilcoxon test. To compare all the results of the hardest instances, we propose the following hypotheses:
- $H_0$: CS is better than SACSDBSCAN.
- $H_1$: states the opposite.
Finally, the statistical contrast test reveals which technique is considerably superior.
The Wilcoxon signed-rank test was used to compare the SCP results of the algorithms on the hardest instances (Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18 and Table 19). $p$-values smaller than 0.05 mean that $H_0$ cannot be assumed, because the significance level is also set to 0.05.
To conduct the test run that supports the study, we use a method from the PISA system. In this procedure, we specify all data distributions (each in a file, one datum per line), and the algorithm returns a p-value for the hypotheses.
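The paper relies on the PISA system for this procedure; as an assumed, rough equivalent, the same two checks can be sketched in Python with statsmodels and SciPy (the samples below are toy data, not the paper's results):

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.diagnostic import lilliefors

sacs = np.array([72, 72, 73, 72, 74, 73, 72, 75, 73, 72])  # toy samples
cs   = np.array([86, 87, 85, 88, 86, 89, 87, 86, 88, 85])

_, p_norm = lilliefors(sacs)                           # H0: sample is normal
_, p_mww = mannwhitneyu(sacs, cs, alternative="less")  # H0: SACS is not smaller than CS
print(f"Lilliefors p = {p_norm:.3f}, Mann-Whitney p = {p_mww:.4f}")
```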
The following tables show the result of the Mann–Whitney–Wilcoxon test. To understand them, it is necessary to know the following acronyms:
  • SWS = Statistically without significance.
In all the cases mentioned above, except those marked SWS, the reported p-values are less than 0.05. With this knowledge, we can see that in each such instance the SACSDBSCAN algorithm was better than the original CS.
If we focus on the instances where our proposal improves the result obtained in comparison with the original CS algorithm, we can infer that the achieved solutions are distributed around their optimal value, which reflects a very positive behavior of the algorithm. This is reflected in the violin plots of Figure 8, Figure 10 and Figure 14.

5.4. Comparison Results in Similar Hybrid Algorithms

Within the literature, recent studies can be found that use hybrid algorithms to solve the covering problem [5,48,49,50]. However, to compare against hybrid algorithms that resemble our proposal, we consider those that work with bio-inspired metaheuristics improved by ML and that solve the set covering problem. In this scheme, we have three algorithms. The first one is the crow search algorithm boosted by the DBSCAN method (called CSADBSCAN). The second studied approach is the integration of the crow search algorithm with the k-means method (CSAKmean). Both hybridizations were proposed by Valdivia et al. in [51]. Finally, we employ an improved version of the cuckoo search algorithm with the k-means transition algorithm (KMTA), recently proposed by García et al. in [52].
Table 20 and Table 21 present the best values reached by CSADBSCAN, CSAKmean, and KMTA. These algorithms implement different strategies for improving metaheuristics with ML. To summarize and center the results on the best values obtained, we add the AVG measure in the final row of each table. Unfortunately, KMTA only reports results for the first instance of each family, so N/R means Not Reported.
Table 20 shows how SACSDBSCAN obtains better average results than the other algorithms for instance groups 4, 5, 6, and A. Table 21 shows how SACSDBSCAN obtains better average results than CSADBSCAN and CSAKmean, by a very wide margin over CSADBSCAN (a difference of 1.141) and by a distance of 0.038 from CSAKmean. Only KMTA obtains a better AVG value among its reported best values, with a difference of 0.018.

6. Conclusions

In this paper, we conclude that using a machine learning technique to set metaheuristic parameters autonomously attained 38/65 minimum fitness values in our tests, with one single configuration against 20 different configurations. This demonstrates that, with our proposed algorithm, it is not necessary to carry out the complex task of finding the best parameter setting of the CS metaheuristic, which is, most of the time, done by trial and error. The result of the experiment demonstrates that it is beneficial to use the DBSCAN algorithm and to exploit its output to change the values of the parameters. The comparison with other bio-inspired hybrid algorithms applied to the set covering problem demonstrates that the use of DBSCAN obtains better results in average fitness than the studies that report all their best result values. The exploration criterion that we use lets the algorithm vary the search space to find other best candidates, as seen in the plots of the distributions of the instances. In addition, associating the noise points with $P_a$ lets SACSDBSCAN keep variety in its behavior and preserves the stochastic factor that characterizes a metaheuristic.
The free execution parameter allows the metaheuristic to maintain its natural behavior between interventions. When the free execution limit is reached, we can analyze the results of the metaheuristic and cluster its solution space, making the corresponding intervention on the candidate solutions and eliminating the worst of them in favor of possible new, better solutions.
As future work, we consider improving the criterion for population increase and decrease using clustering strategies. In another line of work, we want to apply different machine learning techniques to build algorithms that make effective use of population increase/decrease in metaheuristics, and thus deliver tools that make these algorithms more efficient. In addition, we are considering new works that compare this algorithm with other hybrid variants of cuckoo search, as well as with other types of metaheuristics that use ML to improve their performance, different from those presented in this work and applied to continuous problems.

Author Contributions

Formal analysis, N.C., R.S. and B.C.; Investigation, N.C., R.S., B.C., S.V. and R.O.; Methodology, R.S., R.O. and S.V.; Resources, R.S. and B.C.; Software, N.C., S.V. and R.O.; Validation, B.C. and S.V.; writing—original draft, N.C., S.V. and R.O.; writing—review and editing, N.C., R.S., B.C., S.V. and R.O. All authors have read and agreed to the published version of the manuscript.

Funding

Ricardo Soto is supported by grant CONICYT/FONDECYT/REGULAR/1190129. Broderick Crawford is supported by grant ANID/FONDECYT/REGULAR/1210810. Nicolás Caselli is supported by grant INF-PUCV 2019–2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

Abbreviations

Symbol
$\alpha$: step size used to generate the next solution
$P_a$: abandon probability of the CSA
$\beta$: scalar number between 0 and 3
$\epsilon$: maximum distance between neighboring points
$T$: maximum number of iterations
$t$: current iteration
$L_b$: lower bound
$U_b$: upper bound
$X_i^d(t)$: solution $i$ in dimension $d$ at iteration $t$
Acronyms
ML: Machine Learning
CSA: Cuckoo Search Algorithm
SACSDBSCAN: Self-Adaptive Cuckoo Search with DBSCAN
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
SCP: Set Covering Problem
RPD: Relative Percentage Deviation
CSADBSCAN: Crow Search Algorithm boosted by the DBSCAN method
CSAKmean: Crow Search Algorithm with the k-means method
KMTA: k-means Transition Algorithm

References

  1. Olivares, R.; Muñoz, F.; Riquelme, F. A multi-objective linear threshold influence spread model solved by swarm intelligence-based methods. Knowl.-Based Syst. 2021, 212, 106623.
  2. Soto, R.; Crawford, B.; Olivares, R.; Carrasco, C.; Rodriguez-Tello, E.; Castro, C.; Paredes, F.; de la Fuente-Mella, H. A reactive population approach on the dolphin echolocation algorithm for solving cell manufacturing systems. Mathematics 2020, 8, 1389.
  3. Taramasco, C.; Olivares, R.; Munoz, R.; Soto, R.; Villar, M.; de Albuquerque, V.H.C. The patient bed assignment problem solved by autonomous bat algorithm. Appl. Soft Comput. 2019, 81, 105484.
  4. Munoz, R.; Olivares, R.; Taramasco, C.; Villarroel, R.; Soto, R.; Alonso-Sánchez, M.F.; Merino, E.; de Albuquerque, V.H.C. A new EEG software that supports emotion recognition by using an autonomous approach. Neural Comput. Appl. 2018, 32, 11111–11127.
  5. Crawford, B.; Soto, R.; Olivares, R.; Embry, G.; Flores, D.; Palma, W.; Castro, C.; Paredes, F.; Rubio, J.M. A binary monkey search algorithm variation for solving the set covering problem. Nat. Comput. 2019, 19, 825–841.
  6. Huang, C.; Li, Y.; Yao, X. A survey of automatic parameter tuning methods for metaheuristics. IEEE Trans. Evol. Comput. 2019, 24, 201–216.
  7. Song, H.; Triguero, I.; Özcan, E. A review on the self and dual interactions between machine learning and optimisation. Prog. Artif. Intell. 2019, 8, 143–165.
  8. Ghorban, F.; Milani, N.; Schugk, D.; Roese-Koerner, L.; Su, Y.; Müller, D.; Kummert, A. Conditional multichannel generative adversarial networks with an application to traffic signs representation learning. Prog. Artif. Intell. 2018, 8, 73–82.
  9. Caruso, G.; Gattone, S.; Fortuna, F.; Di Battista, T. Cluster Analysis for mixed data: An application to credit risk evaluation. Socio-Econ. Plan. Sci. 2021, 73, 100850.
  10. Michalewicz, Z.; Fogel, D.B. How to Solve It: Modern Heuristics; Springer Science & Business Media: Berlin, Germany, 2013.
  11. D'Adamo, I.; González-Sánchez, R.; Medina-Salgado, M.S.; Settembre-Blundo, D. E-Commerce Calls for Cyber-Security and Sustainability: How European Citizens Look for a Trusted Online Environment. Sustainability 2021, 13, 6752.
  12. García, J.; Yepes, V.; Martí, J.V. A Hybrid k-Means Cuckoo Search Algorithm Applied to the Counterfort Retaining Walls Problem. Mathematics 2020, 8, 555.
  13. Rosli, S.J.; Rahim, H.A.; Abdul Rani, K.N.; Ngadiran, R.; Ahmad, R.B.; Yahaya, N.Z.; Abdulmalek, M.; Jusoh, M.; Yasin, M.N.M.; Sabapathy, T.; et al. A Hybrid Modified Method of the Sine Cosine Algorithm Using Latin Hypercube Sampling with the Cuckoo Search Algorithm for Optimization Problems. Electronics 2020, 9, 1786.
  14. Lin, H.; Siu, S.W.I. A Hybrid Cuckoo Search and Differential Evolution Approach to Protein—Ligand Docking. Int. J. Mol. Sci. 2018, 19, 3181.
  15. Saeed, S.; Ong, H.C.; Sathasivam, S. Self-Adaptive Single Objective Hybrid Algorithm for Unconstrained and Constrained Test functions: An Application of Optimization Algorithm. Arab. J. Sci. Eng. 2019, 44, 3497–3513.
  16. Thirugnanasambandam, K.; Prakash, S.; Subramanian, V.; Pothula, S.; Thirumal, V. Reinforced cuckoo search algorithm-based multimodal optimization. Appl. Intell. 2019, 49, 2059–2083.
  17. Dhabal, S.; Venkateswaran, P. An Improved Global-Best-Guided Cuckoo Search Algorithm for Multiplierless Design of Two-Dimensional IIR Filters. Circuits Syst. Signal Process. 2019, 38, 805–826.
  18. Li, J.; Xiao, D.D.; Lei, H.; Zhang, T.; Tian, T. Using Cuckoo Search Algorithm with Q-Learning and Genetic Operation to Solve the Problem of Logistics Distribution Center Location. Mathematics 2020, 8, 149.
  19. Jaballah, A.; Meddeb, A. A new variant of cuckoo search algorithm with self adaptive parameters to solve complex RFID network planning problem. Wirel. Netw. 2019, 25, 1585–1604.
  20. Ma, H.S.; Li, S.X.; Li, S.F.; LV, Z.N.; Wang, J.S. An Improved dynamic self-adaption cuckoo search algorithm based on collaboration between subpopulations. Neural Comput. Appl. 2019, 31, 1375–1389.
  21. Mlakar, U.; Fister, I.; Fister, I. Hybrid self-adaptive cuckoo search for global optimization. Swarm Evol. Comput. 2016, 29, 47–72.
  22. Senjyu, T.; Saber, A.; Miyagi, T.; Shimabukuro, K.; Urasaki, N.; Funabashi, T. Fast technique for unit commitment by genetic algorithm based on unit clustering. IEE Proc.-Gener. Transm. Distrib. 2005, 152, 705–713.
  23. Lee, C.; Gen, M.; Kuo, W. Reliability optimization design using a hybridized genetic algorithm with a neural-network technique. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2001, 84, 627–637.
  24. Bouktif, S.; Fiaz, A.; Ouni, A.; Serhani, M.A. Multi-Sequence LSTM-RNN Deep Learning and Metaheuristics for Electric Load Forecasting. Energies 2020, 13, 391.
  25. Cicceri, G.; Inserra, G.; Limosani, M. A Machine Learning Approach to Forecast Economic Recessions—An Italian Case Study. Mathematics 2020, 8, 241.
  26. Luan, F.; Cai, Z.; Wu, S.; Liu, S.Q.; He, Y. Optimizing the Low-Carbon Flexible Job Shop Scheduling Problem with Discrete Whale Optimization Algorithm. Mathematics 2019, 7, 688.
  27. Ly, H.B.; Le, T.T.; Le, L.M.; Tran, V.Q.; Le, V.M.; Vu, H.L.T.; Nguyen, Q.H.; Pham, B.T. Development of Hybrid Machine Learning Models for Predicting the Critical Buckling Load of I-Shaped Cellular Beams. Appl. Sci. 2019, 9, 5458.
  28. Korytkowski, M.; Senkerik, R.; Scherer, M.M.; Angryk, R.A.; Kordos, M.; Siwocha, A. Efficient Image Retrieval by Fuzzy Rules from Boosting and Metaheuristic. J. Artif. Intell. Soft Comput. Res. 2020, 10, 57–69.
  29. Hoang, N.D. Image processing based automatic recognition of asphalt pavement patch using a metaheuristic optimized machine learning approach. Adv. Eng. Inform. 2019, 40, 110–120.
  30. Bui, Q.T.; Van, M.P.; Hang, N.T.T.; Nguyen, Q.H.; Linh, N.X.; Ha, P.M.; Tuan, T.A.; Cu, P.V. Hybrid model to optimize object-based land cover classification by meta-heuristic algorithm: An example for supporting urban management in Ha Noi, Viet Nam. Int. J. Digit. Earth 2019, 12, 1118–1132.
  31. Valdivia, S.; Soto, R.; Crawford, B.; Caselli, N.; Paredes, F.; Castro, C.; Olivares, R. Clustering-Based Binarization Methods Applied to the Crow Search Algorithm for 0/1 Combinatorial Problems. Mathematics 2020, 8, 70.
  32. Lewis, R. A survey of metaheuristic-based techniques for University Timetabling problems. OR Spectr. 2008, 30, 167–190.
  33. Tzanetos, A.; Fister, I.; Dounias, G. A comprehensive database of Nature-Inspired Algorithms. Data Brief 2020, 31, 105792.
  34. Yang, X.S.; Deb, S. Cuckoo Search via Lévy Flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 2009; pp. 210–214.
  35. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise; AAAI Press: Portland, OR, USA, 1996; pp. 226–231.
  36. Smith, B.M. IMPACS—A Bus Crew Scheduling System Using Integer Programming. Math. Program. 1988, 42, 181–187.
  37. Toregas, C.; Swain, R.; ReVelle, C.; Bergman, L. The Location of Emergency Service Facilities. Oper. Res. 1971, 19, 1363–1373.
  38. Foster, B.A.; Ryan, D.M. An Integer Programming Approach to the Vehicle Scheduling Problem. J. Oper. Res. Soc. 1976, 27, 367–384.
  39. Vasko, F.J.; Wolf, F.E.; Stott, K.L. A set covering approach to metallurgical grade assignment. Eur. J. Oper. Res. 1989, 38, 27–34.
  40. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman & Co.: New York, NY, USA, 1979.
  41. Caprara, A.; Toth, P.; Fischetti, M. Algorithms for the set covering problem. Ann. Oper. Res. 2000, 98, 353–371.
  42. Gass, S.; Fu, M. Set-covering Problem. In Encyclopedia of Operations Research and Management Science; Springer: Cham, Switzerland, 2013; p. 1393.
  43. Bartz-Beielstein, T.; Preuss, M. Experimental research in evolutionary computation. In Proceedings of the 9th Annual Conference Companion on Genetic and Evolutionary Computation, London, UK, 7–11 July 2007; pp. 3001–3020.
  44. Beasley, J. OR-Library. 1990. Available online: https://goo.gl/lO1UQ6 (accessed on 3 August 2021).
  45. Lilliefors, H. On the Kolmogorov–Smirnov Test for Normality with Mean and Variance Unknown. J. Am. Stat. Assoc. 1967, 62, 399–402.
  46. Mann, H.B.; Whitney, D.R. On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other. Ann. Math. Stat. 1947, 18, 50–60.
  47. Soto, R.; Crawford, B.; Olivares, R.; Niklander, S.; Johnson, F.; Paredes, F.; Olguín, E. Online control of enumeration strategies via bat algorithm and black hole optimization. Nat. Comput. 2016, 16, 241–257.
  48. Castillo, M.; Soto, R.; Crawford, B.; Castro, C.; Olivares, R. A Knowledge-Based Hybrid Approach on Particle Swarm Optimization Using Hidden Markov Models. Mathematics 2021, 9, 1417.
  49. Soto, R.; Crawford, B.; Olivares, R.; Taramasco, C.; Figueroa, I.; Gómez, Á.; Castro, C.; Paredes, F. Adaptive Black Hole Algorithm for Solving the Set Covering Problem. Math. Probl. Eng. 2018, 2018, 1–23.
  50. Crawford, B.; Soto, R.; Olivares, R.; Riquelme, L.; Astorga, G.; Johnson, F.; Cortes, E.; Castro, C.; Paredes, F. A self-adaptive biogeography-based algorithm to solve the set covering problem. RAIRO-Oper. Res. 2019, 53, 1033–1059.
  51. Valdivia, S.; Crawford, B.; Soto, R.; Lemus-Romani, J.; Astorga, G.; Misra, S.; Salas-Fernández, A.; Rubio, J.M. Bridges Reinforcement Through Conversion of Tied-Arch Using Crow Search Algorithm. In Computational Science and Its Applications—ICCSA 2019; Springer International Publishing: Cham, Switzerland, 2019; pp. 525–535.
  52. García, J.; Moraga, P.; Valenzuela, M.; Crawford, B.; Soto, R.; Pinto, H.; Peña, A.; Altimiras, F.; Astorga, G. A Db-Scan Binarization Algorithm Applied to Matrix Covering Problems. Comput. Intell. Neurosci. 2019, 2019, 1–16.
Figure 1. Application of the DBSCAN algorithm to the solution space as an example of clustering.
Figure 2. Evaluation stages to determine the performance of a metaheuristic.
Figure 3. Statistical significance test.
Figure 4. Instance 4.10 distribution.
Figure 5. Instance 5.10 distribution.
Figure 6. Instance 6.5 distribution.
Figure 7. Instance A.5 distribution.
Figure 8. Instance B.5 distribution.
Figure 9. Instance C.5 distribution.
Figure 10. Instance D.5 distribution.
Figure 11. Instance NRE.5 distribution.
Figure 12. Instance NRF.5 distribution.
Figure 13. Instance NRG.5 distribution.
Figure 14. Instance NRH.5 distribution.
Table 1. Instances taken from Beasley's OR-Library.

Instance Group | M | N | Cost Range | Density (%) | Best Known
4 | 200 | 1000 | [1, 100] | 2 | Known
5 | 200 | 2000 | [1, 100] | 2 | Known
6 | 200 | 1000 | [1, 100] | 5 | Known
A | 300 | 3000 | [1, 100] | 2 | Known
B | 300 | 3000 | [1, 100] | 5 | Known
C | 400 | 4000 | [1, 100] | 2 | Known
D | 400 | 4000 | [1, 100] | 5 | Known
NRE | 500 | 5000 | [1, 100] | 10 | Unknown (except NRE.1)
NRF | 500 | 5000 | [1, 100] | 20 | Unknown (except NRF.1)
NRG | 1000 | 10,000 | [1, 100] | 2 | Unknown (except NRG.1)
NRH | 1000 | 10,000 | [1, 100] | 5 | Unknown
Table 2. SACSDBSCAN parameters for the SCP.

Initial Population | Abandon Probability Pa | α | Max Iterations | Lb and Ub
10 | 0.1–0.4 | 0.01 | 5000 | 0 and 1
Table 3. Results of SACSDBSCAN for instance groups 4, 5, 6, and A (Nest = 10 to 50, Pa = 0.1 to 0.45).

Instance | BKS | Fit | RPD | AVG | MIN | MAX | STD
4.1 | 429 | 429 | 0.00 | 429 | 429 | 430 | 0.352
4.2 | 512 | 512 | 0.00 | 513 | 512 | 518 | 1.552
4.3 | 514 | 516 | 0.00 | 517 | 516 | 520 | 1.407
4.4 | 494 | 494 | 0.00 | 495 | 494 | 497 | 0.775
4.5 | 512 | 512 | 0.00 | 512 | 512 | 514 | 0.828
4.6 | 560 | 560 | 0.00 | 560 | 560 | 564 | 1.121
4.7 | 430 | 430 | 0.00 | 430 | 430 | 433 | 0.799
4.8 | 492 | 492 | 0.00 | 495 | 492 | 499 | 2.640
4.9 | 641 | 641 | 0.00 | 646 | 641 | 653 | 3.680
4.10 | 514 | 514 | 0.00 | 514 | 514 | 516 | 0.743
5.1 | 253 | 253 | 0.00 | 253 | 253 | 254 | 0.516
5.2 | 302 | 303 | 0.00 | 306 | 303 | 312 | 3.355
5.3 | 226 | 226 | 0.00 | 227 | 226 | 229 | 1.345
5.4 | 242 | 242 | 0.00 | 243 | 242 | 245 | 1.234
5.5 | 211 | 211 | 0.00 | 211 | 211 | 212 | 0.458
5.6 | 213 | 213 | 0.00 | 213 | 213 | 213 | 0.000
5.7 | 293 | 293 | 0.00 | 293 | 293 | 296 | 0.816
5.8 | 288 | 288 | 0.00 | 288 | 288 | 289 | 0.258
5.9 | 279 | 279 | 0.00 | 280 | 279 | 281 | 0.632
5.10 | 265 | 265 | 0.00 | 266 | 265 | 268 | 1.047
6.1 | 138 | 138 | 0.00 | 141 | 138 | 144 | 1.807
6.2 | 146 | 146 | 0.00 | 148 | 146 | 150 | 1.598
6.3 | 145 | 145 | 0.00 | 148 | 145 | 148 | 1.060
6.4 | 131 | 131 | 0.00 | 131 | 131 | 133 | 0.737
6.5 | 161 | 161 | 0.00 | 163 | 161 | 167 | 1.751
A.1 | 253 | 254 | 0.00 | 256 | 254 | 259 | 1.234
A.2 | 252 | 254 | 0.01 | 256 | 254 | 259 | 1.502
A.3 | 232 | 232 | 0.00 | 234 | 232 | 236 | 1.060
A.4 | 234 | 235 | 0.00 | 238 | 235 | 242 | 2.586
A.5 | 236 | 236 | 0.00 | 237 | 236 | 238 | 0.640
Table 4. Results of SACSDBSCAN for instance groups B, C, D, NRE, NRF, NRG, and NRH (Nest = 10 to 50, Pa = 0.1 to 0.45).

Instance | BKS | Fit | RPD | AVG | MIN | MAX | STD
B.1 | 69 | 73 | 0.05 | 76 | 73 | 79 | 2.610
B.2 | 76 | 76 | 0.00 | 80 | 76 | 87 | 3.218
B.3 | 80 | 80 | 0.00 | 84 | 80 | 88 | 2.673
B.4 | 79 | 79 | 0.00 | 82 | 79 | 88 | 2.560
B.5 | 72 | 72 | 0.00 | 73 | 72 | 78 | 1.710
C.1 | 227 | 227 | 0.00 | 229 | 227 | 234 | 1.993
C.2 | 219 | 219 | 0.00 | 221 | 219 | 225 | 1.685
C.3 | 243 | 243 | 0.00 | 246 | 243 | 250 | 2.131
C.4 | 219 | 219 | 0.00 | 221 | 219 | 224 | 1.309
C.5 | 215 | 215 | 0.00 | 216 | 215 | 217 | 0.799
D.1 | 60 | 61 | 0.02 | 64 | 61 | 66 | 1.414
D.2 | 66 | 70 | 0.06 | 70 | 70 | 70 | 0.000
D.3 | 72 | 76 | 0.05 | 79 | 76 | 82 | 1.859
D.4 | 62 | 63 | 0.02 | 66 | 63 | 67 | 1.486
D.5 | 61 | 63 | 0.03 | 64 | 63 | 66 | 0.910
NRE.1 | 29 | 29 | 0.00 | 30 | 29 | 30 | 0.516
NRE.2 | 30 | 31 | 0.03 | 32 | 31 | 34 | 0.862
NRE.3 | 27 | 28 | 0.04 | 29 | 28 | 31 | 0.976
NRE.4 | 28 | 30 | 0.07 | 31 | 30 | 33 | 0.834
NRE.5 | 28 | 29 | 0.03 | 29 | 29 | 30 | 0.516
NRF.1 | 14 | 15 | 0.07 | 15 | 15 | 16 | 0.458
NRF.2 | 15 | 15 | 0.00 | 16 | 15 | 17 | 0.458
NRF.3 | 14 | 16 | 0.13 | 17 | 16 | 17 | 0.507
NRF.4 | 14 | 15 | 0.07 | 15 | 15 | 16 | 0.389
NRF.5 | 15 | 15 | 0.00 | 15 | 15 | 16 | 0.258
NRG.1 | 176 | 188 | 0.06 | 194 | 188 | 197 | 2.789
NRG.2 | 154 | 160 | 0.04 | 165 | 160 | 167 | 2.282
NRG.3 | 166 | 180 | 0.08 | 182 | 180 | 183 | 0.862
NRG.4 | 168 | 177 | 0.05 | 183 | 177 | 186 | 2.728
NRG.5 | 168 | 181 | 0.07 | 183 | 181 | 184 | 0.799
NRH.1 | 63 | 69 | 0.09 | 71 | 69 | 72 | 0.976
NRH.2 | 63 | 66 | 0.05 | 67 | 66 | 67 | 0.414
NRH.3 | 59 | 66 | 0.11 | 67 | 66 | 68 | 0.738
NRH.4 | 63 | 63 | 0.00 | 64 | 63 | 66 | 1.528
NRH.5 | 55 | 59 | 0.07 | 60 | 59 | 61 | 0.707
Table 5. Comparison results for groups 4, 5, 6, and A: SACSDBSCAN (Nest = 10 to 50, Pa = 0.1 to 0.45) vs. CS with Nest = 20 and Nest = 30. Each cell lists Fit/AVG/RPD.

Instance | BKS | SACSDBSCAN | Nest=20, Pa=0.1 | Pa=0.15 | Pa=0.25 | Pa=0.35 | Pa=0.45 | Nest=30, Pa=0.1 | Pa=0.15 | Pa=0.25 | Pa=0.35 | Pa=0.45
4.1 | 429 | 429/429/0 | 468/476/9.09 | 471/476/9.79 | 471/476/9.79 | 468/476/9.09 | 468/475/9.09 | 470/476/9.56 | 470/476/9.56 | 470/476/9.56 | 474/477/10.49 | 471/476/9.79
4.2 | 512 | 512/513/0 | 604/612/17.97 | 608/613/18.75 | 597/614/16.60 | 597/613/16.60 | 597/613/16.60 | 597/613/16.60 | 606/612/18.36 | 597/613/16.60 | 597/612/16.60 | 597/613/16.60
4.3 | 516 | 516/517/0 | 578/586/12.45 | 575/587/11.87 | 578/585/12.45 | 575/586/11.87 | 575/585/11.87 | 578/585/12.45 | 575/585/11.87 | 568/584/10.51 | 575/586/11.87 | 575/584/11.87
4.4 | 494 | 494/495/0 | 568/581/14.98 | 560/581/13.36 | 571/581/15.59 | 571/581/15.59 | 560/580/13.36 | 560/579/13.36 | 560/579/13.36 | 568/580/14.98 | 560/581/13.36 | 568/579/14.98
4.5 | 512 | 512/512/0 | 575/615/12.30 | 592/618/15.63 | 601/619/17.38 | 592/617/15.63 | 594/616/16.02 | 591/613/15.43 | 594/614/16.02 | 596/617/16.41 | 592/615/15.63 | 598/620/16.80
4.6 | 560 | 560/560/0 | 643/653/14.82 | 637/651/13.75 | 637/652/13.75 | 637/652/13.75 | 643/652/14.82 | 637/651/13.75 | 637/650/13.75 | 643/652/14.82 | 637/651/13.75 | 637/651/13.75
4.7 | 430 | 430/430/0 | 518/527/20.47 | 518/527/20.47 | 518/527/20.47 | 518/526/20.47 | 518/526/20.47 | 518/524/20.47 | 518/526/20.47 | 514/526/19.53 | 515/525/19.77 | 518/526/20.47
4.8 | 492 | 492/495/0 | 552/558/12.20 | 546/557/10.98 | 547/557/11.18 | 552/558/12.20 | 552/558/12.20 | 546/555/10.98 | 546/555/10.98 | 544/557/10.57 | 546/556/10.98 | 550/556/11.79
4.9 | 641 | 641/646/0 | 763/788/19.03 | 763/787/19.03 | 763/788/19.03 | 763/785/19.03 | 763/788/19.03 | 762/785/18.88 | 752/782/17.32 | 763/786/19.03 | 763/787/19.03 | 763/787/19.03
4.10 | 514 | 514/514/0 | 588/608/14.40 | 588/609/14.40 | 588/608/14.40 | 588/607/14.40 | 588/608/14.40 | 588/609/14.40 | 588/609/14.40 | 588/607/14.40 | 588/607/14.40 | 595/608/15.76
5.1 | 253 | 253/253/0 | 303/311/19.76 | 305/310/20.55 | 303/310/19.76 | 303/311/19.76 | 305/311/20.55 | 300/309/18.58 | 302/310/19.37 | 303/310/19.76 | 303/309/19.76 | 303/310/19.76
5.2 | 302 | 303/306/0.003 | 374/382/23.84 | 374/383/23.84 | 374/382/23.84 | 370/383/22.52 | 374/381/23.84 | 369/383/22.19 | 370/382/22.52 | 374/382/23.84 | 368/383/21.85 | 374/382/23.84
5.3 | 226 | 226/227/0 | 257/262/13.72 | 259/263/14.60 | 257/262/13.72 | 257/262/13.72 | 257/263/13.72 | 257/262/13.72 | 259/263/14.60 | 257/262/13.72 | 257/263/13.72 | 257/262/13.72
5.4 | 242 | 242/243/0 | 271/275/11.98 | 269/275/11.16 | 268/274/10.74 | 271/274/11.98 | 271/275/11.98 | 270/274/11.57 | 268/274/10.74 | 271/275/11.98 | 271/274/11.98 | 268/274/10.74
5.5 | 211 | 211/211/0 | 254/258/20.38 | 257/260/21.80 | 254/258/20.38 | 256/259/21.33 | 252/259/19.43 | 252/258/19.43 | 252/258/19.43 | 254/258/20.38 | 251/259/18.96 | 252/258/19.43
5.6 | 213 | 213/213/0 | 256/268/20.19 | 260/269/22.07 | 259/268/21.60 | 260/268/22.07 | 256/268/20.19 | 256/267/20.19 | 256/267/20.19 | 265/269/24.41 | 256/267/20.19 | 260/269/22.07
5.7 | 293 | 293/293/0 | 344/352/17.41 | 345/352/17.75 | 341/352/16.38 | 344/352/17.41 | 344/352/17.41 | 341/350/16.38 | 342/351/16.72 | 343/351/17.06 | 343/351/17.06 | 341/351/16.38
5.8 | 288 | 288/288/0 | 355/365/23.26 | 359/365/24.65 | 355/364/23.26 | 357/365/23.96 | 359/365/24.65 | 355/364/23.26 | 359/365/24.65 | 355/363/23.26 | 355/363/23.26 | 355/364/23.26
5.9 | 279 | 279/280/0 | 338/356/21.15 | 346/356/24.01 | 340/356/21.86 | 338/357/21.15 | 343/356/22.94 | 338/355/21.15 | 338/355/21.15 | 338/355/21.15 | 343/356/22.94 | 338/356/21.15
5.10 | 265 | 265/266/0 | 304/310/14.72 | 306/310/15.47 | 307/310/15.85 | 306/310/15.47 | 308/310/16.23 | 305/310/15.09 | 305/310/15.09 | 306/310/15.47 | 305/310/15.09 | 306/310/15.47
6.1 | 138 | 138/141/0 | 169/176/22.46 | 173/177/25.36 | 169/176/22.46 | 169/176/22.46 | 169/176/22.46 | 169/176/22.46 | 169/176/22.46 | 169/177/22.46 | 169/175/22.46 | 169/176/22.46
6.2 | 146 | 146/148/0 | 168/175/15.07 | 168/174/15.07 | 168/176/15.07 | 168/175/15.07 | 168/175/15.07 | 168/174/15.07 | 168/173/15.07 | 168/174/15.07 | 168/174/15.07 | 168/174/15.07
6.3 | 145 | 145/148/0 | 166/175/14.48 | 169/176/16.55 | 169/176/16.55 | 171/176/17.93 | 169/176/16.55 | 166/175/14.48 | 169/174/16.55 | 169/175/16.55 | 166/174/14.48 | 166/174/14.48
6.4 | 131 | 131/131/0 | 142/149/8.40 | 142/149/8.40 | 147/149/12.21 | 147/149/12.21 | 142/149/8.40 | 142/149/8.40 | 145/149/10.69 | 146/149/11.45 | 142/149/8.40 | 142/149/8.40
6.5 | 161 | 161/163/0 | 196/205/21.74 | 195/205/21.12 | 202/206/25.47 | 196/205/21.74 | 196/205/21.74 | 200/205/24.22 | 199/204/23.60 | 196/204/21.74 | 196/204/21.74 | 198/204/22.98
A.1 | 253 | 254/256/0.004 | 277/279/9.49 | 274/279/8.30 | 274/279/8.30 | 274/279/8.30 | 277/279/9.49 | 274/278/8.30 | 277/279/9.49 | 274/278/8.30 | 274/278/8.30 | 274/279/8.30
A.2 | 252 | 254/256/0.01 | 315/318/25.00 | 310/317/23.02 | 310/318/23.02 | 310/317/23.02 | 313/317/24.21 | 314/317/24.60 | 310/316/23.02 | 314/317/24.60 | 312/318/23.81 | 311/317/23.41
A.3 | 232 | 232/234/0 | 263/267/13.36 | 256/266/10.34 | 263/266/13.36 | 258/266/11.21 | 263/267/13.36 | 258/266/11.21 | 261/266/12.50 | 258/266/11.21 | 263/266/13.36 | 262/266/12.93
A.4 | 234 | 235/238/0.004 | 274/277/17.09 | 273/278/16.67 | 272/277/16.24 | 274/277/17.09 | 271/277/15.81 | 271/276/15.81 | 273/277/16.67 | 275/277/17.52 | 271/277/15.81 | 271/276/15.81
A.5 | 236 | 236/237/0 | 271/276/14.83 | 271/276/14.83 | 266/276/12.71 | 269/276/13.98 | 271/276/14.83 | 267/275/13.14 | 266/274/12.71 | 271/276/14.83 | 267/275/13.14 | 271/275/14.83
Table 6. Comparison results for groups B, C, D, NRE, NRF, NRG, and NRH: SACSDBSCAN (Nest = 10 to 50, Pa = 0.1 to 0.45) vs. CS with Nest = 20 and Nest = 30. Each cell lists Fit/AVG/RPD.

Instance | BKS | SACSDBSCAN | Nest=20, Pa=0.1 | Pa=0.15 | Pa=0.25 | Pa=0.35 | Pa=0.45 | Nest=30, Pa=0.1 | Pa=0.15 | Pa=0.25 | Pa=0.35 | Pa=0.45
B.1 | 69 | 73/76/0.04 | 88/91/27.54 | 88/91/27.54 | 89/91/28.99 | 87/91/26.09 | 88/92/27.54 | 88/91/27.54 | 89/91/28.99 | 88/91/27.54 | 89/92/28.99 | 87/92/26.09
B.2 | 76 | 76/80/0 | 96/98/26.32 | 95/99/25.00 | 96/99/26.32 | 94/99/23.68 | 96/99/26.32 | 94/98/23.68 | 95/98/25.00 | 94/99/23.68 | 95/99/25.00 | 95/99/25.00
B.3 | 80 | 80/84/0 | 91/93/13.75 | 91/93/13.75 | 91/93/13.75 | 91/93/13.75 | 91/93/13.75 | 91/92/13.75 | 90/92/12.50 | 91/93/13.75 | 91/93/13.75 | 91/93/13.75
B.4 | 79 | 79/82/0 | 102/107/29.11 | 102/107/29.11 | 101/107/27.85 | 104/108/31.65 | 104/108/31.65 | 100/106/26.58 | 101/107/27.85 | 100/107/26.58 | 102/107/29.11 | 102/106/29.11
B.5 | 72 | 72/73/0 | 86/89/19.44 | 87/90/20.83 | 86/90/19.44 | 87/89/20.83 | 85/90/18.06 | 86/89/19.44 | 86/89/19.44 | 88/89/22.22 | 85/89/18.06 | 83/89/15.28
C.1 | 227 | 227/229/0 | 267/270/17.62 | 264/269/16.30 | 267/270/17.62 | 267/270/17.62 | 265/269/16.74 | 265/269/16.74 | 264/268/16.30 | 267/270/17.62 | 264/269/16.30 | 265/269/16.74
C.2 | 219 | 219/221/0 | 266/273/21.46 | 268/273/22.37 | 267/273/21.92 | 267/274/21.92 | 266/272/21.46 | 268/272/22.37 | 266/272/21.46 | 267/272/21.92 | 266/271/21.46 | 269/272/22.83
C.3 | 243 | 243/246/0 | 308/313/26.75 | 307/313/26.34 | 308/314/26.75 | 306/313/25.93 | 309/314/27.16 | 306/312/25.93 | 309/313/27.16 | 306/313/25.93 | 306/313/25.93 | 305/313/25.51
C.4 | 219 | 219/221/0 | 263/267/20.09 | 264/266/20.55 | 262/267/19.63 | 262/266/19.63 | 263/267/20.09 | 262/266/19.63 | 262/266/19.63 | 263/266/20.09 | 263/266/20.09 | 263/267/20.09
C.5 | 215 | 215/216/0 | 248/256/15.35 | 248/256/15.35 | 250/256/16.28 | 248/257/15.35 | 251/256/16.74 | 250/256/16.28 | 251/255/16.74 | 248/255/15.35 | 248/255/15.35 | 251/256/16.74
D.1 | 60 | 61/64/0.02 | 70/72/16.67 | 70/72/16.67 | 70/73/16.67 | 70/73/16.67 | 71/73/18.33 | 71/72/18.33 | 70/72/16.67 | 70/72/16.67 | 70/72/16.67 | 70/72/16.67
D.2 | 66 | 70/70/0.06 | 79/80/19.70 | 79/80/19.70 | 78/80/18.18 | 79/81/19.70 | 79/81/19.70 | 78/80/18.18 | 77/80/16.67 | 79/80/19.70 | 79/80/19.70 | 79/80/19.70
D.3 | 72 | 76/79/0.05 | 86/88/19.44 | 87/89/20.83 | 87/89/20.83 | 87/89/20.83 | 85/88/18.06 | 86/88/19.44 | 86/88/19.44 | 86/88/19.44 | 84/88/16.67 | 87/89/20.83
D.4 | 62 | 63/66/0.02 | 72/73/16.13 | 71/72/14.52 | 70/72/12.90 | 69/73/11.29 | 71/73/14.52 | 69/72/11.29 | 70/72/12.90 | 71/72/14.52 | 70/72/12.90 | 70/73/12.90
D.5 | 61 | 63/64/0.03 | 70/71/14.75 | 70/71/14.75 | 71/72/16.39 | 71/72/16.39 | 70/72/14.75 | 70/71/14.75 | 70/71/14.75 | 70/71/14.75 | 70/71/14.75 | 70/72/14.75
NRE.1 | 29 | 29/30/0 | 31/33/6.90 | 32/33/10.34 | 31/33/6.90 | 32/33/10.34 | 32/33/10.34 | 32/33/10.34 | 32/33/10.34 | 31/32/6.90 | 32/33/10.34 | 32/33/10.34
NRE.2 | 30 | 31/32/0.03 | 37/38/23.33 | 35/38/16.67 | 36/38/20 | 37/38/23.33 | 36/38/20 | 36/37/20 | 36/38/20 | 37/38/23.33 | 36/38/20 | 36/38/20
NRE.3 | 27 | 28/29/0.04 | 35/36/29.63 | 35/36/29.63 | 34/36/25.93 | 34/36/25.93 | 34/36/25.93 | 35/36/29.63 | 34/36/25.93 | 35/36/29.63 | 35/36/29.63 | 35/36/29.63
NRE.4 | 28 | 30/31/0.07 | 34/35/21.43 | 34/35/21.43 | 35/35/25.00 | 34/35/21.43 | 35/36/25.00 | 34/35/21.43 | 34/35/21.43 | 34/35/21.43 | 34/35/21.43 | 34/35/21.43
NRE.5 | 28 | 29/29/0.03 | 33/35/17.86 | 33/35/17.86 | 34/35/21.43 | 33/35/17.86 | 33/35/17.86 | 32/35/14.29 | 32/35/14.29 | 33/35/17.86 | 34/35/21.43 | 32/35/14.29
NRF.1 | 14 | 15/15/0.07 | 18/18/28.57 | 17/18/21.43 | 18/18/28.57 | 18/18/28.57 | 18/18/28.57 | 18/18/28.57 | 18/18/28.57 | 18/18/28.57 | 18/18/28.57 | 17/18/21.43
NRF.2 | 15 | 15/16/0 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20 | 18/19/20
NRF.3 | 14 | 16/17/0.13 | 20/21/42.86 | 19/21/35.71 | 20/21/42.86 | 20/21/42.86 | 20/21/42.86 | 19/21/35.71 | 20/21/42.86 | 19/21/35.71 | 19/21/35.71 | 19/21/35.71
NRF.4 | 14 | 15/15/0.07 | 18/19/28.57 | 19/20/35.71 | 19/20/35.71 | 19/19/35.71 | 19/20/35.71 | 18/19/28.57 | 18/19/28.57 | 18/19/28.57 | 18/19/28.57 | 18/19/28.57
NRF.5 | 15 | 15/15/0 | 16/17/6.67 | 16/17/6.67 | 16/17/6.67 | 16/17/6.67 | 16/17/6.67 | 16/16/6.67 | 16/17/6.67 | 16/16/6.67 | 16/17/6.67 | 16/17/6.67
NRG.1 | 176 | 188/194/0.06 | 230/235/30.68 | 228/235/29.55 | 230/235/30.68 | 232/235/31.82 | 230/236/30.68 | 229/234/30.11 | 228/236/29.55 | 229/235/30.11 | 228/235/29.55 | 229/235/30.11
NRG.2 | 154 | 160/165/0.04 | 187/192/21.43 | 189/192/22.73 | 187/192/21.43 | 187/192/21.43 | 188/193/22.08 | 187/191/21.43 | 185/191/20.13 | 188/191/22.08 | 187/192/21.43 | 188/192/22.08
NRG.3 | 166 | 180/182/0.08 | 198/200/19.28 | 198/200/19.28 | 198/200/19.28 | 197/200/18.67 | 196/199/18.07 | 198/200/19.28 | 198/199/19.28 | 196/199/18.07 | 198/200/19.28 | 197/199/18.67
NRG.4 | 168 | 177/183/0.05 | 216/220/28.57 | 213/221/26.79 | 217/221/29.17 | 216/221/28.57 | 215/222/27.98 | 216/220/28.57 | 214/219/27.38 | 216/220/28.57 | 214/219/27.38 | 217/221/29.17
NRG.5 | 168 | 181/183/0.07 | 219/223/30.36 | 218/224/29.76 | 218/223/29.76 | 220/224/30.95 | 212/224/26.19 | 219/223/30.36 | 218/223/29.76 | 221/224/31.55 | 220/224/30.95 | 216/224/28.57
NRH.1 | 63 | 69/71/0.09 | 82/84/30.16 | 82/84/30.16 | 83/85/31.75 | 82/85/30.16 | 83/85/31.75 | 82/85/30.16 | 82/85/30.16 | 83/84/31.75 | 82/85/30.16 | 82/85/30.16
NRH.2 | 63 | 66/67/0.05 | 79/82/25.40 | 79/82/25.40 | 77/82/22.22 | 80/82/26.98 | 79/82/25.40 | 78/82/23.81 | 79/81/25.40 | 78/82/23.81 | 79/82/25.40 | 80/82/26.98
NRH.3 | 59 | 66/67/0.11 | 74/77/25.42 | 76/77/28.81 | 76/77/28.81 | 76/77/28.81 | 75/77/27.12 | 75/77/27.12 | 75/77/27.12 | 75/77/27.12 | 75/77/27.12 | 75/77/27.12
NRH.4 | 63 | 63/64/0 | 74/76/17.46 | 74/76/17.46 | 75/76/19.05 | 72/76/14.29 | 74/76/17.46 | 74/76/17.46 | 72/75/14.29 | 74/76/17.46 | 74/76/17.46 | 74/76/17.46
NRH.5 | 55 | 59/60/0.07 | 67/69/21.82 | 67/69/21.82 | 67/69/21.82 | 67/69/21.82 | 67/69/21.82 | 67/69/21.82 | 67/69/21.82 | 67/69/21.82 | 66/69/20 | 67/69/21.82
Table 7. Comparison results for instance groups 4, 5, 6, and A: SACSDBSCAN vs. CS with Nest = 40 and 50.
Columns per row: Instance; BKS; SACSDBSCAN (Nest = 10 to 50, Pa = 0.1 to 0.45): Fit, RPD, AVG; then CS (Nest = 40) followed by CS (Nest = 50), each with a Fit, AVG, RPD group for Pa = 0.1, 0.15, 0.25, 0.35, and 0.45.
4.1429 429 0 4294704759.564714769.794674768.864714769.794694759.324704759.564704759.564704759.564714769.794704769.56
4.2512 512 0 51359761116.6059761116.6059761116.6059761116.6059761216.6060261117.5859761116.6059761216.6060661318.3659761116.60
4.3516 516 0 51757858512.4557558511.8757558511.8757558411.8756858510.5157558411.8757458411.6757558411.8757558311.875655849.92
4.4494 494 0 49556757914.7856857914.9856858014.9856058113.3656957915.1856057713.3657158215.5956057813.3656857914.9856057813.36
4.5512 512 0 51257561612.3057561212.3058961615.0458961515.0459461616.0259261415.6359861716.8059261515.6359261615.6359261915.63
4.6560 560 0 56063965114.1163965114.1162964912.3263965214.1163765013.7563765013.7563764913.7563965114.1163765013.7563765013.75
4.7430 430 0 43051752420.2351552519.7751552419.7750952518.3751852520.4751852420.475165252051752320.2351852520.4751152418.84
4.8492 492 0 49554355510.3755255612.2054755611.1854655510.985395569.5554655610.9854655610.9854655510.9854655510.9854655510.98
4.9641 641 0 64676378419.0376378519.0376378619.0376378419.0376378019.0376078118.5676378719.0375378517.4776078518.5676178418.72
4.10514 514 0 51458860814.4058860814.4059660615.9558860814.4058860514.4058860714.4059560715.7658860514.4058860714.4058860514.40
5.1253 253 0 25330331019.7630531020.5530330919.7630331019.7630330919.7630130918.9729930918.1830331019.7630330919.7629930918.18
5.2302 303 0.003 30637438123.8437438123.8437338323.5137138122.8537138122.8536938122.1936938122.1937038222.5237338023.5137138122.85
5.3226 226 0 22725726213.7225726213.7225726213.7225726213.7225726213.7225726213.7225726213.7225726113.7225626213.2725926214.60
5.4242 242 0 24327027411.5726827410.7426827410.7426827410.7426827310.7427127411.9826827310.7427027511.5727027411.5727027411.57
5.5211 211 0 21125425920.3825225819.4325225919.4325525920.8525525920.8525225819.4325525920.8525225819.4325325719.9125225819.43
5.6213 213 0 21326526924.4126026822.0725626820.1926526824.4125626820.1925626720.1926026822.0726026922.0725926821.6025626820.19
5.7293 293 0 29334135116.3834135016.3834135016.3834135116.3834135116.3834134916.3834635218.0934135116.3834335017.0634135016.38
5.8288 288 0 28835536323.2635536123.2635636423.6135536423.2635736423.9635536323.2635536223.2635436322.9235236222.2235936324.65
5.9279 279 0 28033835421.1534135522.2233835421.1534335422.9432635616.8534635424.0133735620.7933835521.1534335622.9433835521.15
5.10265 265 0 26630530915.0930631015.4730630915.4730531015.0930631015.4730330914.3430230913.9630530915.0930231013.9630530915.09
6.1138 138 0 14116517619.5716817421.7416517419.5716717521.0116917522.4616917622.4616217517.3916917522.4616917522.4616717421.01
6.2146 146 0 14816817515.0716817415.0716817415.0716817315.0716817515.0716817215.0716817215.0716817415.0716817215.0716817215.07
6.3145 145 0 14816617414.4816517413.7916917516.5516917516.5517117517.9316817315.8616617314.4816917416.5516817315.8616917416.55
6.4131 131 0 1311421488.401421498.4014514910.691421498.4014614911.451421488.401421498.4014814912.981401486.8714714912.21
6.5161 161 0 16319620321.7419620421.7419920423.6019620421.7419620521.7419820422.9819620421.7419620321.7419920423.6019620421.74
A.1253 254 0.004 2562732787.912742788.302782799.882732787.912732787.912772799.492742788.302732787.912742788.302742788.30
A.2252 254 0.01 25630831722.2230731621.8331331724.2131031623.0231231723.8131031623.0231131623.4131031623.0231031623.0231031623.02
A.3232 232 0 23426326613.3626426613.7925826611.2125826611.2125826511.2125826611.2126126612.5026426613.7926226612.9325826611.21
A.4234 235 0.004 23827227616.2427127615.8127427717.0927327616.6727327716.6727127615.8127127615.8127427717.0927227616.2427127615.81
A.5236 236 0 23726727513.1426927513.9827027414.4127127414.8327027514.4127127414.8326927413.9827127514.8326727413.1426627412.71
Table 8. Comparison results for instance groups B, C, D, NRE, NRF, NRG, and NRH: SACSDBSCAN vs. CS with Nest = 40 and 50.
Columns per row: Instance; BKS; SACSDBSCAN (Nest = 10 to 50, Pa = 0.1 to 0.45): Fit, RPD, AVG; then CS (Nest = 40) followed by CS (Nest = 50), each with a Fit, AVG, RPD group for Pa = 0.1, 0.15, 0.25, 0.35, and 0.45.
B.169730.0476889127.54889127.54879126.09889127.54879026.09879026.09879026.09889127.54879126.09879126.09
B.27676080949823.68949823.68949823.68949823.68959825.00969826.32949723.68949823.68949823.68949823.68
B.380800584909212.50919213.75899211.25919213.75919313.75919213.75909212.50919213.75909212.50919313.75
B.4797908210210629.1110210729.1110310730.3810010626.5810110627.8510210629.1110010626.5810110627.8510110627.8510110627.85
B.57272073878920.83858918.06858918.06858918.06868919.44858918.06868919.44868919.44868919.44868919.44
C.1227227022926726917.6226326915.8626426916.3026526916.7426426916.3026526916.7426526816.7426426916.3026726917.6226326815.86
C.2219219022126827222.3726427220.5526527121.0026627121.4626027218.7226827222.3726227119.6326627121.4626727121.9226727221.92
C.3243243024630831226.7530731226.3430631225.9329931223.0530931327.1630831326.7530231224.2830631225.9330231124.2830431225.10
C.4219219022126326620.0926326620.0926326620.0926326620.0926426720.5526226619.6326226619.6326226519.6326326520.0926326620.09
C.5215215021624725414.8824825515.3524725314.8824925515.8125125616.7425025416.2824825415.3524825515.3524825415.3524825515.35
D.160610.0264707216.67707216.67717218.33707216.67697215.00707216.67707216.67707216.67707216.67707216.67
D.266700.0670788018.18788018.18788018.18788018.18788018.18788018.18788018.18788018.18788018.18788018.18
D.372760.0579868819.44868819.44868819.44858818.06868819.44858818.06868819.44878820.83868819.44848816.67
D.462630.0266717214.52717214.52717214.52707212.90717214.52707212.90707212.90717214.52717214.52717214.52
D.561630.0364707114.75707214.75707114.75707114.75707114.75707114.75697113.11707114.75707114.75707114.75
NRE.12929030323210.34323310.34323310.3431336.9031336.90323210.34323310.34323210.3431336.90323210.34
NRE.230310.0332363820363820363820363820363820373823.33363820363820363820373823.33
NRE.327280.0429343625.93353629.63343625.93353629.63343625.93343625.93343625.93343625.93343625.93343625.93
NRE.428300.0731343521.43343521.43343521.43343521.43343521.43343521.43343521.43343521.43343521.43343521.43
NRE.528290.0329323414.29323514.29333517.86323514.29333517.86333517.86333417.86333417.86323414.29333417.86
NRF.114150.0715171821.43181828.57181828.57181828.57171821.43171821.43181828.57181828.57181828.57171821.43
NRF.21515016181920181920181920181920181920181920181920181920181920181920
NRF.314160.1317192135.71192135.71192135.71202142.86202142.86202142.86202142.86202142.86192135.71202142.86
NRF.414150.0715181928.57181928.57181928.57181928.57181928.57181928.57181928.57191935.71181928.57181928.57
NRF.5151501516166.6716166.6716166.6716166.6716176.6716166.6716166.6716166.6716166.6716166.67
NRG.11761880.0619422823429.5522923430.1122723428.9822923430.1123023430.6823223431.8222923430.1123023430.6822723428.9823123531.25
NRG.21541600.0416518619120.7818119017.5318819122.0818719121.4318719121.4318719121.4318719121.4318619120.7818719121.4318519020.13
NRG.31661800.0818219819919.2819719918.6719519917.4719719918.6719519917.4719719918.6719719918.6719719918.6719619918.0719720018.67
NRG.41681770.0518321521927.9821321926.7921422027.3821422127.3821622128.5721521927.9821322026.7921221926.1921521927.9821722029.17
NRG.51681810.0718322022330.9521522227.9822022330.9521822329.7621822329.7621822229.7621822229.7622022330.9521922230.3621922330.36
NRH.163690.0971828430.16818428.57828430.16838431.75838431.75808426.98818428.57838431.75828430.16828430.16
NRH.263660.0567798225.40788123.81798125.40808226.98808226.98788123.81778122.22798125.40798125.40798125.40
NRH.359660.1167757727.12757727.12757727.12757727.12757727.12757727.12757727.12757727.12757727.12757727.12
NRH.46363064737515.87737515.87747617.46747517.46737515.87737515.87737515.87747517.46737515.87747517.46
NRH.555590.0760676821.82676821.82676821.82676821.82676921.82676821.82676821.82676821.82676821.82656818.18
Table 9. p-values for instance 4.10.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 1.711 × 10⁻¹² | Not applicable

Table 10. p-values for instance 5.10.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 2.228 × 10⁻¹² | Not applicable

Table 11. p-values for instance 6.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 1.711 × 10⁻¹² | Not applicable

Table 12. p-values for instance A.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 3.055 × 10⁻¹² | Not applicable

Table 13. p-values for instance B.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 3.417 × 10⁻¹² | Not applicable

Table 14. p-values for instance C.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 4.100 × 10⁻¹² | Not applicable

Table 15. p-values for instance D.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 1.288 × 10⁻¹² | Not applicable

Table 16. p-values for instance NRE.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 1.915 × 10⁻¹² | Not applicable

Table 17. p-values for instance NRF.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 2.406 × 10⁻¹² | Not applicable

Table 18. p-values for instance NRG.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 3.296 × 10⁻¹² | Not applicable

Table 19. p-values for instance NRH.5.
 | CS | SACSDBSCAN
CS | Not applicable | SWS
SACSDBSCAN | 2.482 × 10⁻¹² | Not applicable
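Tables 9–19 report p-values far below the usual 0.05 threshold, indicating a statistically significant difference between CS and SACSDBSCAN on each tested instance. The tables do not restate the test itself here, so the sketch below only assumes a rank-based test in the Wilcoxon family applied to the per-run best-fitness samples of the two solvers; the variable names, run count, and fitness ranges are illustrative, not values taken from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum test for two independent samples

rng = np.random.default_rng(seed=1)

# Hypothetical per-run best-fitness samples for one SCP instance; in practice
# these would be the recorded results of the independent runs of each solver.
cs_runs = rng.integers(low=588, high=609, size=31)    # plain CS
sacs_runs = rng.integers(low=514, high=530, size=31)  # SACSDBSCAN

# Two-sided test of whether the two fitness distributions differ.
stat, p_value = mannwhitneyu(cs_runs, sacs_runs, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3e}")  # p < 0.05 -> the distributions differ
```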
Table 20. Best values reached by improved bio-inspired algorithms with ML that solve the set covering problem, instances 4, 5, 6, and A.
Instance | BKS | SACSDBSCAN | | CSADBSCAN | | CSAKmean | | KMTA |
 | | Best | RDP | Best | RDP | Best | RDP | Best | RDP
4.1 | 429 | 429 | 0.000 | 429 | 0.000 | 429 | 0.000 | 430 | 0.002
4.2 | 512 | 512 | 0.000 | 512 | 0.000 | 513 | 0.002 | N/R | N/R
4.3 | 516 | 516 | 0.000 | 516 | 0.000 | 516 | 0.000 | N/R | N/R
4.4 | 494 | 494 | 0.000 | 494 | 0.000 | 495 | 0.002 | N/R | N/R
4.5 | 512 | 512 | 0.000 | 512 | 0.000 | 514 | 0.004 | N/R | N/R
4.6 | 560 | 560 | 0.000 | 560 | 0.000 | 560 | 0.000 | N/R | N/R
4.7 | 430 | 430 | 0.000 | 430 | 0.000 | 430 | 0.000 | N/R | N/R
4.8 | 492 | 492 | 0.000 | 492 | 0.000 | 493 | 0.002 | N/R | N/R
4.9 | 641 | 641 | 0.000 | 641 | 0.000 | 645 | 0.006 | N/R | N/R
4.10 | 514 | 514 | 0.000 | 514 | 0.000 | 513 | 0.002 | N/R | N/R
5.1 | 253 | 253 | 0.000 | 253 | 0.000 | 253 | 0.000 | 253 | 0.000
5.2 | 302 | 303 | 0.003 | 302 | 0.000 | 308 | 0.020 | N/R | N/R
5.3 | 226 | 226 | 0.000 | 226 | 0.000 | 228 | 0.009 | N/R | N/R
5.4 | 242 | 242 | 0.000 | 242 | 0.000 | 242 | 0.000 | N/R | N/R
5.5 | 211 | 211 | 0.000 | 211 | 0.000 | 211 | 0.000 | N/R | N/R
5.6 | 213 | 213 | 0.000 | 213 | 0.000 | 213 | 0.000 | N/R | N/R
5.7 | 293 | 293 | 0.000 | 293 | 0.000 | 293 | 0.000 | N/R | N/R
5.8 | 288 | 288 | 0.000 | 288 | 0.000 | 288 | 0.000 | N/R | N/R
5.9 | 279 | 279 | 0.000 | 279 | 0.000 | 279 | 0.000 | N/R | N/R
5.10 | 265 | 265 | 0.000 | 265 | 0.000 | 267 | 0.008 | N/R | N/R
6.1 | 138 | 138 | 0.000 | 140 | 0.014 | 140 | 0.014 | 138 | 0.000
6.2 | 146 | 146 | 0.000 | 146 | 0.000 | 146 | 0.000 | N/R | N/R
6.3 | 145 | 145 | 0.000 | 145 | 0.000 | 147 | 0.014 | N/R | N/R
6.4 | 131 | 131 | 0.000 | 131 | 0.000 | 131 | 0.000 | N/R | N/R
6.5 | 161 | 161 | 0.000 | 162 | 0.006 | 163 | 0.012 | N/R | N/R
A.1 | 253 | 254 | 0.004 | 254 | 0.004 | 254 | 0.004 | 254 | 0.004
A.2 | 252 | 254 | 0.008 | 256 | 0.016 | 257 | 0.020 | N/R | N/R
A.3 | 232 | 232 | 0.000 | 233 | 0.004 | 235 | 0.013 | N/R | N/R
A.4 | 234 | 235 | 0.004 | 236 | 0.009 | 235 | 0.004 | N/R | N/R
A.5 | 236 | 236 | 0.000 | 236 | 0.000 | 236 | 0.000 | N/R | N/R
AVG | | | 0.001 | | 0.002 | | 0.005 | | 0.002
Table 21. Best values reached by improved bio-inspired algorithms with ML that solve the set covering problem, instances B, C, D, NRE, NRF, NRG, and NRH.
Instance | BKS | SACSDBSCAN | | CSADBSCAN | | CSAKmean | | KMTA |
 | | Best | RDP | Best | RDP | Best | RDP | Best | RDP
B.1 | 69 | 73 | 0.058 | 69 | 0.000 | 74 | 0.072 | 69 | 0.000
B.2 | 76 | 76 | 0.000 | 81 | 0.066 | 83 | 0.092 | N/R | N/R
B.3 | 80 | 80 | 0.000 | 82 | 0.025 | 84 | 0.050 | N/R | N/R
B.4 | 79 | 79 | 0.000 | 83 | 0.051 | 84 | 0.063 | N/R | N/R
B.5 | 72 | 72 | 0.000 | 78 | 0.083 | 72 | 0.000 | N/R | N/R
C.1 | 227 | 227 | 0.000 | 233 | 0.026 | 228 | 0.004 | 229 | 0.009
C.2 | 219 | 219 | 0.000 | 226 | 0.032 | 226 | 0.032 | N/R | N/R
C.3 | 243 | 243 | 0.000 | 253 | 0.041 | 254 | 0.045 | N/R | N/R
C.4 | 219 | 219 | 0.000 | 224 | 0.023 | 225 | 0.027 | N/R | N/R
C.5 | 215 | 215 | 0.000 | 222 | 0.033 | 215 | 0.000 | N/R | N/R
D.1 | 60 | 61 | 0.017 | 68 | 0.133 | 66 | 0.100 | 60 | 0.000
D.2 | 66 | 70 | 0.061 | 73 | 0.106 | 71 | 0.076 | N/R | N/R
D.3 | 72 | 76 | 0.056 | 82 | 0.139 | 82 | 0.139 | N/R | N/R
D.4 | 62 | 63 | 0.016 | 70 | 0.129 | 67 | 0.081 | N/R | N/R
D.5 | 61 | 63 | 0.033 | 72 | 0.180 | 66 | 0.082 | N/R | N/R
NRE.1 | 29 | 29 | 0.000 | 70 | 1.414 | 30 | 0.034 | 29 | 0.000
NRE.2 | 30 | 31 | 0.033 | 83 | 1.767 | 34 | 0.133 | N/R | N/R
NRE.3 | 27 | 28 | 0.037 | 74 | 1.741 | 34 | 0.259 | N/R | N/R
NRE.4 | 28 | 30 | 0.071 | 83 | 1.964 | 33 | 0.179 | N/R | N/R
NRE.5 | 28 | 29 | 0.036 | 92 | 2.286 | 30 | 0.071 | N/R | N/R
NRF.1 | 14 | 15 | 0.071 | 372 | 25.571 | 17 | 0.214 | 14 | 0.000
NRF.2 | 15 | 15 | 0.000 | 74 | 3.933 | 18 | 0.200 | N/R | N/R
NRF.3 | 14 | 16 | 0.143 | 335 | 22.929 | 19 | 0.357 | N/R | N/R
NRF.4 | 14 | 15 | 0.071 | 52 | 2.714 | 18 | 0.286 | N/R | N/R
NRF.5 | 15 | 15 | 0.000 | 65 | 3.333 | 16 | 0.067 | N/R | N/R
NRG.1 | 176 | 188 | 0.068 | 287 | 0.631 | 197 | 0.119 | 176 | 0.000
NRG.2 | 154 | 160 | 0.039 | 208 | 0.351 | 168 | 0.091 | N/R | N/R
NRG.3 | 166 | 180 | 0.084 | 211 | 0.271 | 183 | 0.102 | N/R | N/R
NRG.4 | 168 | 177 | 0.054 | 250 | 0.488 | 186 | 0.107 | N/R | N/R
NRG.5 | 168 | 181 | 0.077 | 230 | 0.369 | 183 | 0.089 | N/R | N/R
NRH.1 | 63 | 69 | 0.095 | N/R | N/R | 71 | 0.127 | 64 | 0.016
NRH.2 | 63 | 66 | 0.048 | N/R | N/R | 71 | 0.127 | N/R | N/R
NRH.3 | 59 | 66 | 0.119 | N/R | N/R | 69 | 0.169 | N/R | N/R
NRH.4 | 63 | 63 | 0.000 | N/R | N/R | 68 | 0.079 | N/R | N/R
NRH.5 | 55 | 59 | 0.073 | N/R | N/R | 61 | 0.109 | N/R | N/R
AVG | | | 0.021 | | 1.162 | | 0.059 | | 0.003
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
