Nature-Inspired Algorithms in Machine Learning

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 37646

Special Issue Editors


Guest Editor
Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
Interests: computational intelligence; data mining; metaheuristics; dimensionality reduction; unsupervised learning

Guest Editor
Systems Research Institute, Polish Academy of Sciences, 01-447 Warsaw, Poland
Interests: data mining; artificial intelligence; computational intelligence; neural networks; metaheuristics; supervised learning

Special Issue Information

Dear Colleagues,

We cordially invite you to submit your papers to the Special Issue “Nature-Inspired Algorithms in Machine Learning” of Algorithms, an established MDPI journal indexed, among others, in the Clarivate Web of Science and Scopus.

Machine learning algorithms are currently omnipresent in a variety of practical solutions, ranging from space engineering to e-commerce. Apart from standard statistical approaches, nature-inspired algorithms are also frequently used in this area, owing to the complexity of data exploration tasks and to the possibility of including additional factors in the scheme of a nature-inspired algorithm.

Our Special Issue will accept a broad range of new advances in the field of nature-inspired machine learning algorithms. We invite contributions describing new techniques, novel evaluation criteria, and interesting case studies, as well as papers dealing with specific variants of existing algorithms and the challenges of Big Data. A limited number of state-of-the-art reviews will also be considered for publication.

Please feel free to contribute, and do not hesitate to contact us with any questions or concerns.

Dr. Szymon Łukasik
Prof. Piotr A. Kowalski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Data science/data mining
  • Clustering
  • Classification
  • Outlier detection
  • Dimensionality reduction
  • Unsupervised learning
  • Supervised learning
  • Nature-inspired algorithms
  • Metaheuristics

Published Papers (12 papers)

Research

10 pages, 295 KiB  
Article
Using Deep-Learned Vector Representations for Page Stream Segmentation by Agglomerative Clustering
by Lukas Busch, Ruben van Heusden and Maarten Marx
Algorithms 2023, 16(5), 259; https://doi.org/10.3390/a16050259 - 18 May 2023
Cited by 1 | Viewed by 994
Abstract
Page stream segmentation (PSS) is the task of retrieving the boundaries that separate source documents given a consecutive stream of documents (for example, sequentially scanned PDF files). The task has recently gained more interest as a result of the digitization efforts of various companies and organizations, as they move towards having all their documents available online for improved searchability and accessibility for users. The current state-of-the-art approach is neural start of document page classification on representations of the text and/or images of pages using models such as Visual Geometry Group-16 (VGG-16) and BERT to classify individual pages. We view the task of PSS as a clustering task instead, hypothesizing that pages from one document are similar to each other and different to pages in other documents, something that is difficult to incorporate in the current approaches. We compare the segmentation performance of an agglomerative clustering method with a binary classification model based on images on a new publicly available dataset and experiment with using either pretrained or finetuned image vectors as inputs to the model. To adapt the clustering method to PSS, we propose the switch method to alleviate the effects of pages of the same class having a high similarity, and report an improvement in the scores using this method. Unfortunately, neither clustering with pretrained embeddings nor clustering with finetuned embeddings outperformed start of document page classification for PSS. However, clustering with either pretrained or finetuned representations is substantially more effective than the baseline, with finetuned embeddings outperforming pretrained embeddings. Finally, having the number of documents K as part of the input, in our use case a realistic assumption, has a surprisingly significant positive effect. In contrast to earlier papers, we evaluate PSS with the overlap weighted partial match F1 score, developed as a Panoptic Quality in the computer vision domain, a metric that is particularly well-suited to PSS as it can be used to measure document segmentation. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
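As a rough illustration of the clustering view of PSS (this is not the authors' pipeline; the page embeddings are assumed to come from some pretrained image or text encoder), the sketch below groups a stream of page vectors with scikit-learn's agglomerative clustering, using the known number of documents K, and reads document boundaries off the positions where consecutive page labels change:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def segment_page_stream(page_vectors, n_documents):
    """Cluster page embeddings and return the indices of pages that start a new document.

    page_vectors: (n_pages, dim) array of page embeddings (assumed to be given).
    n_documents:  number of source documents K, assumed known.
    """
    labels = AgglomerativeClustering(n_clusters=n_documents).fit_predict(page_vectors)
    # A boundary is any position whose cluster label differs from the previous page's.
    return [0] + [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]

# Toy usage: three "documents" of four pages each, embedded around different centroids.
rng = np.random.default_rng(0)
pages = np.vstack([rng.normal(loc=c, scale=0.1, size=(4, 16)) for c in (0.0, 1.0, 2.0)])
print(segment_page_stream(pages, n_documents=3))   # expected: [0, 4, 8]
```

Plain agglomerative clustering does not enforce page contiguity, so on realistic data the label changes can over-count boundaries; the toy example only works because the page vectors are well separated.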

18 pages, 474 KiB  
Article
Consensus Big Data Clustering for Bayesian Mixture Models
by Christos Karras, Aristeidis Karras, Konstantinos C. Giotopoulos, Markos Avlonitis and Spyros Sioutas
Algorithms 2023, 16(5), 245; https://doi.org/10.3390/a16050245 - 09 May 2023
Cited by 4 | Viewed by 1754
Abstract
In the context of big-data analysis, the clustering technique holds significant importance for the effective categorization and organization of extensive datasets. However, pinpointing the ideal number of clusters and handling high-dimensional data can be challenging. To tackle these issues, several strategies have been suggested, such as a consensus clustering ensemble that yields more significant outcomes compared to individual models. Another valuable technique for cluster analysis is Bayesian mixture modelling, which is known for its adaptability in determining cluster numbers. Traditional inference methods such as Markov chain Monte Carlo may be computationally demanding and limit the exploration of the posterior distribution. In this work, we introduce an innovative approach that combines consensus clustering and Bayesian mixture models to improve big-data management and simplify the process of identifying the optimal number of clusters in diverse real-world scenarios. By addressing the aforementioned hurdles and boosting accuracy and efficiency, our method considerably enhances cluster analysis. This fusion of techniques offers a powerful tool for managing and examining large and intricate datasets, with possible applications across various industries. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
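The consensus idea can be sketched in a few lines (an illustration only, assuming scikit-learn and SciPy, not the inference procedure developed in the paper): several Bayesian Gaussian mixtures are fitted on subsamples, their co-assignments are accumulated into a co-association matrix, and the consensus partition is obtained by hierarchically clustering that matrix.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_bayesian_clustering(X, n_runs=20, max_components=10, n_final=3, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    coassoc = np.zeros((n, n))
    for r in range(n_runs):
        idx = rng.choice(n, size=int(0.8 * n), replace=False)      # subsample for this run
        bgm = BayesianGaussianMixture(n_components=max_components, random_state=r).fit(X[idx])
        labels = bgm.predict(X)                                     # label every point with this run's model
        coassoc += (labels[:, None] == labels[None, :])             # count co-assignments
    coassoc /= n_runs
    # Consensus partition: average-linkage clustering of the co-association distances.
    dist = 1.0 - coassoc
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_final, criterion="maxclust")

# Toy usage on three Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(40, 2)) for m in (0, 3, 6)])
print(np.bincount(consensus_bayesian_clustering(X)))
```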

25 pages, 2393 KiB  
Article
Model of Lexico-Semantic Bonds between Texts for Creating Their Similarity Metrics and Developing Statistical Clustering Algorithm
by Liliya Demidova, Dmitry Zhukov, Elena Andrianova and Vladimir Kalinin
Algorithms 2023, 16(4), 198; https://doi.org/10.3390/a16040198 - 05 Apr 2023
Viewed by 1379
Abstract
To solve the problem of text clustering according to semantic groups, we suggest using a model of a unified lexico-semantic bond between texts and a similarity matrix based on it. Using lexico-semantic analysis methods, we can create “term–document” matrices based both on the occurrence frequencies of words and n-grams and the determination of the degrees of nodes in their semantic network, followed by calculating the cosine metrics of text similarity. In the process of the construction of the text similarity matrix using lexical or semantic analysis methods, the cosine of the angle for a vector pair describing such texts will determine the degree of similarity in the lexical or semantic presentation, respectively. Based on the averaging procedure described in this paper, we can obtain a matrix of cosine metric values that describes the lexico-semantic bonds between texts. We propose an algorithm for solving text clustering problems. This algorithm allows one to use the statistical characteristics of the distribution functions of element values in the rows of the cosine metric value matrix in the model of the lexico-semantic bond between documents. In addition, this algorithm allows one to separately describe the matrix of the cosine metric values obtained separately based on the lexical or semantic properties of texts. Our research has shown that the developed model for the lexico-semantic presentation of texts allows one to slightly increase the accuracy of their subsequent clustering. The statistical text clustering algorithm based on this model shows excellent results that are comparable to those of the widely used affinity propagation algorithm. Additionally, our algorithm does not require specification of the degree of similarity for combining vectors into a common cluster and other configuration parameters. The suggested model and algorithm significantly expand the list of known approaches for determining text similarity metrics and their clustering. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
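The averaging of similarity matrices can be illustrated with a small scikit-learn sketch. Here TF-IDF word n-grams stand in for the lexical view and character n-grams for the semantic view (the paper instead derives the latter from node degrees in a semantic network), and the averaged matrix is clustered with affinity propagation rather than the statistical algorithm proposed by the authors:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AffinityPropagation

texts = [
    "the cat sat on the mat",
    "a cat was sitting on a mat",
    "stock markets fell sharply today",
    "shares dropped as markets fell",
]

# Lexical view: word unigrams and bigrams.
word_sim = cosine_similarity(TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts))
# Stand-in "semantic" view: character n-grams (only a placeholder for the paper's
# semantic-network-based term weights).
char_sim = cosine_similarity(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4)).fit_transform(texts))

# Averaged lexico-semantic similarity matrix.
combined = 0.5 * (word_sim + char_sim)

# Cluster directly on the precomputed similarities.
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(combined)
print(labels)   # texts 0-1 and 2-3 should typically land in separate clusters
```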

17 pages, 820 KiB  
Article
Learning Distributed Representations and Deep Embedded Clustering of Texts
by Shuang Wang, Amin Beheshti, Yufei Wang, Jianchao Lu, Quan Z. Sheng, Stephen Elbourn and Hamid Alinejad-Rokny
Algorithms 2023, 16(3), 158; https://doi.org/10.3390/a16030158 - 13 Mar 2023
Cited by 1 | Viewed by 1660
Abstract
Instructors face significant time and effort constraints when grading students’ assessments on a large scale. Clustering similar assessments is a unique and effective technique that has the potential to significantly reduce the workload of instructors in online and large-scale learning environments. By grouping together similar assessments, marking one assessment in a cluster can be scaled to other similar assessments, allowing for a more efficient and streamlined grading process. To address this issue, this paper focuses on text assessments and proposes a method for reducing the workload of instructors by clustering similar assessments. The proposed method involves the use of distributed representation to transform texts into vectors, and contrastive learning to improve the representation that distinguishes the differences among similar texts. The paper presents a general framework for clustering similar texts that includes label representation, K-means, and self-organization map algorithms, with the objective of improving clustering performance using Accuracy (ACC) and Normalized Mutual Information (NMI) metrics. The proposed framework is evaluated experimentally using two real datasets. The results show that self-organization maps and K-means algorithms with Pre-trained language models outperform label representation algorithms for different datasets. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
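A minimal baseline in the spirit of this pipeline, without the contrastive fine-tuning or the self-organizing-map variant, might look as follows; the sentence-transformers package and the all-MiniLM-L6-v2 encoder are assumptions for illustration, not choices made in the paper:

```python
from sentence_transformers import SentenceTransformer   # assumed available
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

texts = ["binary search tree insertion", "balanced tree rotations",
         "gradient descent learning rate", "stochastic gradient updates"]
true_labels = [0, 0, 1, 1]        # hypothetical ground-truth assessment groups

model = SentenceTransformer("all-MiniLM-L6-v2")   # pretrained text encoder (assumption)
embeddings = model.encode(texts)                  # (n_texts, dim) dense vectors

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print("NMI:", normalized_mutual_info_score(true_labels, pred))
```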

23 pages, 813 KiB  
Article
Agglomerative Clustering with Threshold Optimization via Extreme Value Theory
by Chunchun Li, Manuel Günther, Akshay Raj Dhamija, Steve Cruz, Mohsen Jafarzadeh, Touqeer Ahmad and Terrance E. Boult
Algorithms 2022, 15(5), 170; https://doi.org/10.3390/a15050170 - 20 May 2022
Cited by 3 | Viewed by 3160
Abstract
Clustering is a critical part of many tasks and, in most applications, the number of clusters in the data is unknown and must be estimated. This paper presents an Extreme Value Theory-based approach to threshold selection for clustering, proving that the “correct” linkage distances must follow a Weibull distribution for smooth feature spaces. Deep networks and their associated deep features have transformed many aspects of learning, and this paper shows they are consistent with our extreme-linkage theory and provide Unreasonable Clusterability. We show how our novel threshold selection can be applied to both classic agglomerative clustering and the more recent FINCH (First Integer Neighbor Clustering Hierarchy) algorithm. Our evaluation utilizes over a dozen different large-scale vision datasets/subsets, including multiple face-clustering datasets and ImageNet for both in-domain and, more importantly, out-of-domain object clustering. Across multiple deep-feature clustering tasks with very different characteristics, our novel automated threshold selection performs well, often outperforming state-of-the-art clustering techniques even when they select parameters on the test set. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
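The flavour of EVT-driven threshold selection can be conveyed with SciPy (a heuristic sketch, not the authors' provable procedure): fit a Weibull distribution to the smallest pairwise distances and cut the dendrogram at a high quantile of that fit.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from scipy.stats import weibull_min

def evt_threshold_clustering(X, tail_fraction=0.1, quantile=0.95):
    """Cut an agglomerative dendrogram at a threshold suggested by a Weibull fit
    to the smallest pairwise distances (illustrative EVT-style heuristic)."""
    d = pdist(X)
    tail = np.sort(d)[: max(10, int(tail_fraction * len(d)))]   # smallest distances = "correct" linkages
    shape, loc, scale = weibull_min.fit(tail, floc=0.0)          # Weibull fit with location pinned at 0
    threshold = weibull_min.ppf(quantile, shape, loc=loc, scale=scale)
    Z = linkage(X, method="average")
    return fcluster(Z, t=threshold, criterion="distance")

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.2, size=(50, 8)) for m in (0, 2, 4)])
labels = evt_threshold_clustering(X)
print("estimated number of clusters:", len(np.unique(labels)))   # ideally 3 for well-separated blobs
```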

27 pages, 706 KiB  
Article
Binary Horse Optimization Algorithm for Feature Selection
by Dorin Moldovan
Algorithms 2022, 15(5), 156; https://doi.org/10.3390/a15050156 - 06 May 2022
Cited by 6 | Viewed by 3887
Abstract
The bio-inspired research field has evolved greatly in the last few years due to the large number of novel proposed algorithms and their applications. The sources of inspiration for these novel bio-inspired algorithms are various, ranging from the behavior of groups of animals to the properties of various plants. One problem is the lack of one bio-inspired algorithm which can produce the best global solution for all types of optimization problems. The presented solution considers the proposal of a novel approach for feature selection in classification problems, which is based on a binary version of a novel bio-inspired algorithm. The principal contributions of this article are: (1) the presentation of the main steps of the original Horse Optimization Algorithm (HOA), (2) the adaptation of the HOA to a binary version called the Binary Horse Optimization Algorithm (BHOA), (3) the application of the BHOA in feature selection using nine state-of-the-art datasets from the UCI machine learning repository and the classifiers Random Forest (RF), Support Vector Machines (SVM), Gradient Boosted Trees (GBT), Logistic Regression (LR), K-Nearest Neighbors (K-NN), and Naïve Bayes (NB), and (4) the comparison of the results with the ones obtained using the Binary Grey Wolf Optimizer (BGWO), Binary Particle Swarm Optimization (BPSO), and Binary Crow Search Algorithm (BCSA). The experiments show that the BHOA is effective and robust, as it returned the best mean accuracy value and the best accuracy value for four and seven datasets, respectively, compared to BGWO, BPSO, and BCSA, which returned the best mean accuracy value for four, two, and two datasets, respectively, and the best accuracy value for eight, seven, and five datasets, respectively. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
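The wrapper setting can be sketched generically. The loop below uses plain random bit flips as a stand-in for the BHOA's herd-inspired update rules (which are defined in the paper and not reproduced here); candidate solutions are bit masks over features, and the fitness of a mask is the cross-validated accuracy of a classifier trained on the selected columns.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if mask.sum() == 0:                       # empty masks are invalid
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Population of random bit masks; the BHOA would update these with its herd-inspired
# operators, whereas here we only flip a few random bits per iteration.
pop = rng.integers(0, 2, size=(10, n_features))
scores = np.array([fitness(m) for m in pop])
for _ in range(15):
    i = rng.integers(len(pop))
    child = pop[i].copy()
    flip = rng.choice(n_features, size=3, replace=False)
    child[flip] ^= 1
    s = fitness(child)
    worst = scores.argmin()
    if s > scores[worst]:                     # replace the worst individual
        pop[worst], scores[worst] = child, s

best = pop[scores.argmax()]
print("best CV accuracy:", scores.max(), "| features kept:", int(best.sum()))
```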

35 pages, 3759 KiB  
Article
Partitioning of Transportation Networks by Efficient Evolutionary Clustering and Density Peaks
by Pamela Al Alam, Joseph Constantin, Ibtissam Constantin and Clelia Lopez
Algorithms 2022, 15(3), 76; https://doi.org/10.3390/a15030076 - 24 Feb 2022
Cited by 1 | Viewed by 2649
Abstract
Road traffic congestion has become a major problem in most countries because it affects sustainable mobility. Partitioning a transport network into homogeneous areas can be very useful for monitoring traffic as congestion is spatially correlated in adjacent roads, and it propagates at different speeds as a function of time. Spectral clustering has been successfully applied for the partitioning of transportation networks based on the spatial characteristics of congestion at a specific time. However, this type of classification is not suitable for data that change over time. Evolutionary spectral clustering represents a state-of-the-art algorithm for grouping objects evolving over time. However, the disadvantages of this algorithm are the cubic time complexity and the high memory demand, which make it insufficient to handle a large number of data sets. In this paper, we propose an efficient evolutionary spectral clustering algorithm that solves the drawbacks of evolutionary spectral clustering by reducing the size of the eigenvalue problem. This algorithm is applied in a dynamic environment to partition a transportation network into connected homogeneous regions that evolve with time. The number of clusters is selected automatically by using a density peak algorithm adopted for the classification of traffic congestion based on the sparse snake similarity matrix. Experiments on the real network of Amsterdam city demonstrate the superiority of the proposed algorithm in robustness and effectiveness. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
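The temporal-smoothness idea behind evolutionary spectral clustering can be illustrated with scikit-learn (a toy sketch, not the reduced-eigenproblem algorithm or the snake similarity matrix of the paper): the similarity matrix of the current traffic snapshot is blended with the previous one before spectral partitioning, so the regions evolve smoothly over time.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def evolving_partition(similarity_snapshots, n_regions=3, alpha=0.7, seed=0):
    """Partition each snapshot of a road-similarity matrix, blending in the previous
    snapshot (alpha = weight of the current one) to keep the regions temporally smooth."""
    labels_per_step, blended = [], None
    for S in similarity_snapshots:
        blended = S if blended is None else alpha * S + (1 - alpha) * blended
        sc = SpectralClustering(n_clusters=n_regions, affinity="precomputed", random_state=seed)
        labels_per_step.append(sc.fit_predict(blended))
    return labels_per_step

# Toy usage: two snapshots of a 9-road network with three congested regions.
rng = np.random.default_rng(0)
S = 0.9 * np.kron(np.eye(3), np.ones((3, 3))) + 0.05 * rng.random((9, 9))
S = (S + S.T) / 2                      # keep the affinity matrix symmetric and non-negative
print(evolving_partition([S, S], n_regions=3)[-1])
```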

22 pages, 24821 KiB  
Article
Test and Validation of the Surrogate-Based, Multi-Objective GOMORS Algorithm against the NSGA-II Algorithm in Structural Shape Optimization
by Yannis Werner, Tim van Hout, Vijey Subramani Raja Gopalan and Thomas Vietor
Algorithms 2022, 15(2), 46; https://doi.org/10.3390/a15020046 - 28 Jan 2022
Viewed by 2955
Abstract
Nowadays, product development times are constantly decreasing, while the requirements for the products themselves increased significantly in the last decade. Hence, manufacturers use Computer-Aided Design (CAD) and Finite-Element (FE) Methods to develop better products in shorter times. Shape optimization offers great potential to improve many high-fidelity, numerical problems such as the crash performance of cars. Still, the proper selection of optimization algorithms provides a great potential to increase the speed of the optimization time. This article reviews the optimization performance of two different algorithms and frameworks for the structural behavior of a b-pillar. A b-pillar is the structural component between a car’s front and rear door, loaded under static and crash requirements. Furthermore, the validation of the algorithm includes a feasibility constraint. Recently, an optimization routine was implemented and validated for a Non-dominated Sorting Genetic Algorithm (NSGA-II) implementation. Different multi-objective optimization algorithms are reviewed and methodically ranked in a comparative study by given criteria. In this case, the Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) framework is chosen and implemented into the existing Institut für Konstruktionstechnik Optimizes Shapes (IKOS) framework. Specifically, the article compares the NSGA-II and GOMORS directly for a linear, non-linear, and feasibility optimization scenario. The results show that the GOMORS outperforms the NSGA-II vastly regarding the number of function calls and Pareto-efficient results without the feasibility constraint. The problem is reformulated to an unconstrained, three-objective optimization problem to analyze the influence of the constraint. The constrained and unconstrained approaches show equal performance for the given scenarios. Accordingly, the authors provide a clear recommendation towards the surrogate-based GOMORS for costly and multi-objective evaluations. Furthermore, the algorithm can handle the feasibility constraint properly when formulated as an objective function and as a constraint. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
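For orientation, the NSGA-II side of such a comparison can be set up in a few lines with the pymoo library (an assumed dependency; the surrogate-based GOMORS framework and the b-pillar finite-element models are of course not reproduced here), using a standard bi-objective benchmark in place of the structural objectives:

```python
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.optimize import minimize

# Stand-in bi-objective problem; in the paper the objectives come from FE simulations.
problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=40)
res = minimize(problem, algorithm, ("n_gen", 100), seed=1, verbose=False)

print("Pareto-front approximation size:", len(res.F))
print("first objective vector:", res.F[0])
```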

10 pages, 317 KiB  
Article
Clustering with Nature-Inspired Algorithm Based on Territorial Behavior of Predatory Animals
by Maciej Trzciński, Piotr A. Kowalski and Szymon Łukasik
Algorithms 2022, 15(2), 43; https://doi.org/10.3390/a15020043 - 28 Jan 2022
Cited by 1 | Viewed by 2434
Abstract
Clustering constitutes a well-known problem of division of unlabelled dataset into disjoint groups of data elements. It can be tackled with standard statistical methods but also with metaheuristics, which offer more flexibility and decent performance. The paper studies the application of the clustering algorithm—inspired by the territorial behaviors of predatory animals—named the Predatory Animals Algorithm (or, in short: PAA). Besides the description of the PAA, the results of its experimental evaluation, with regards to the classic k-means algorithm, are provided. It is concluded that the application of newly-created nature-inspired technique brings very promising outcomes. The discussion of obtained results is followed by areas of possible improvements and plans for further research. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
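How a nature-inspired optimizer can drive centroid-based clustering is easy to sketch. The snippet below uses SciPy's differential evolution purely as a stand-in for the PAA: the decision vector holds the flattened centroid coordinates and the fitness is the within-cluster sum of squared distances.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(60, 2)) for m in (0.0, 3.0)])
k, dim = 2, X.shape[1]

def within_cluster_sse(flat_centroids):
    centroids = flat_centroids.reshape(k, dim)
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)   # squared distances
    return d2.min(axis=1).sum()                                       # assign each point to its nearest centroid

bounds = [(X.min(), X.max())] * (k * dim)          # search box for every centroid coordinate
result = differential_evolution(within_cluster_sse, bounds, seed=0, maxiter=200)
centroids = result.x.reshape(k, dim)
labels = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
print("centroids:\n", centroids.round(2), "\ncluster sizes:", np.bincount(labels))
```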

20 pages, 491 KiB  
Article
An Empirical Study of Cluster-Based MOEA/D Bare Bones PSO for Data Clustering
by Daphne Teck Ching Lai and Yuji Sato
Algorithms 2021, 14(11), 338; https://doi.org/10.3390/a14110338 - 22 Nov 2021
Cited by 3 | Viewed by 2763
Abstract
Previously, cluster-based multi- or many-objective function techniques were proposed to reduce the Pareto set. Recently, researchers proposed such techniques to find better solutions in the objective space to solve engineering problems. In this work, we applied a cluster-based approach for solution selection in a multiobjective evolutionary algorithm based on decomposition with bare bones particle swarm optimization for data clustering and investigated its clustering performance. In our previous work, we found that MOEA/D with BBPSO performed the best on 10 datasets. Here, we extend this work by applying a cluster-based approach tested on 13 UCI datasets. We compared with six multiobjective evolutionary clustering algorithms from the existing literature and ten from our previous work. The proposed technique was found to perform well on datasets with highly overlapping clusters, such as CMC and Sonar. So far, we found only one work that used cluster-based MOEA for clustering data, the hierarchical topology multiobjective clustering algorithm. All other cluster-based MOEAs found were used to solve other problems that are not data clustering problems. By clustering Pareto solutions and evaluating new candidates against the found cluster representatives, local search is introduced in the solution selection process within the objective space, which can be effective on datasets with highly overlapping clusters. This is an added layer of search control in the objective space. The results are found to be promising, prompting different areas of future research which are discussed, including the study of its effects with an increasing number of clusters as well as with other objective functions. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
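The cluster-based selection step itself is simple to illustrate in isolation from the MOEA/D-BBPSO machinery (a sketch under that simplification): Pareto solutions are clustered in objective space with K-means, and the member of each cluster closest to its centroid is kept as the representative against which new candidates can be compared.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_representatives(objective_vectors, n_clusters=4):
    """Cluster Pareto solutions in objective space and return one representative
    (the member nearest to its cluster centroid) per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(objective_vectors)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(objective_vectors[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[dists.argmin()])
    return np.array(reps)

# Toy usage: points scattered along a two-objective trade-off curve.
t = np.linspace(0, 1, 50)
F = np.column_stack([t, 1 - np.sqrt(t)]) + 0.01 * np.random.default_rng(0).random((50, 2))
print("representative solution indices:", cluster_representatives(F))
```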

21 pages, 10092 KiB  
Article
Bio-Inspired Algorithms and Its Applications for Optimization in Fuzzy Clustering
by Fevrier Valdez, Oscar Castillo and Patricia Melin
Algorithms 2021, 14(4), 122; https://doi.org/10.3390/a14040122 - 12 Apr 2021
Cited by 28 | Viewed by 5220
Abstract
In recent years, new metaheuristic algorithms have been developed taking as reference the inspiration on biological and natural phenomena. This nature-inspired approach for algorithm development has been widely used by many researchers in solving optimization problems. These algorithms have been compared with the traditional ones and have demonstrated to be superior in many complex problems. This paper attempts to describe the algorithms based on nature, which are used in optimizing fuzzy clustering in real-world applications. We briefly describe the optimization methods, the most cited ones, nature-inspired algorithms that have been published in recent years, authors, networks and relationship of the works, etc. We believe the paper can serve as a basis for analysis of the new area of nature and bio-inspired optimization of fuzzy clustering. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
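As background for the survey, the fuzzy c-means updates that such bio-inspired methods typically optimize around (for example by tuning the centroids, the number of clusters c, or the fuzzifier m) can be written compactly in NumPy; a minimal alternating-update implementation is sketched below.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate membership and centroid updates.
    Bio-inspired optimizers are often used instead of (or around) this loop,
    e.g. to escape local minima or to choose c and m."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.2, size=(50, 2)) for mu in (0.0, 2.0, 4.0)])
centroids, U = fuzzy_c_means(X, c=3)
print(np.round(centroids, 2))
```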

32 pages, 652 KiB  
Article
Nature-Inspired Optimization Algorithms for Text Document Clustering—A Comprehensive Analysis
by Laith Abualigah, Amir H. Gandomi, Mohamed Abd Elaziz, Abdelazim G. Hussien, Ahmad M. Khasawneh, Mohammad Alshinwan and Essam H. Houssein
Algorithms 2020, 13(12), 345; https://doi.org/10.3390/a13120345 - 18 Dec 2020
Cited by 66 | Viewed by 5725
Abstract
Text clustering is one of the efficient unsupervised learning techniques used to partition a huge number of text documents into a subset of clusters. Each cluster contains similar documents, while different clusters contain dissimilar text documents. Nature-inspired optimization algorithms have been successfully used to solve various optimization problems, including text document clustering problems. In this paper, a comprehensive review is presented to show the most related nature-inspired algorithms that have been used in solving the text clustering problem. Moreover, comprehensive experiments are conducted and analyzed to show the performance of the common well-known nature-inspired optimization algorithms in solving text document clustering problems, including the Harmony Search (HS) Algorithm, Genetic Algorithm (GA), Particle Swarm Optimization (PSO) Algorithm, Ant Colony Optimization (ACO), Krill Herd Algorithm (KHA), Cuckoo Search (CS) Algorithm, Gray Wolf Optimizer (GWO), and Bat-inspired Algorithm (BA). Seven text benchmark datasets are used to validate the performance of the tested algorithms. The results showed that the performance of the well-known nature-inspired optimization algorithms is almost the same, with slight differences. For improvement purposes, new modified versions of the tested algorithms can be proposed and tested to tackle the text clustering problems. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
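To make the experimental setting concrete, a harmony-search-style centroid search over TF-IDF vectors is sketched below. It is a generic illustration of how such metaheuristics are applied to text clustering, not a reimplementation of any of the eight algorithms tested in the paper; the fitness is the summed cosine similarity of each document to its nearest centroid.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["machine learning with neural networks", "deep neural network training",
        "stock market price prediction", "financial market trading strategies"]
X = TfidfVectorizer().fit_transform(docs).toarray()    # rows are L2-normalised TF-IDF vectors
k, dim = 2, X.shape[1]
rng = np.random.default_rng(0)

def fitness(flat):                       # higher is better: documents close to their nearest centroid
    C = flat.reshape(k, dim)
    C = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)
    return (X @ C.T).max(axis=1).sum()   # cosine similarity to the nearest centroid

# Harmony-search-style loop: keep a memory of solutions and build new ones from it.
hm_size, hmcr, par, bw = 10, 0.9, 0.3, 0.05
memory = rng.random((hm_size, k * dim))
scores = np.array([fitness(h) for h in memory])
for _ in range(300):
    new = np.where(rng.random(k * dim) < hmcr,
                   memory[rng.integers(hm_size, size=k * dim), np.arange(k * dim)],  # memory consideration
                   rng.random(k * dim))                                              # random selection
    adjust = rng.random(k * dim) < par
    new = np.clip(new + adjust * rng.uniform(-bw, bw, k * dim), 0, None)             # pitch adjustment
    s = fitness(new)
    worst = scores.argmin()
    if s > scores[worst]:
        memory[worst], scores[worst] = new, s

best = memory[scores.argmax()].reshape(k, dim)
labels = (X @ (best / (np.linalg.norm(best, axis=1, keepdims=True) + 1e-12)).T).argmax(axis=1)
print("cluster assignment:", labels)
```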