
Swarm Intelligence Optimization: Algorithms and Applications

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 9781

Special Issue Editors


Guest Editor
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
Interests: evolutionary computation; swarm intelligence; image processing; machine learning; wireless sensor networks

Guest Editor
Department of Information Technology, University of Management and Technology, Haiphong 180000, Vietnam
Interests: swarm intelligence; evolutionary computation; wireless sensor networks

Special Issue Information

Dear Colleagues,

Swarm intelligence (SI) optimization algorithms are an emerging class of techniques that simulate the collective behavior of biological communities evolving under the laws of nature, and they are characteristically simple and robust. An SI system is a group of simple agents that communicate with one another directly or indirectly; through cooperation, these simple agents can exhibit complex, intelligent collective behavior. The cooperative behaviors of bird flocks, ant colonies, bee swarms, fish schools, and other social organisms can be exploited to solve certain classes of problems, provide a basis for distributed problem solving, and achieve satisfactory results. Studying the characteristics of SI algorithms has practical significance, since it helps to remedy their deficiencies and improve their performance. SI algorithms have been successfully applied to real-world problems in many fields, including industry, finance, and the military.
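As a concrete illustration of the indirect cooperation described above, a minimal Particle Swarm Optimization loop can be sketched as follows. The parameter values (inertia 0.7, attraction coefficients 1.5) are common illustrative choices, not taken from any paper in this issue:

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal PSO sketch: each particle is pulled toward its own best
    position and the swarm's global best, so the group converges together."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function as a toy objective
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```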

We invite authors to contribute original research articles as well as review articles on recent advances in these diverse research areas. The topics of this Special Issue will include, but are not limited to:

  • Information-theory-based meta-heuristic optimization and applications;
  • Swarm intelligence algorithms, including Ant Colony Optimization, Particle Swarm Optimization, Cat Swarm Optimization, Artificial Bee Colony, and related optimization algorithms;
  • Physics-based algorithms such as the Gravitational Search Algorithm (GSA);
  • Nature-inspired meta-heuristic algorithms such as Evolutionary Algorithms and Genetic Algorithms;
  • Neighborhood search algorithms such as Simulated Annealing, Iterated Local Search, Variable Neighborhood Search, and Tabu Search;
  • New meta-heuristic approaches, frameworks, and operators;
  • Multi-objective optimization algorithms;
  • The parallelization of meta-heuristics;
  • Theoretical and empirical research on meta-heuristics;
  • High-impact applications of meta-heuristics;
  • Challenging problems and tasks, such as stochastic, multi-objective, or dynamic problems;
  • Automatic configuration of meta-heuristics;
  • Applications based on swarm intelligence;
  • The Taguchi method for swarm intelligence;
  • Entropy-based analysis of swarm intelligence.

Dr. Shu-Chuan Chu
Prof. Dr. Kuo-Hui Yeh
Dr. Trong-The Nguyen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • swarm intelligence
  • meta-heuristics optimization
  • evolutionary algorithm
  • Taguchi method
  • multi-objective optimization
  • tabu search approach

Published Papers (6 papers)


Research

22 pages, 537 KiB  
Article
CNN-HT: A Two-Stage Algorithm Selection Framework
by Siyi Xu, Wenwen Liu, Chengpei Wu and Junli Li
Entropy 2024, 26(3), 262; https://doi.org/10.3390/e26030262 - 14 Mar 2024
Viewed by 942
Abstract
The No Free Lunch Theorem tells us that no algorithm can beat all others on every type of problem. Algorithm selection frameworks are therefore proposed to select the most suitable algorithm from a set of candidates for an unknown optimization problem. This paper introduces an innovative algorithm selection approach called CNN-HT, a two-stage algorithm selection framework. In the first stage, a Convolutional Neural Network (CNN) is employed to classify problems. In the second stage, the Hypothesis Testing (HT) technique is used to recommend the best-performing algorithm, based on a statistical analysis of the performance metrics of the algorithms across the various problem categories. The two-stage approach can adapt to different algorithm combinations without retraining the entire model, since modifications can be confined to the second stage, which is an improvement over one-stage approaches. To make the classification model more general, we adopt Exploratory Landscape Analysis (ELA) features of the problem as input and utilize feature selection techniques to remove redundant features. In problem classification, the average accuracy of the CNN is 96%, demonstrating its advantage over Random Forests and Support Vector Machines. After feature selection, the accuracy increases to 98.8%, further improving classification performance while reducing computational cost. This demonstrates the effectiveness of the first stage of CNN-HT, which provides the basis for algorithm selection. In the experiments, CNN-HT also shows the advantages of its second stage, achieving better average rankings across different algorithm combinations than the individual algorithms and another algorithm-combination approach. Full article
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)
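The second stage described in the abstract recommends an algorithm per problem class from recorded performance data. A minimal sketch of that idea, with made-up algorithm names and error values, and a plain mean comparison standing in for the paper's hypothesis tests:

```python
import statistics

# Hypothetical per-class performance records: algorithm -> error values
# achieved on benchmark problems of one landscape class (lower is better).
records = {
    "PSO": [0.12, 0.10, 0.15, 0.11],
    "DE":  [0.05, 0.06, 0.04, 0.07],
    "GA":  [0.20, 0.18, 0.22, 0.19],
}

def recommend(records):
    """Stage-2 sketch: pick the algorithm with the best mean error on the
    class predicted by stage 1 (the paper uses statistical tests instead)."""
    return min(records, key=lambda a: statistics.mean(records[a]))

chosen = recommend(records)
```

Because this lookup is decoupled from the classifier, swapping in a new algorithm only requires adding its performance records, which is the adaptability the two-stage design aims for.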

20 pages, 2680 KiB  
Article
An Enhanced Neural Network Algorithm with Quasi-Oppositional-Based and Chaotic Sine-Cosine Learning Strategies
by Xuan Xiong, Shaobo Li and Fengbin Wu
Entropy 2023, 25(9), 1255; https://doi.org/10.3390/e25091255 - 24 Aug 2023
Viewed by 905
Abstract
Global optimization problems have been a research topic of great interest in various engineering applications, among which the neural network algorithm (NNA) is one of the most widely used methods. However, neural network algorithms inevitably plunge into poor local optima and slow convergence when tackling complex optimization problems. To overcome these problems, an improved neural network algorithm with quasi-oppositional-based and chaotic sine-cosine learning strategies is proposed, which speeds up convergence and avoids trapping in local optima. Firstly, quasi-oppositional-based learning facilitates the improved algorithm's exploration and exploitation of the search space. Meanwhile, a new logistic chaotic sine-cosine learning strategy, integrating logistic chaotic mapping with the sine-cosine strategy, enhances the ability to jump out of local optima. Moreover, a dynamic tuning factor based on piecewise linear chaotic mapping is utilized to adjust the exploration space and improve convergence performance. Finally, the validity and applicability of the proposed algorithm are evaluated on the challenging CEC 2017 functions and three engineering optimization problems. The comparative experimental results of averages, standard deviations, and Wilcoxon rank-sum tests reveal that the presented algorithm has excellent global optimality and convergence speed for most functions and engineering problems. Full article
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)
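Quasi-opposition-based learning is a known technique in this literature; one common formulation samples a point between the interval centre and the opposite point a + b - x. A sketch under that assumption (the paper's exact variant may differ):

```python
import random

def quasi_opposite(x, a, b, rng):
    """Quasi-opposition-based learning sketch: for x in [a, b], the opposite
    point is a + b - x; the quasi-opposite is drawn uniformly between the
    interval centre and that opposite point."""
    centre = (a + b) / 2.0
    opposite = a + b - x
    lo, hi = sorted((centre, opposite))
    return rng.uniform(lo, hi)

# Example: x = 2 in [0, 10] -> centre 5, opposite 8, sample in [5, 8]
rng = random.Random(1)
q = quasi_opposite(2.0, 0.0, 10.0, rng)
```

Evaluating both a candidate and its quasi-opposite roughly doubles the chance of starting near the optimum, which is why such strategies aid exploration.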

19 pages, 1129 KiB  
Article
A Many-Objective Evolutionary Algorithm Based on Dual Selection Strategy
by Cheng Peng, Cai Dai and Xingsi Xue
Entropy 2023, 25(7), 1015; https://doi.org/10.3390/e25071015 - 1 Jul 2023
Viewed by 1106
Abstract
In high-dimensional spaces, most multi-objective optimization algorithms encounter difficulties in solving many-objective optimization problems because they cannot balance convergence and diversity. As the number of objectives increases, non-dominated solutions become difficult to distinguish, making it challenging to assess diversity in the high-dimensional objective space. To reduce selection pressure and improve diversity, this article proposes a many-objective evolutionary algorithm based on a dual selection strategy (MaOEA/DS). First, a new distance function is designed as an effective distance metric. Then, based on this distance function, a point crowding-degree (PC) strategy is proposed to further enhance the algorithm's ability to distinguish superior solutions in the population. Finally, a dual selection strategy is proposed. In the first selection, the individuals with the best convergence are selected from the top few individuals with good diversity in the population, focusing on population convergence. In the second selection, the PC strategy is used to further select individuals with larger crowding-degree values, emphasizing population diversity. To extensively evaluate the performance of the algorithm, this paper compares the proposed algorithm with several state-of-the-art algorithms. The experimental results show that MaOEA/DS outperforms the comparison algorithms in overall performance, indicating the effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)
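Crowding-degree selection builds on the classic crowding-distance diversity measure. A standard NSGA-II-style computation (a baseline sketch, not the paper's refined PC strategy) can be written as:

```python
def crowding_distance(front):
    """Crowding distance over a list of objective vectors: for each objective,
    sort the front and accumulate the normalized gap between each solution's
    neighbors; boundary solutions get infinite distance so they are kept."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: all values equal
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (hi - lo)
    return dist

d = crowding_distance([(0, 4), (1, 2), (2, 1), (4, 0)])
```

Selecting individuals with larger values of this measure spreads the population along the front, which is the diversity goal of the second selection described above.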

20 pages, 3654 KiB  
Article
Automatic Analysis of Transverse Musculoskeletal Ultrasound Images Based on the Multi-Task Learning Model
by Linxueying Zhou, Shangkun Liu and Weimin Zheng
Entropy 2023, 25(4), 662; https://doi.org/10.3390/e25040662 - 14 Apr 2023
Cited by 6 | Viewed by 1382
Abstract
Musculoskeletal ultrasound imaging is an important basis for the early screening and accurate treatment of muscle disorders. It allows the observation of muscle status to screen for underlying neuromuscular diseases, including myasthenia gravis, myotonic dystrophy, and ankylosing muscular dystrophy. Due to the complex noise in skeletal muscle ultrasound images, analyzing them is a tedious and time-consuming process. Therefore, we propose a multi-task learning-based approach to automatically segment and initially diagnose transverse musculoskeletal ultrasound images. The method implements muscle cross-sectional area (CSA) segmentation and abnormal muscle classification by constructing a multi-task model based on multi-scale fusion and attention mechanisms (MMA-Net). The model exploits the correlation between tasks by sharing part of the shallow network and adding connections to exchange information in the deep network. A multi-scale feature fusion module and an attention mechanism were added to MMA-Net to increase the receptive field and enhance the feature extraction ability. Experiments were conducted on a total of 1827 medial gastrocnemius ultrasound images from multiple subjects. Ten percent of the samples were randomly selected for testing, 10% for validation, and the remaining 80% for training. The results show that the proposed network structure and the added modules are effective. Compared with advanced single-task models and existing analysis methods, our method performs better at both classification and segmentation. The mean Dice coefficient and IoU of muscle cross-sectional area segmentation were 96.74% and 94.10%, respectively. The accuracy and recall of abnormal muscle classification were 95.60% and 94.96%. The proposed method achieves convenient and accurate analysis of transverse musculoskeletal ultrasound images, which can assist physicians in diagnosing and treating muscle diseases from multiple perspectives. Full article
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)
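The 80/10/10 split described in the abstract can be sketched with a simple shuffle-and-slice helper (an illustrative implementation, not the authors' code; the seed and function name are made up):

```python
import random

def split(items, train=0.8, val=0.1, seed=0):
    """Randomly partition a dataset into train/validation/test subsets by
    shuffling once and slicing at the requested proportions."""
    items = items[:]                       # avoid mutating the caller's list
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(n * train), int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])       # remainder becomes the test set

# With 1827 samples, as in the paper, this yields 1461/182/184 items.
tr, va, te = split(list(range(1827)))
```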

25 pages, 1326 KiB  
Article
Binary Bamboo Forest Growth Optimization Algorithm for Feature Selection Problem
by Jeng-Shyang Pan, Longkang Yue, Shu-Chuan Chu, Pei Hu, Bin Yan and Hongmei Yang
Entropy 2023, 25(2), 314; https://doi.org/10.3390/e25020314 - 8 Feb 2023
Cited by 7 | Viewed by 2030
Abstract
Inspired by the bamboo growth process, Chu et al. proposed the Bamboo Forest Growth Optimization (BFGO) algorithm. It incorporates bamboo whip extension and bamboo shoot growth into the optimization process and can be applied very well to classical engineering problems. However, binary variables can only take the values 0 or 1, so for some binary optimization problems the standard BFGO is not applicable. This paper first proposes a binary version of BFGO, called BBFGO. By analyzing the search space of BFGO under binary conditions, new V-shaped and Taper-shaped transfer functions for converting continuous values into binary ones are proposed for the first time. A long-mutation strategy with a new mutation approach is presented to solve the problem of algorithmic stagnation. Binary BFGO and the long-mutation strategy are tested on 23 benchmark functions. The experimental results show that binary BFGO achieves better optimal values and convergence speed, and that the mutation strategy can significantly enhance the algorithm's performance. In terms of application, 12 data sets from the UCI machine learning repository are selected for feature selection and compared with the transfer functions used by BGWO-a, BPSO-TVMS, and BQUATRE, which demonstrates the binary BFGO algorithm's potential to explore the attribute space and choose the most significant features for classification problems. Full article
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)
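V-shaped transfer functions are the standard bridge from a continuous search space to a binary one: the magnitude of a velocity becomes a bit-flip probability. A sketch using the common |tanh(v)| form (the paper proposes new V- and Taper-shaped variants, which are not reproduced here):

```python
import math
import random

def v_transfer(v):
    """Common V-shaped transfer function |tanh(v)|: maps a continuous
    velocity to a probability in [0, 1)."""
    return abs(math.tanh(v))

def binarize(position, velocity, rng):
    """Flip each bit of the binary position with the probability given by
    the transfer function applied to the matching velocity component."""
    return [1 - b if rng.random() < v_transfer(v) else b
            for b, v in zip(position, velocity)]

rng = random.Random(0)
flipped = binarize([0, 1, 0], [100.0, 100.0, 100.0], rng)  # near-certain flips
```

Large velocity magnitudes thus almost always flip a bit, while velocities near zero leave the bit unchanged, preserving the exploration/exploitation balance of the continuous algorithm.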

14 pages, 596 KiB  
Article
A Parallel Multiobjective PSO Weighted Average Clustering Algorithm Based on Apache Spark
by Huidong Ling, Xinmu Zhu, Tao Zhu, Mingxing Nie, Zhenghai Liu and Zhenyu Liu
Entropy 2023, 25(2), 259; https://doi.org/10.3390/e25020259 - 31 Jan 2023
Cited by 2 | Viewed by 1451
Abstract
Multiobjective clustering algorithms using particle swarm optimization have been applied successfully in some applications. However, existing algorithms are implemented on a single machine and cannot be directly parallelized on a cluster, which makes it difficult for them to handle large-scale data. With the development of distributed parallel computing frameworks, data parallelism has been proposed; however, increasing the degree of parallelism leads to unbalanced data distributions that affect the clustering result. In this paper, we propose a parallel multiobjective PSO weighted average clustering algorithm based on Apache Spark (Spark-MOPSO-Avg). First, the entire data set is divided into multiple partitions and cached in memory using the distributed, memory-based computing of Apache Spark. The local fitness value of each particle is calculated in parallel from the data in its partition. After the calculation is completed, only particle information is transmitted; there is no need to transmit a large number of data objects between nodes, reducing network communication and thus effectively reducing the algorithm's running time. Second, a weighted average of the local fitness values is computed to mitigate the effect of unbalanced data distribution on the results. Experimental results show that the Spark-MOPSO-Avg algorithm achieves low information loss under data parallelism, losing about 1% to 9% accuracy, while effectively reducing the algorithm's time overhead. It shows good execution efficiency and parallel computing capability under a Spark distributed cluster. Full article
(This article belongs to the Special Issue Swarm Intelligence Optimization: Algorithms and Applications)
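The weighted-average step can be sketched as follows: each partition's local fitness is weighted by its record count, so a small partition cannot skew the global value. The data layout and values are illustrative, not from the paper:

```python
def weighted_fitness(partition_results):
    """Combine per-partition local fitness values into one global fitness,
    weighting each partition by the number of records it holds (a sketch of
    the weighted-average idea; a Spark job would gather these pairs from
    the partitions after the parallel evaluation step)."""
    total = sum(n for _, n in partition_results)
    return sum(f * n for f, n in partition_results) / total

# (local_fitness, partition_size) pairs for one particle
global_fit = weighted_fitness([(0.8, 100), (0.5, 300), (0.9, 100)])
```

An unweighted mean of the three local values would be 0.733, while the size-weighted value is 0.64; the difference is exactly the skew from the unbalanced 100/300/100 partitioning that the paper's weighting corrects.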
