Evolutionary Computation for Deep Learning and Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 20268

Special Issue Editors

School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China
Interests: deep learning; evolutionary computation; machine learning; computer vision

Guest Editor
School of Computer Science, China West Normal University, Nanchong 637001, China
Interests: pattern recognition

Guest Editor
NICE Research Group, Department of Computer Science, University of Surrey, Guildford GU2 7XH, UK
Interests: algorithmics; hybrid heuristic-exact optimisation; memetic computing; differential evolution; membrane computing

Special Issue Information

Dear Colleagues,

Evolutionary computation techniques have been widely used to address various challenging problems due to their powerful global search ability. Many complex optimization tasks arise in the fields of deep learning and machine learning, such as neural architecture search, hyper-parameter search, feature selection, and feature construction. This Special Issue aims to collect original papers that develop new evolutionary computation techniques to address any kind of deep learning or machine learning task. For all the aforementioned topics, we kindly invite the scientific community to contribute to this Special Issue by submitting novel and original research related, but not limited, to the following topics:

  • Neural Architecture Search (NAS)
  • Hyper-parameter Optimization
  • Evolutionary Deep Learning/Evolving Deep Learning
  • Evolutionary Deep Neural Networks
  • Evolutionary Computation for Deep Neural Networks
  • Evolutionary Neural Architecture Search (ENAS)
  • Evolving Generative Adversarial Networks
  • Evolutionary Recurrent Neural Network
  • Evolutionary Differentiable Neural Architecture Search
  • Searching for Activation Functions
  • Deep Neuroevolution
  • Neural Networks with Evolving Structure
  • AutoML
  • Multi-objective Neural Architecture Search
  • Evolutionary Optimization of Deep Learning
  • Evolutionary Computation for Neural Architecture Search
  • Hyper-parameter Tuning with Evolutionary Computation
  • Hyper-parameter Optimization
  • Evolutionary Computation for Hyper-parameter Optimization
  • Evolutionary Computation for Automatic Machine Learning
  • Evolutionary Transfer Learning
  • Differentiable NAS
  • Differentiable Architecture Search
  • Hybridization of Evolutionary Computation and Neural Networks
  • Large-scale Optimization for Evolutionary Deep Learning
  • Evolutionary Multi-task Optimization in Deep Learning
  • Self-adaptive Evolutionary NAS
  • EvolNAS
  • NASNet
  • Neuroevolution
  • Hyper-parameter Tuning with Self-adaptive Evolutionary Algorithm
  • Evolutionary Computation in Deep Learning for Regression/Clustering/Classification
  • Full-space Neural Architecture Search
  • Evolving Neural Networks
  • Automatic Design of Neural Architectures
  • Evolutionary Neural Networks
  • Feature Selection, Extraction, and Dimensionality Reduction on High-dimensional and Large-scale Data
  • Evolutionary Feature Selection and Construction
  • Multi-objective Feature Selection/Multi-objective Classification/Multi-objective Clustering
  • Multi-task Optimization/Multi-task Learning/Meta-learning
  • Learning Based Optimization
  • Hybridization of Evolutionary Computation and Cost-sensitive Classification/Clustering
  • Bi-level Optimization (BLO)
  • Hybridization of Evolutionary Computation and Class-imbalance Classification/Clustering
  • Numerical Optimization/Combinatorial Optimization/Multi-objective Optimization
  • Genetic Algorithm/Genetic Programming/Particle Swarm Optimization/Ant Colony Optimization/Artificial Bee Colony/Differential Evolution/Fireworks Algorithm/Brain Storm Optimization
  • Classification/Clustering/Regression
  • Machine Learning/Data Mining/Neural Network/Deep Learning/Support Vector Machine/Decision Tree/Deep Neural Network/Convolutional Neural Network/Reinforcement Learning/Ensemble Learning/K-means
  • Real-world Applications of Evolutionary Computation and Machine Learning, e.g., Images and Video Sequences/Analysis, Face Recognition, Gene Analysis, Biomarker Detection, Medical Data Analysis, Text mining, Intrusion Detection Systems, Vehicle Routing, Computer Vision, Natural Language Processing, Speech Recognition, etc.

Dr. Yu Xue
Prof. Dr. Chunlin He
Prof. Dr. Ferrante Neri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)


Research

33 pages, 1429 KiB  
Article
Theoretical Framework and Practical Considerations for Achieving Superior Multi-Robot Exploration: Hybrid Cheetah Optimization with Intelligent Initial Configurations
by Ali El Romeh and Seyedali Mirjalili
Mathematics 2023, 11(20), 4239; https://doi.org/10.3390/math11204239 - 10 Oct 2023
Cited by 1 | Viewed by 1105
Abstract
Efficient exploration in multi-robot systems is significantly influenced by the initial start positions of the robots. This paper introduces the hybrid cheetah exploration technique with intelligent initial configuration (HCETIIC), a novel strategy explicitly designed to optimize exploration efficiency across varying initial start configurations: uniform distribution, centralized position, random positions, perimeter positions, clustered positions, and strategic positions. To establish the effectiveness of HCETIIC, we engage in a comparative analysis with four other prevalent hybrid methods in the domain. These methods amalgamate the principles of coordinated multi-robot exploration (CME) with different metaheuristic algorithms and have demonstrated compelling results in their respective studies. The performance comparison is based on essential measures such as runtime, the percentage of the explored area, and failure rate. The empirical results reveal that the proposed HCETIIC method consistently outperforms the compared strategies across different start positions, thereby emphasizing its considerable potential for enhancing efficiency in multi-robot exploration tasks across a wide range of real-world scenarios. This research underscores the critical, yet often overlooked, role of the initial robot configuration in multi-robot exploration, establishing a new direction for further improvements in this field.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

18 pages, 865 KiB  
Article
Phenotype Analysis of Arabidopsis thaliana Based on Optimized Multi-Task Learning
by Peisen Yuan, Shuning Xu, Zhaoyu Zhai and Huanliang Xu
Mathematics 2023, 11(18), 3821; https://doi.org/10.3390/math11183821 - 06 Sep 2023
Viewed by 696
Abstract
Deep learning techniques play an important role in plant phenotype research, due to their powerful data processing and modeling capabilities. Multi-task learning has been researched for plant phenotype analysis, which can combine different plant traits and allow for a consideration of correlations between multiple phenotypic features for more comprehensive analysis. In this paper, an intelligent and optimized multi-task learning method for the phenotypic analysis of Arabidopsis thaliana is proposed and studied. Based on the VGG16 network, hard parameter sharing and task-dependent uncertainty are used to weight the loss function of each task, allowing parameters associated with genotype classification, leaf number counting, and leaf area prediction tasks to be learned jointly. The experiments were conducted on the Arabidopsis thaliana dataset, and the proposed model achieved weighted classification accuracy, precision, and Fw scores of 96.88%, 97.50%, and 96.74%, respectively. Furthermore, the coefficient of determination R2 values in the leaf number and leaf area regression tasks reached 0.7944 and 0.9787, respectively.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
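The task-dependent uncertainty weighting described in the abstract above can be sketched in a few lines. This is an illustrative reading of the general technique (homoscedastic-uncertainty loss weighting), not the authors' exact implementation; the loss values and variable names below are invented for the example.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with task-dependent uncertainty:
    task i contributes exp(-s_i) * L_i + s_i, where s_i is a
    learnable log-variance for that task."""
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# Hypothetical losses for genotype classification, leaf counting, leaf area
losses = [0.9, 0.4, 0.2]
log_vars = [0.0, 0.0, 0.0]   # equal uncertainty reduces to a plain sum
print(uncertainty_weighted_loss(losses, log_vars))  # ≈ 1.5
```

In training, the log-variances would be learned jointly with the network weights, so tasks with noisier targets are automatically down-weighted without manual tuning.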

23 pages, 1225 KiB  
Article
A Parallel Compact Gannet Optimization Algorithm for Solving Engineering Optimization Problems
by Jeng-Shyang Pan, Bing Sun, Shu-Chuan Chu, Minghui Zhu and Chin-Shiuh Shieh
Mathematics 2023, 11(2), 439; https://doi.org/10.3390/math11020439 - 13 Jan 2023
Cited by 16 | Viewed by 1566
Abstract
The Gannet Optimization Algorithm (GOA) has good performance, but there is still room for improvement in memory consumption and convergence. In this paper, an improved Gannet Optimization Algorithm is proposed to solve five engineering optimization problems. The compact strategy enables the GOA to save a large amount of memory, and the parallel communication strategy allows the algorithm to avoid falling into local optimal solutions. We improve the GOA through the combination of parallel strategy and compact strategy, and we name the improved algorithm Parallel Compact Gannet Optimization Algorithm (PCGOA). The performance study of the PCGOA on the CEC2013 benchmark demonstrates the advantages of our new method in various aspects. Finally, the results of the PCGOA on solving five engineering optimization problems show that the improved algorithm can find the global optimal solution more accurately.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
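The compact strategy mentioned in the abstract above saves memory by replacing an explicit population with a per-dimension probability model. The sketch below shows that general idea on a toy sphere function; it is a generic compact scheme, not the paper's PCGOA — the gannet-specific moves and parallel groups are omitted, and the update rates are invented.

```python
import random

def compact_search(fitness, dim, iters=2000):
    """Compact strategy sketch: keep one Gaussian (mean, sigma) per
    dimension instead of a full population. Sample a candidate, let it
    compete with the current elite, and drift the distribution toward
    the elite while slowly sharpening it."""
    mu = [0.0] * dim
    sigma = [3.0] * dim
    elite = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iters):
        cand = [random.gauss(m, s) for m, s in zip(mu, sigma)]
        if fitness(cand) < fitness(elite):          # competition (minimization)
            elite = cand
        for d in range(dim):
            mu[d] += 0.05 * (elite[d] - mu[d])      # drift toward winner
            sigma[d] = max(0.05, sigma[d] * 0.999)  # gradually narrow
    return elite

random.seed(4)
sphere = lambda v: sum(c * c for c in v)
best = compact_search(sphere, dim=3)
print(sphere(best))  # close to 0 on the sphere function
```

Memory is O(dim) regardless of the virtual population size, which is the point of the compact family of algorithms.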

24 pages, 3241 KiB  
Article
An Offline Weighted-Bagging Data-Driven Evolutionary Algorithm with Data Generation Based on Clustering
by Zongliang Guo, Sikai Lin, Runze Suo and Xinming Zhang
Mathematics 2023, 11(2), 431; https://doi.org/10.3390/math11020431 - 13 Jan 2023
Viewed by 1373
Abstract
In recent years, a variety of data-driven evolutionary algorithms (DDEAs) have been proposed to solve time-consuming and computationally intensive optimization problems. DDEAs are usually divided into offline DDEAs and online DDEAs, with offline DDEAs being the most widely studied and proven to display excellent performance. However, most offline DDEAs suffer from three disadvantages. First, they require many surrogates to build a relatively accurate model, a process that is redundant and time-consuming. Second, when the available fitness evaluations are insufficient, their performance tends to be unsatisfactory. Finally, to cope with the second problem, many algorithms use data generation methods, which significantly increases the algorithm's runtime. To overcome these problems, we propose a brand-new DDEA with radial basis function networks as its surrogates. First, we developed a fast data generation algorithm based on clustering to enlarge the dataset and reduce fitting errors. Then, we trained radial basis function networks and adaptively designed their parameters. We then aggregated the radial basis function networks using a unique model management framework and demonstrated its accuracy and stability. Finally, fitness evaluations were obtained and used for optimization. Through numerical experiments and comparisons with other algorithms, this algorithm has been proven to be an excellent DDEA suited to data-driven optimization problems.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

20 pages, 4533 KiB  
Article
Dual-Population Adaptive Differential Evolution Algorithm L-NTADE
by Vladimir Stanovov, Shakhnaz Akhmedova and Eugene Semenkin
Mathematics 2022, 10(24), 4666; https://doi.org/10.3390/math10244666 - 09 Dec 2022
Cited by 9 | Viewed by 1384
Abstract
This study proposes a dual-population algorithmic scheme for differential evolution and specific mutation strategy. The first population contains the newest individuals, and is continuously updated, whereas the other keeps the top individuals throughout the whole search process. The proposed mutation strategy combines information from both populations. The proposed L-NTADE algorithm (Linear population size reduction Newest and Top Adaptive Differential Evolution) follows the L-SHADE approach by utilizing its parameter adaptation scheme and linear population size reduction. The L-NTADE is tested on two benchmark sets, namely CEC 2017 and CEC 2022, and demonstrates highly competitive results compared to the state-of-the-art methods. The deeper analysis of the results shows that it displays different properties compared to known DE schemes. The simplicity of L-NTADE coupled with its high efficiency make it a promising approach.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
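The dual-population idea in the abstract above — one pool of newest individuals, one of all-time best — can be illustrated with a small DE loop. The donor rule below is a generic current-to-best form, not the exact L-NTADE mutation strategy, and all parameter values are placeholders.

```python
import random

def dual_pop_de_step(newest, top, fitness, f=0.5, cr=0.9):
    """One DE generation drawing on two populations: `newest` holds the
    most recently produced individuals, `top` the best found so far.
    The donor mixes members of both pools (a generic current-to-best
    form, not the exact L-NTADE rule)."""
    dim = len(newest[0])
    best = min(top, key=fitness)
    trials = []
    for x in newest:
        r1, r2 = random.sample(newest, 2)
        donor = [x[d] + f * (best[d] - x[d]) + f * (r1[d] - r2[d])
                 for d in range(dim)]
        j_rand = random.randrange(dim)  # binomial crossover
        trials.append([donor[d] if (random.random() < cr or d == j_rand)
                       else x[d] for d in range(dim)])
    return trials

# Minimize the sphere function with a tiny run
random.seed(0)
sphere = lambda v: sum(c * c for c in v)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
top = sorted(pop, key=sphere)[:10]
init_best = sphere(top[0])
for _ in range(50):
    trials = dual_pop_de_step(pop, top, sphere)
    pop = [t if sphere(t) < sphere(x) else x for t, x in zip(trials, pop)]
    top = sorted(top + pop, key=sphere)[:10]
final_best = sphere(top[0])
print(init_best, "->", final_best)
```

Because the top pool retains the best individuals seen so far while the newest pool keeps exploring, the search never loses its incumbent solution.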

15 pages, 349 KiB  
Article
An Intrusion Detection System Based on Genetic Algorithm for Software-Defined Networks
by Xuejian Zhao, Huiying Su and Zhixin Sun
Mathematics 2022, 10(21), 3941; https://doi.org/10.3390/math10213941 - 24 Oct 2022
Cited by 5 | Viewed by 1150
Abstract
An SDN (Software-Defined Network) separates the control layer from the data layer to realize centralized network control and improve scalability and programmability. However, SDN also faces a series of security threats. An intrusion detection system (IDS) is an effective means of protecting communication networks against traffic attacks. In this paper, a novel IDS model for SDN is proposed to collect and analyze the traffic, which is generally handled at the control plane. Moreover, network congestion will occur when the amount of data transferred reaches the data processing capacity of the IDS. The suggested IDS model addresses this problem with a probability-based traffic sampling method in which the genetic algorithm (GA) is used to optimize the sampling probability of each sampling point. According to the simulation results, the suggested GA-based IDS model is capable of enhancing the detection efficiency in SDNs.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
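As a rough illustration of the abstract above, the toy GA below evolves per-point sampling probabilities so that the expected sampled traffic stays within an IDS capacity. The encoding, fitness function, and operators are invented for this sketch and do not follow the paper's design.

```python
import random

def ga_sampling_probabilities(traffic, capacity, pop_size=30, gens=60):
    """Toy GA: each individual is a vector of sampling probabilities,
    one per monitoring point. Fitness rewards coverage and penalizes
    expected sampled volume above the IDS capacity."""
    n = len(traffic)

    def fitness(probs):
        sampled = sum(p * t for p, t in zip(probs, traffic))
        coverage = sum(probs) / n
        penalty = max(0.0, sampled - capacity)   # punish overload
        return coverage - 10.0 * penalty

    pop = [[random.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = random.randrange(n)              # Gaussian mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(1)
best = ga_sampling_probabilities(traffic=[5.0, 2.0, 8.0, 1.0], capacity=6.0)
print(best)
```

The penalty weight trades off coverage against congestion; in a real deployment it would be tuned to the controller's actual processing budget.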

20 pages, 643 KiB  
Article
Efficient Ontology Meta-Matching Based on Interpolation Model Assisted Evolutionary Algorithm
by Xingsi Xue, Qi Wu, Miao Ye and Jianhui Lv
Mathematics 2022, 10(17), 3212; https://doi.org/10.3390/math10173212 - 05 Sep 2022
Cited by 1 | Viewed by 1166
Abstract
Ontology is the kernel technique of the Semantic Web (SW), which models domain knowledge in a formal and machine-understandable way. To ensure that different ontologies can communicate, the cutting-edge approach is to determine the heterogeneous entity mappings through the ontology matching process. During this procedure, it is of utmost importance to integrate different similarity measures to distinguish heterogeneous entity correspondences. Finding the most appropriate aggregating weights to enhance the ontology alignment's quality is called the ontology meta-matching problem, and recently, the Evolutionary Algorithm (EA) has become a popular methodology for addressing it. The classic EA-based meta-matching technique evaluates each individual by traversing the reference alignment, which increases the computational complexity and the algorithm's running time. To overcome this drawback, an Interpolation Model assisted EA (EA-IM) is proposed, which introduces the IM to predict the fitness value of each newly generated individual. In particular, we first divide the feasible region into several uniform sub-regions using a lattice design method, and then precisely evaluate the Interpolating Individuals (INIDs). On this basis, an IM is constructed for each new individual to forecast its fitness value with the help of its neighborhood. To test EA-IM's performance, we use the Ontology Alignment Evaluation Initiative (OAEI) Benchmark in the experiment, and the final results show that EA-IM is capable of improving the EA's searching efficiency without sacrificing solution quality, with the alignment's f-measure values of EA-IM being better than those of OAEI's participants.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

18 pages, 602 KiB  
Article
Heterogeneous Information Network-Based Recommendation with Metapath Search and Memory Network Architecture Search
by Peisen Yuan, Yi Sun and Hengliang Wang
Mathematics 2022, 10(16), 2895; https://doi.org/10.3390/math10162895 - 12 Aug 2022
Viewed by 1213
Abstract
Recommendation systems are now widely used on the Internet. In recommendation systems, user preferences are predicted from the interactions of users with products, such as clicks or purchases. Usually, a heterogeneous information network is used to capture heterogeneous semantic information in the data, which can be used to address the sparsity problem and the cold-start problem. In a more complex heterogeneous information network, the numbers of node and edge types are very large, so there are many types of metagraphs in such a network. At the same time, machine learning tasks on heterogeneous information networks involve a large number of hyperparameters and neural network architectures that need to be set manually, and the main goal is to find the hyperparameter settings and neural network architectures that maximize task performance within the hyperparameter search space. To address this problem, we propose a metapath search method for heterogeneous information networks based on network architecture search, which can search for metapaths that are better suited to different heterogeneous information networks and recommendation tasks. We conducted experiments on the Amazon and Yelp datasets and compared the architecture settings obtained from the automatic search with manually set structures to verify the effectiveness of the algorithm.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

12 pages, 1378 KiB  
Article
PEGANs: Phased Evolutionary Generative Adversarial Networks with Self-Attention Module
by Yu Xue, Weinan Tong, Ferrante Neri and Yixia Zhang
Mathematics 2022, 10(15), 2792; https://doi.org/10.3390/math10152792 - 05 Aug 2022
Cited by 7 | Viewed by 1776
Abstract
Generative adversarial networks have made remarkable achievements in generative tasks. However, instability and mode collapse are still frequent problems. We improve the framework of evolutionary generative adversarial networks (E-GANs), calling it phased evolutionary generative adversarial networks (PEGANs), and adopt a self-attention module to improve upon the disadvantages of convolutional operations. During the training process, the discriminator will play against multiple generators simultaneously, where each generator adopts a different objective function as a mutation operation. Every time after the specified number of training iterations, the generator individuals will be evaluated and the best performing generator offspring will be retained for the next round of evolution. Based on this, the generator can continuously adjust the training strategy during training, and the self-attention module also enables the model to obtain the modeling ability of long-range dependencies. Experiments on two datasets showed that PEGANs improve the training stability and are competitive in generating high-quality samples.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
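The phased evolutionary loop in the abstract above — several generator offspring, each trained with a different objective, with only the best retained — reduces to a simple select-the-best recursion. The sketch below abstracts away GAN training entirely: the mutations and the evaluation function are toy stand-ins, not the paper's adversarial objectives.

```python
import random

def evolve_generators(parent, mutations, evaluate, rounds):
    """Phased evolution skeleton: each round the parent spawns one child
    per mutation (in the paper, each mutation is a different adversarial
    objective and children are trained for a phase), then the
    best-scoring child becomes the next parent. Training itself is
    abstracted into `evaluate`."""
    for _ in range(rounds):
        children = [mutate(parent) for mutate in mutations]
        parent = max(children, key=evaluate)
    return parent

# Toy stand-ins: a "generator" is just a number, fitness is closeness to 3.0
random.seed(2)
target = 3.0
mutations = [
    lambda g: g + random.gauss(0, 0.5),  # large random tweak
    lambda g: g + random.gauss(0, 0.1),  # small random tweak
    lambda g: g * 0.9 + 0.3,             # deterministic pull toward 3.0
]
best = evolve_generators(0.0, mutations,
                         evaluate=lambda g: -abs(g - target), rounds=20)
print(abs(best - target))  # contracts below 3 * 0.9**20 ≈ 0.36
```

Because the selected child is never worse than the deterministic contraction, the distance to the target shrinks by at least a factor of 0.9 per round in this toy setup.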

17 pages, 4282 KiB  
Article
A Compact Parallel Pruning Scheme for Deep Learning Model and Its Mobile Instrument Deployment
by Meng Li, Ming Zhao, Tie Luo, Yimin Yang and Sheng-Lung Peng
Mathematics 2022, 10(12), 2126; https://doi.org/10.3390/math10122126 - 18 Jun 2022
Cited by 2 | Viewed by 1378
Abstract
In single pruning algorithms, channel pruning or filter pruning alone is used to compress a deep convolutional neural network, and many redundant parameters remain in the compressed model. Directly pruning filters can largely cause the loss of key information and affect the accuracy of model classification. To solve these problems, a parallel pruning algorithm combined with image enhancement is proposed. Firstly, in order to improve the generalization ability of the model, a data enhancement method based on random erasure is introduced. Secondly, according to the scaling factors of the trained batch normalization layers, the channels with small contributions are cut off and the model is initially thinned; then the filters are pruned. By calculating the geometric median of the filters, redundant filters similar to it are found and pruned, with their similarity measured by the distance between filters. Pruning experiments were conducted with VGG19 and DenseNet40 on the CIFAR-10 and CIFAR-100 datasets. The experimental results show that this algorithm can improve the accuracy of the model while compressing its computation and parameters to a certain extent. Finally, this method is applied in practice: combined with transfer learning, traffic objects are classified and detected on a mobile phone.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
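The geometric-median criterion in the abstract above ranks filters by how close they sit to the rest of the layer: filters with the smallest total distance to all others are nearest the geometric median and therefore the most replaceable. A minimal sketch on plain Python lists follows (the real method operates on convolutional filter tensors; the example filters are invented):

```python
import math

def filter_distance(f1, f2):
    """Euclidean distance between two flattened filters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def redundant_filters(filters, n_prune):
    """Rank filters by summed distance to all filters in the layer;
    the smallest totals are closest to the geometric median and are
    treated as redundant (an FPGM-style criterion)."""
    scores = []
    for i, f in enumerate(filters):
        total = sum(filter_distance(f, g) for g in filters)
        scores.append((total, i))
    scores.sort()
    return [i for _, i in scores[:n_prune]]

# Three near-duplicate filters and one outlier: duplicates rank as redundant
filters = [[1.0, 1.0], [1.05, 0.95], [0.95, 1.05], [5.0, -4.0]]
print(redundant_filters(filters, 2))
```

The outlier filter, being far from the median, is the one worth keeping; pruning the near-duplicates loses little representational power.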

12 pages, 327 KiB  
Article
Matching Ontologies through Multi-Objective Evolutionary Algorithm with Relevance Matrix
by Hai Zhu, Xingsi Xue and Hongfeng Wang
Mathematics 2022, 10(12), 2077; https://doi.org/10.3390/math10122077 - 15 Jun 2022
Cited by 2 | Viewed by 1209
Abstract
The ultimate goal of the semantic web (SW) is to implement mutual collaborations among ontology-based intelligent systems. To this end, it is necessary to integrate domain-independent and cross-domain ontologies by finding the correspondences between their entities, which is the so-called ontology matching. To improve the quality of ontology alignment, in this work, the ontology matching problem is first defined as a sparse multi-objective optimization problem (SMOOP), and then a multi-objective evolutionary algorithm with a relevance matrix (MOEA-RM) is proposed to address it. In particular, a relevance matrix (RM) is presented to adaptively measure the relevance of each individual's genes to the objectives, which is applied in the MOEA's initialization, crossover and mutation to ensure the population's sparsity and to speed up the algorithm's convergence. The experiment verifies the performance of MOEA-RM by comparing it with state-of-the-art ontology matching techniques, and the experimental results show that MOEA-RM is able to effectively address the ontology matching problem with different heterogeneity characteristics.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

21 pages, 3473 KiB  
Article
Vehicular-Network-Intrusion Detection Based on a Mosaic-Coded Convolutional Neural Network
by Rong Hu, Zhongying Wu, Yong Xu and Taotao Lai
Mathematics 2022, 10(12), 2030; https://doi.org/10.3390/math10122030 - 11 Jun 2022
Cited by 2 | Viewed by 1387
Abstract
With the development of Internet of Vehicles (IoV) technology, the car is no longer a closed individual: it exchanges information with an external network through the vehicle-mounted network (VMN), which inevitably gives rise to security problems. Attackers can intrude on the VMN using a wireless network or vehicle-mounted interface devices. To prevent such attacks, various intrusion-detection methods have been proposed, including convolutional neural network (CNN) ones. However, existing CNN methods were not able to fully exploit the CNN's capability of extracting two-dimensional grid-like data while, at the same time, reflecting the temporal connections among sequential data. Therefore, this paper proposes a novel CNN model, based on two-dimensional Mosaic-pattern coding, for anomaly detection. It can not only make full use of the CNN's ability to extract grid data but also maintain the sequential time relationships of the data. Simulations showed that this method could effectively distinguish attacks from normal traffic on the vehicular network, improve the reliability of the system's discrimination, and meet the real-time requirements of detection.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

22 pages, 2801 KiB  
Article
A New Method for Reconstructing Data Considering the Factor of Selected Provider Nodes Set in Distributed Storage System
by Miao Ye, Qinghao Zhang, Ruoyu Wei, Yong Wang and Xiaofang Deng
Mathematics 2022, 10(10), 1739; https://doi.org/10.3390/math10101739 - 19 May 2022
Viewed by 1135
Abstract
In the distributed storage system, when data need to be recovered after node failure, the erasure code redundancy method occupies less storage space than the multi-copy method. At present, the repair mechanism using erasure code to reconstruct the failed node only considers the improvement of link bandwidth on the repair rate and does not consider the impact of the selection of data providing node-set on the repair performance. A single node fault data reconstruction method based on the Software Defined Network (SDN) using the erasure code method is designed to solve the above problems. This method collects the network link-state through SDN, establishes a multi-attribute decision-making model of the data providing node-set based on the node performance, and determines the data providing nodes participating in providing data through the ideal point method. Then, the data recovery problem of a single fault node is modeled as the optimization problem of an optimal repair tree, and a hybrid genetic algorithm is designed to solve it. The experimental results show that under the same erasure code scale, after selecting the nodes of the data providing node-set, compared with the traditional tree topology and star topology, the repair delay distribution of the designed single fault node repair method for a distributed storage system is reduced by 15% and 45% respectively, and the repair flow is close to the star topology, which is reduced by 40% compared with the traditional tree repair.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

24 pages, 1292 KiB  
Article
Enhancing Firefly Algorithm with Dual-Population Topology Coevolution
by Wei Li, Wangdong Li and Ying Huang
Mathematics 2022, 10(9), 1564; https://doi.org/10.3390/math10091564 - 06 May 2022
Cited by 7 | Viewed by 1302
Abstract
The firefly algorithm (FA) is a meta-heuristic swarm intelligence optimization algorithm. It simulates the social behavior of fireflies with their flash and attraction characteristics. Numerous studies have shown that FA can successfully deal with many problems. However, too many attractions between the fireflies may result in high computational complexity, slow convergence, low solution accuracy and poor algorithm stability. To overcome these issues, this paper proposes an enhanced firefly algorithm with dual-population topology coevolution (DPTCFA). In DPTCFA, to maintain population diversity, a dual-population topology coevolution mechanism consisting of scale-free and ring network topologies is proposed. The scale-free network topology structure conforms to the distribution law between the optimal and potential individuals, while the ring network topology effectively reduces the number of attractions and thereby has a low computational complexity. The Gauss map strategy is introduced in the scale-free network topology population to lower parameter sensitivity, and in the ring network topology population, a new distance strategy based on dimension differences is adopted to speed up convergence. This paper improves a diversity neighborhood enhanced search strategy for the firefly position update to increase solution quality. In order to balance exploration and exploitation, a staged balance mechanism is designed to enhance the algorithm's stability. Finally, the performance of the proposed algorithm is verified on several well-known benchmark functions. Experimental results show that DPTCFA can efficiently alleviate the existing problems of FA and obtain better solutions.
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)
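For reference, the canonical FA update that the abstract above builds on moves each firefly toward every brighter one with attractiveness β₀·exp(−γr²) plus a small random step. A minimal minimization sketch follows (parameter values are illustrative, and none of DPTCFA's topology or coevolution mechanisms are included):

```python
import math
import random

def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.1):
    """One iteration of the canonical FA update: every firefly moves
    toward each brighter one with attractiveness beta0*exp(-gamma*r^2)
    plus a small random walk (minimization version, plain lists)."""
    dim = len(pop[0])
    new_pop = [list(x) for x in pop]
    for i, xi in enumerate(pop):
        for xj in pop:
            if fitness(xj) < fitness(xi):            # xj is brighter
                r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
                beta = beta0 * math.exp(-gamma * r2)
                for d in range(dim):
                    new_pop[i][d] += (beta * (xj[d] - xi[d])
                                      + alpha * random.uniform(-0.5, 0.5))
    return new_pop

random.seed(3)
sphere = lambda v: sum(c * c for c in v)
pop = [[random.uniform(-3, 3) for _ in range(2)] for _ in range(15)]
for _ in range(40):
    pop = firefly_step(pop, sphere)
best = min(sphere(x) for x in pop)
print(best)
```

Every firefly is compared against every other, which is exactly the O(n²) attraction cost that DPTCFA's ring topology is designed to cut down.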
