Evolutionary Algorithms in Intelligent Systems

A special issue of Mathematics (ISSN 2227-7390).

Deadline for manuscript submissions: closed (30 August 2020) | Viewed by 32524

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Prof. Dr. Arturo Carpi
Guest Editor
Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy
Interests: algorithms on strings; formal languages; automata theory; theory of codes; combinatorics on words

Prof. Dr. Alfredo Milani
Guest Editor
Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy
Interests: online evolutionary algorithms; metaheuristics for combinatorial optimization; discrete differential evolution; semantic proximity measures; planning agents and complex network dynamics

Dr. Valentina Poggioni
Guest Editor
Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy
Interests: evolutionary computation; evolutionary algorithms in neural networks and machine learning; neuroevolution; sentiment analysis; distributed autonomous agents; automated planning

Special Issue Information

Dear Colleagues,

Evolutionary algorithms and metaheuristics are widely used to provide efficient and effective approximate solutions to computationally hard optimization problems. Successful early applications of the evolutionary computational approach can be found in the field of numerical optimization, and they have since become pervasive in applications for planning, scheduling, transportation and logistics, vehicle routing, packing problems, etc. With the widespread use of intelligent systems in recent years, evolutionary algorithms have been applied, beyond classical optimization problems, as components of intelligent systems for supporting tasks and decisions in the fields of machine vision, natural language processing, parameter optimization for neural networks (neuroevolution), and feature selection in machine learning systems. Moreover, they are also applied in areas such as complex network dynamics, evolution and trend detection in social networks, emergent behavior in multiagent systems, and adaptive evolutionary user interfaces, to mention a few. In these systems, the evolutionary components are integrated into the overall architecture and provide services, e.g., pattern matching services, to the specific algorithmic solutions.

The aim of this Special Issue is to bring together recent theoretical and applied research advances in the area of evolutionary algorithms as components of intelligent systems, with a focus on solutions and methodologies that can be reused to solve sub-classes of problems recurring in intelligent applications.

Contributions are welcome on theoretical models and applications to intelligent systems of evolutionary algorithms for, but not limited to, single-objective and multi-objective optimization, numerical continuous non-linear optimization, combinatorial optimization, graph matching and pattern matching, and agents and automata optimization. Evolutionary paradigms to be considered include, non-exhaustively, continuous and discrete differential evolution, genetic algorithms, memetic and foraging schemes, online evolutionary algorithms, genetic programming, co-evolution mechanisms, artificial immune systems, swarm-based approaches, ant colony optimization and, more generally, nature- and bio-inspired metaheuristics.
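As a concrete illustration of the paradigms listed above, the sketch below implements the classic DE/rand/1/bin differential evolution scheme on a toy continuous objective. It is a generic textbook sketch, not tied to any specific contribution in this issue; all function and parameter names are illustrative.

```python
import random

def differential_evolution(f, dim, bounds, pop_size=20, F=0.5, CR=0.9, generations=200):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    recombine with binomial crossover, keep the trial only if it does
    not worsen the objective (greedy selection)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct population members, all different from i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees at least one mutated gene
            trial = [
                min(hi, max(lo, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (random.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = f(trial)
            if f_trial <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, dim=5, bounds=(-5.0, 5.0))
```

On the separable sphere function this converges quickly; real applications swap in the problem-specific fitness and representation.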

Selection criteria will be based primarily on formal and technical soundness, experimental support, and the relevance of the contribution, including the reusability of its results for solving subgroups of problems recurring in a class of intelligent applications.

Prof. Dr. Arturo Carpi
Prof. Dr. Alfredo Milani
Dr. Valentina Poggioni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Evolutionary components in artificial intelligent systems
  • Evolutionary machine vision
  • Evolutionary approaches to combinatorial optimization
  • Formal methods in evolutionary algorithms
  • Evolutionary algorithms in machine learning
  • Evolutionary agents and planning
  • Evolutionary hardware and robotics
  • Evolutionary schemes for crowd modeling and management
  • Evolutionary approaches to social network dynamics
  • Neuroevolution and evolutionary algorithms in neural networks
  • Evolutionary strategies for cybersecurity
  • Evolutionary human-machine and machine-machine interfaces

Published Papers (9 papers)


Editorial

Jump to: Research

2 pages, 135 KiB  
Editorial
Evolutionary Algorithms in Intelligent Systems
by Alfredo Milani
Mathematics 2020, 8(10), 1733; https://doi.org/10.3390/math8101733 - 10 Oct 2020
Cited by 2 | Viewed by 1433
Abstract
Evolutionary algorithms and metaheuristics are widely used to provide efficient and effective approximate solutions to computationally difficult optimization problems [...] Full article
(This article belongs to the Special Issue Evolutionary Algorithms in Intelligent Systems)

Research

Jump to: Editorial

16 pages, 545 KiB  
Article
Differential Evolution for Neural Networks Optimization
by Marco Baioletti, Gabriele Di Bari, Alfredo Milani and Valentina Poggioni
Mathematics 2020, 8(1), 69; https://doi.org/10.3390/math8010069 - 02 Jan 2020
Cited by 36 | Viewed by 5260
Abstract
In this paper, a neural network optimizer based on self-adaptive differential evolution is presented. This optimizer applies mutation and crossover operators in a new way, taking the structure of the network into account through a per-layer strategy. Moreover, a new crossover called interm is proposed, and a new self-adaptive version of DE called MAB-ShaDE is suggested to reduce the number of parameters. The framework has been tested on some well-known classification problems, and a comparative study of the various combinations of self-adaptive methods, mutation, and crossover operators available in the literature is performed. Experimental results show that DENN reaches good performance in terms of accuracy, better than or at least comparable with that obtained by backpropagation. Full article
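The per-layer strategy described in the abstract can be illustrated with a toy sketch: a population of two-layer networks for XOR evolved by differential evolution, with mutation and crossover applied to each layer's weight vector separately. This is a hypothetical simplification under our own naming, not the authors' DENN, interm, or MAB-ShaDE code.

```python
import math
import random

HIDDEN = 4
XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(net, x):
    w1, w2 = net  # two layers, each stored as a flat weight list
    h = [math.tanh(w1[3 * i] * x[0] + w1[3 * i + 1] * x[1] + w1[3 * i + 2])
         for i in range(HIDDEN)]
    z = sum(w2[i] * h[i] for i in range(HIDDEN)) + w2[HIDDEN]
    # numerically safe logistic output
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def loss(net):
    return sum((forward(net, x) - y) ** 2 for x, y in XOR)

def de_generation(pop, fit, F=0.7, CR=0.9):
    """One DE generation in which the operators act on each layer separately."""
    for i in range(len(pop)):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [[a[L][k] + F * (b[L][k] - c[L][k])
                  if random.random() < CR else pop[i][L][k]
                  for k in range(len(pop[i][L]))]
                 for L in range(2)]
        f_trial = loss(trial)
        if f_trial <= fit[i]:  # greedy selection
            pop[i], fit[i] = trial, f_trial

random.seed(0)
pop = [[[random.uniform(-1, 1) for _ in range(3 * HIDDEN)],
        [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]]
       for _ in range(30)]
fit = [loss(p) for p in pop]
init_best = min(fit)
for _ in range(300):
    de_generation(pop, fit)
best = min(fit)
```

Because each difference vector is computed layer by layer, layers evolve semi-independently, which is the spirit of the per-layer strategy the abstract describes.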

12 pages, 226 KiB  
Article
Dynamic Parallel Mining Algorithm of Association Rules Based on Interval Concept Lattice
by Yafeng Yang, Ru Zhang and Baoxiang Liu
Mathematics 2019, 7(7), 647; https://doi.org/10.3390/math7070647 - 19 Jul 2019
Cited by 2 | Viewed by 1927
Abstract
An interval concept lattice is an expansion of the classical concept lattice and the rough concept lattice. It is a conceptual hierarchy consisting of sets of objects that share a certain number or proportion of intent attributes. Interval concept lattices refine the proportion of the intent contained in the extent to obtain object sets of a certain degree, and then mine association rules, so as to achieve minimal cost and maximal return. Faced with massive data, the structure of an interval concept lattice becomes more complex, and even if the lattice structures are united first, the time complexity of mining interval association rules remains high. In this paper, the principle of mining association rules with parameters is studied, and the principle of a vertical union algorithm for interval association rules is proposed. On this basis, a dynamic mining algorithm for interval association rules is designed to achieve rule aggregation while maintaining the diversity of interval association rules. Finally, the rationality and efficiency of the algorithm are verified by a case study. Full article
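The association-rule measures that the lattice-based algorithm ultimately mines rest on the standard support/confidence definitions, which the toy sketch below computes over a handful of transactions. These are generic textbook definitions with hypothetical data, not the paper's interval-concept-lattice algorithm.

```python
# Hypothetical toy transaction database: each transaction is a set of items.
TRANSACTIONS = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    s = set(itemset)
    return sum(1 for t in TRANSACTIONS if s <= t) / len(TRANSACTIONS)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support of their union divided by support of the antecedent."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

sup_ab = support({"a", "b"})            # {a, b} occurs in 3 of 5 transactions
conf_ab_c = confidence({"a", "b"}, {"c"})
```

A lattice-based miner such as the one proposed here avoids rescanning the database by reading these quantities off the (interval) concept hierarchy instead.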
19 pages, 1587 KiB  
Article
On the Efficacy of Ensemble of Constraint Handling Techniques in Self-Adaptive Differential Evolution
by Hassan Javed, Muhammad Asif Jan, Nasser Tairan, Wali Khan Mashwani, Rashida Adeeb Khanum, Muhammad Sulaiman, Hidayat Ullah Khan and Habib Shah
Mathematics 2019, 7(7), 635; https://doi.org/10.3390/math7070635 - 17 Jul 2019
Cited by 12 | Viewed by 3218
Abstract
Self-adaptive variants of evolutionary algorithms (EAs) tune their parameters on the go by learning from the search history. Adaptive differential evolution with optional external archive (JADE) and self-adaptive differential evolution (SaDE) are two well-known self-adaptive versions of differential evolution (DE). They are both unconstrained search and optimization algorithms. However, if constraint handling techniques (CHTs) are incorporated in their frameworks, they can be used to solve constrained optimization problems (COPs). In earlier work, an ensemble of constraint handling techniques (ECHT) was probabilistically hybridized with the basic version of DE. The ECHT consists of four different CHTs: superiority of feasible solutions, self-adaptive penalty, ε-constraint handling technique, and stochastic ranking. This paper employs the ECHT in the selection schemes of JADE and SaDE, where offspring compete with their parents for survival to the next generation. As a result, JADE-ECHT and SaDE-ECHT, the constrained variants of JADE and SaDE, are developed. Both algorithms are tested on 24 COPs, and the experimental results are collected and compared according to the evaluation criteria of CEC’06. Their comparison, in terms of feasibility rate (FR) and success rate (SR), shows that SaDE-ECHT surpasses JADE-ECHT in terms of FR, while JADE-ECHT outperforms SaDE-ECHT in terms of SR. Full article
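One of the four CHTs in the ensemble, the superiority of feasible solutions, can be sketched as a pairwise comparison usable in any selection scheme where offspring compete with their parents. The function names below are our own, not the paper's code.

```python
def total_violation(g_values):
    """Total violation for inequality constraints g_i(x) <= 0:
    positive g values count, satisfied constraints contribute zero."""
    return sum(max(0.0, g) for g in g_values)

def better(f1, v1, f2, v2):
    """True if solution 1 (objective f1, violation v1) wins against
    solution 2 under superiority of feasible solutions:
    feasible beats infeasible; two feasible solutions compare by
    objective; two infeasible solutions compare by total violation."""
    if v1 == 0.0 and v2 == 0.0:
        return f1 <= f2
    if v1 == 0.0 or v2 == 0.0:
        return v1 == 0.0
    return v1 <= v2
```

In a JADE- or SaDE-style selection step, a trial vector would then replace its parent whenever `better(f_trial, v_trial, f_parent, v_parent)` holds; the ensemble idea is to let several such rules coexist and compete.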

13 pages, 888 KiB  
Article
Memes Evolution in a Memetic Variant of Particle Swarm Optimization
by Umberto Bartoccini, Arturo Carpi, Valentina Poggioni and Valentino Santucci
Mathematics 2019, 7(5), 423; https://doi.org/10.3390/math7050423 - 11 May 2019
Cited by 9 | Viewed by 3889
Abstract
In this work, a coevolving memetic particle swarm optimization (CoMPSO) algorithm is presented. CoMPSO introduces the memetic evolution of local search operators into particle swarm optimization (PSO) over continuous/discrete hybrid search spaces. The proposed solution allows one to overcome the rigidity of uniform local search strategies when applied to PSO. The key contribution is that memes provide each particle of a PSO scheme with the ability to adapt its exploration dynamics to the local characteristics of the search space landscape. This is achieved through an original hybrid continuous/discrete meme representation and a probabilistic co-evolving PSO scheme for discrete, continuous, or hybrid spaces. The coevolving memetic PSO evolves both the solutions and their associated memes, i.e., the local search operators. The proposed CoMPSO approach has been evaluated on a standard suite of numerical optimization benchmark problems. Preliminary experimental results show that CoMPSO is competitive with respect to standard PSO and other memetic PSO schemes in the literature, and it is a promising starting point for further research on adaptive PSO local search operators. Full article
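The idea of a meme co-evolving with each particle can be illustrated with a continuous-only toy sketch: every particle carries a local-search radius that is retained and slightly mutated whenever the local search it drives succeeds. This is a deliberately simplified stand-in, not the CoMPSO algorithm or its hybrid continuous/discrete meme representation.

```python
import random

def co_memetic_pso(f, dim, bounds=(-5.0, 5.0), swarm=20, iters=150):
    """Toy PSO where each particle's meme (a local-search radius)
    evolves alongside its position."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    meme = [random.uniform(0.01, 1.0) for _ in range(swarm)]  # per-particle radius
    P = [x[:] for x in X]
    Pf = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):  # standard velocity/position update
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * random.random() * (P[i][d] - X[i][d])
                           + 1.5 * random.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            # meme-driven local search: perturb within the particle's radius
            trial = [min(hi, max(lo, x + random.uniform(-meme[i], meme[i])))
                     for x in X[i]]
            if f(trial) < f(X[i]):
                X[i] = trial
                meme[i] *= random.uniform(0.9, 1.1)  # meme evolves with success
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

sphere = lambda x: sum(v * v for v in x)
best, best_f = co_memetic_pso(sphere, dim=4)
```

The point of the sketch is the coupling: a meme survives and drifts only while the exploration behavior it encodes keeps paying off for its particle.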

15 pages, 945 KiB  
Article
Optimal Task Allocation in Wireless Sensor Networks by Means of Social Network Optimization
by Alessandro Niccolai, Francesco Grimaccia, Marco Mussetta and Riccardo Zich
Mathematics 2019, 7(4), 315; https://doi.org/10.3390/math7040315 - 28 Mar 2019
Cited by 20 | Viewed by 2504
Abstract
Wireless Sensor Networks (WSN) have been widely adopted for years, but their role is currently growing significantly with the increasing importance of the Internet of Things paradigm. Moreover, since the computational capability of small-sized devices is also increasing, WSN are now capable of performing relevant operations. An optimal scheduling of these in-network processes can affect both the total computational time and the energy requirements. Evolutionary optimization techniques can address this problem successfully due to their capability to manage non-linear problems with many design variables. In this paper, a recently developed evolutionary algorithm named Social Network Optimization (SNO) has been applied to the problem of task allocation in a WSN. The optimization results on two test cases have been analyzed: in the first one, no energy constraints are added to the optimization, while in the second one, a minimum number of life cycles is imposed. Full article
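Task allocation of the kind optimized here can be posed as minimizing the makespan of tasks assigned to heterogeneous nodes. The sketch below solves a toy instance with a simple (1+1)-style evolutionary search rather than SNO itself, whose operators are not reproduced here; all names and data are illustrative.

```python
import random

def evolve_allocation(task_cost, node_speed, generations=300):
    """(1+1)-style evolutionary search: assign tasks to nodes so that the
    makespan (finish time of the busiest node) is minimized. A mutation
    reassigns one random task; improvements (or ties) are kept."""
    n_tasks, n_nodes = len(task_cost), len(node_speed)

    def makespan(assign):
        load = [0.0] * n_nodes
        for t, n in enumerate(assign):
            load[n] += task_cost[t] / node_speed[n]
        return max(load)

    best = [random.randrange(n_nodes) for _ in range(n_tasks)]
    best_f = makespan(best)
    for _ in range(generations):
        child = best[:]
        child[random.randrange(n_tasks)] = random.randrange(n_nodes)
        child_f = makespan(child)
        if child_f <= best_f:
            best, best_f = child, child_f
    return best, best_f

# Six tasks on two nodes, the second node twice as fast (illustrative data).
alloc, span = evolve_allocation([4, 2, 7, 3, 5, 1], [1.0, 2.0])
```

Energy constraints such as the paper's minimum number of life cycles would enter as additional penalty or feasibility terms in the fitness, alongside the makespan.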

25 pages, 2493 KiB  
Article
What Can We Learn from Multi-Objective Meta-Optimization of Evolutionary Algorithms in Continuous Domains?
by Roberto Ugolotti, Laura Sani and Stefano Cagnoni
Mathematics 2019, 7(3), 232; https://doi.org/10.3390/math7030232 - 04 Mar 2019
Cited by 10 | Viewed by 2433
Abstract
Properly configuring Evolutionary Algorithms (EAs) is a challenging task made difficult by many different details that affect EAs’ performance, such as the properties of the fitness function, time and computational constraints, and many others. EAs’ meta-optimization methods, in which a metaheuristic is used to tune the parameters of another (lower-level) metaheuristic which optimizes a given target function, most often rely on the optimization of a single property of the lower-level method. In this paper, we show that by using a multi-objective genetic algorithm to tune an EA, it is possible not only to find good parameter sets considering more objectives at the same time but also to derive generalizable results which can provide guidelines for designing EA-based applications. In particular, we present a general framework for multi-objective meta-optimization, to show that “going multi-objective” allows one to generate configurations that, besides optimally fitting an EA to a given problem, also perform well on previously unseen ones. Full article
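The meta-optimization setting, a metaheuristic tuning the parameters of a lower-level metaheuristic, can be reduced to a minimal single-objective sketch: an outer loop scores each candidate parameter value by the average result of the inner optimizer over several runs. This is a stand-in for the paper's multi-objective meta-GA; everything here is illustrative.

```python
import random

def inner_hill_climb(step, seed, iters=100):
    """Lower-level optimizer: (1+1) hill climbing on a 1-D quadratic.
    `step` is the parameter the meta-level tunes."""
    rng = random.Random(seed)
    x = 5.0
    fx = x * x
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        if y * y < fx:  # keep only improving moves
            x, fx = y, y * y
    return fx

def meta_optimize(candidate_steps, seeds=(0, 1, 2, 3, 4)):
    """Meta-level: score each parameter setting by the mean result of the
    tuned optimizer over several seeded runs, and keep the best setting."""
    return min(candidate_steps,
               key=lambda s: sum(inner_hill_climb(s, sd) for sd in seeds) / len(seeds))

best_step = meta_optimize([0.01, 0.1, 1.0, 10.0])
```

Averaging over seeds is what makes the tuned setting transfer beyond a single lucky run; the paper's multi-objective version additionally trades off several such performance measures at once.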

11 pages, 1066 KiB  
Article
Dynamic Horizontal Union Algorithm for Multiple Interval Concept Lattices
by Yafeng Yang, Ru Zhang and Baoxiang Liu
Mathematics 2019, 7(2), 159; https://doi.org/10.3390/math7020159 - 10 Feb 2019
Cited by 2 | Viewed by 2081
Abstract
In the era of big data, data are updated in real time, and how to prepare the data accurately and efficiently is the key to mining association rules. To address this question, this paper proposes a dynamic horizontal union algorithm for multiple interval concept lattices built over the same background with different attribute and object sets. First, in order to ensure the integrity of the lattice structure, the incremental generation algorithm for interval concept lattices was improved, and interval concepts were divided into existing concepts, redundant concepts, and empty concepts. Secondly, combining the characteristics of interval concept lattices, the consistency of interval concept lattices was defined, which is a necessary and sufficient condition for the horizontal union of lattice structures. Further, the union of interval concepts was discussed, and the principle of horizontal unions was given. Finally, the sequence was scanned by the traversal method, which increased the efficiency of the horizontal union. A case study shows the feasibility and efficiency of the proposed algorithm. Full article

16 pages, 1058 KiB  
Article
A Multi-Objective Particle Swarm Optimization Algorithm Based on Gaussian Mutation and an Improved Learning Strategy
by Ying Sun and Yuelin Gao
Mathematics 2019, 7(2), 148; https://doi.org/10.3390/math7020148 - 04 Feb 2019
Cited by 27 | Viewed by 8697
Abstract
Obtaining high convergence and uniform distributions remains a major challenge in most metaheuristic multi-objective optimization problems. In this article, a novel multi-objective particle swarm optimization (PSO) algorithm is proposed based on Gaussian mutation and an improved learning strategy. The approach adopts a Gaussian mutation strategy to improve the uniformity of external archives and current populations. To improve the global optimal solution, different learning strategies are proposed for non-dominated and dominated solutions. An indicator is presented to measure the distribution width of the non-dominated solution set, which is produced by various algorithms. Experiments were performed using eight benchmark test functions. The results illustrate that the multi-objective improved PSO algorithm (MOIPSO) yields better convergence and distributions than the other two algorithms, and the distance width indicator is reasonable and effective. Full article
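Gaussian mutation of the kind used to diversify an external archive in multi-objective PSO variants can be sketched generically: each coordinate is perturbed with some probability by a normal deviate scaled to the variable range, then clipped to the bounds. This is a generic operator sketch, not the MOIPSO implementation; names and parameters are illustrative.

```python
import random

def gaussian_mutation(solution, bounds, sigma=0.1, rate=0.2):
    """Perturb each coordinate with probability `rate` by a Gaussian
    deviate with standard deviation `sigma` scaled to the variable
    range, and clip the result to the feasible bounds."""
    lo, hi = bounds
    out = []
    for x in solution:
        if random.random() < rate:
            x = x + random.gauss(0.0, sigma) * (hi - lo)
        out.append(min(hi, max(lo, x)))
    return out

mutant = gaussian_mutation([0.5, -0.5, 0.0], bounds=(-1.0, 1.0))
```

Applied to archive members and current particles alike, such a mutation spreads solutions along the front, which is the uniformity effect the abstract attributes to the Gaussian strategy.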
