Algorithms for Natural Computing Models

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms and Mathematical Models for Computer-Assisted Diagnostic Systems".

Deadline for manuscript submissions: closed (15 December 2023) | Viewed by 19708

Special Issue Editors


Guest Editor
Services and Cybersecurity Group, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
Interests: symmetric cryptography; evolutionary computation; cellular automata; boolean functions; combinatorial designs

Guest Editor
Department of Mathematics and Geosciences, Università degli Studi di Trieste, Piazzale Europa, 34127 Trieste, Italy
Interests: evolutionary computation; genetic programming; genetic algorithm; cellular automata; reaction systems; discrete dynamical systems; bio-inspired computational models

Special Issue Information

Dear Colleagues,

Natural computing is a large area of research that involves both theoretical investigations and practical applications. Nature has always been a source of inspiration for the development of computational models, such as cellular automata, evolutionary computation, neural networks, and membrane computing, to name but a few.

In this Special Issue, we want to collect the most recent works on natural computing models used either as models of computation or as tools for optimization. The former category includes models such as cellular automata, P systems, and reaction systems, all characterized by inherent parallelism. The latter includes evolutionary computation techniques (e.g., genetic algorithms, genetic programming, evolution strategies), swarm intelligence methods (e.g., particle swarm optimization, ant colony optimization), and connectionist models (e.g., neural networks in all their incarnations).

We invite contributions that explore the theoretical and practical aspects of natural computing. This includes (but is not limited to) research in the following areas:

  • The computational power of natural computing models and their ability to solve intractable problems efficiently;
  • Theoretical aspects and definition of new or improved evolutionary or swarm intelligence techniques;
  • Application of natural computing in the modeling and simulation of complex systems;
  • Practical applications of natural computing methods in the areas of health, earth sciences, biology, cryptography, security, social sciences, and economics.

Dr. Luca Mariot
Dr. Luca Manzoni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • natural computing
  • evolutionary computation
  • cellular automata
  • bio-inspired computational models
  • unconventional computing
  • genetic algorithms
  • swarm intelligence
  • computational intelligence
  • neural networks
  • membrane computing
  • reaction systems

Published Papers (10 papers)


Research

36 pages, 2020 KiB  
Article
Optimizing Physics-Informed Neural Network in Dynamic System Simulation and Learning of Parameters
by Ebenezer O. Oluwasakin and Abdul Q. M. Khaliq
Algorithms 2023, 16(12), 547; https://doi.org/10.3390/a16120547 - 28 Nov 2023
Viewed by 2010
Abstract
Artificial neural networks have changed many fields by giving scientists a strong way to model complex phenomena. They are also becoming increasingly useful for solving various difficult scientific problems. Still, the search continues for faster and more accurate ways to simulate dynamic systems. This research explores the transformative capabilities of physics-informed neural networks, a specialized subset of artificial neural networks, in modeling complex dynamical systems with enhanced speed and accuracy. These networks incorporate known physical laws into the learning process, ensuring predictions remain consistent with fundamental principles, which is crucial when dealing with scientific phenomena. This study focuses on optimizing the application of this specialized network for simultaneous system dynamics simulations and learning time-varying parameters, particularly when the number of unknowns in the system matches the number of undetermined parameters. Additionally, we explore scenarios with a mismatch between parameters and equations, optimizing the network architecture to enhance convergence speed, computational efficiency, and accuracy in learning the time-varying parameters. Our approach enhances the algorithm’s performance and accuracy, ensuring optimal use of computational resources and yielding more precise results. Extensive experiments are conducted on four different dynamical systems: first-order irreversible chain reactions, biomass transfer, the Brusselator model, and the Lotka-Volterra model, using synthetically generated data to validate our approach. Additionally, we apply our method to the susceptible-infected-recovered model, utilizing real-world COVID-19 data to learn the time-varying parameters of the pandemic’s spread.
A comprehensive comparison between the performance of our approach and fully connected deep neural networks is presented, evaluating both accuracy and computational efficiency in parameter identification and system dynamics capture. The results demonstrate that the physics-informed neural networks outperform fully connected deep neural networks in performance, especially with increased network depth, making them ideal for real-time complex system modeling. This underscores the physics-informed neural network’s effectiveness in scientific modeling in scenarios with balanced unknowns and parameters. Furthermore, it provides a fast, accurate, and efficient alternative for analyzing dynamic systems. Full article
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
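The core idea behind physics-informed parameter learning can be illustrated without any neural network machinery. The sketch below is not the authors' method, just a minimal plain-Python illustration: it recovers the unknown rate k of the decay equation dy/dt = -k*y by gradient descent on the squared ODE residual over observed data. A real physics-informed network would replace the finite-difference derivatives with automatic differentiation of a network that also fits the data.

```python
import math

def learn_decay_rate(ts, ys, k0=0.0, lr=0.05, steps=2000):
    """Learn k in dy/dt = -k*y by gradient descent on the mean squared
    ODE residual (the "physics loss") over the observed data.
    Derivatives are approximated with central finite differences."""
    dydt = [(ys[i + 1] - ys[i - 1]) / (ts[i + 1] - ts[i - 1]) for i in range(1, len(ts) - 1)]
    yin = ys[1:-1]
    k = k0
    for _ in range(steps):
        # residual r_i = dy_i/dt + k*y_i; gradient of mean(r_i^2) w.r.t. k
        grad = sum(2.0 * (d + k * y) * y for d, y in zip(dydt, yin)) / len(yin)
        k -= lr * grad
    return k

# synthetic observations of y(t) = exp(-0.7*t)
ts = [0.05 * i for i in range(101)]
ys = [math.exp(-0.7 * t) for t in ts]
k_hat = learn_decay_rate(ts, ys)
```

Because the residual is quadratic in k, gradient descent converges to the least-squares estimate, which is close to the true rate 0.7 up to finite-difference error.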

23 pages, 683 KiB  
Article
Discovering Non-Linear Boolean Functions by Evolving Walsh Transforms with Genetic Programming
by Luigi Rovito, Andrea De Lorenzo and Luca Manzoni
Algorithms 2023, 16(11), 499; https://doi.org/10.3390/a16110499 - 27 Oct 2023
Viewed by 1338
Abstract
Stream ciphers usually rely on highly secure Boolean functions to ensure safe communication within unsafe channels. However, discovering secure Boolean functions is a non-trivial optimization problem that has been addressed by many optimization techniques, in particular by evolutionary algorithms. In this article, we investigate the use of Genetic Programming (GP) for evolving Boolean functions with large non-linearity by examining the search space consisting of Walsh transforms. Specifically, we build generic Walsh spectra starting from the evolution of Walsh transform coefficients. Then, by leveraging spectral inversion, we build pseudo-Boolean functions from which we are able to determine the corresponding nearest Boolean functions, whose computation involves filling, via a random criterion, a certain number of “uncertain” positions in the final truth table. We show that by using a balancedness-preserving strategy, it is possible to exploit those positions to obtain a function that is as balanced as possible. We perform experiments comparing different types of symbolic representations for the Walsh transform, and we analyze the percentage of uncertain positions. We systematically review the outcomes of these comparisons to highlight the best settings for this problem. We evolve Boolean functions from 6 to 16 bits and compare the GP-based evolution with random search to show that evolving Walsh transforms leads to highly non-linear functions that are also balanced.
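For readers unfamiliar with this search space, the following sketch (a standard textbook computation, not the paper's GP setup) computes the Walsh spectrum of a Boolean function with the fast butterfly transform and derives its non-linearity as 2^(n-1) - max|W_f|/2. The example function (x1 AND x2) XOR x3 is an illustrative choice.

```python
def walsh_spectrum(tt):
    """Walsh spectrum of a Boolean function given as a truth table of
    length 2^n with 0/1 entries, via the fast butterfly transform."""
    w = [1 - 2 * b for b in tt]            # (-1)^f(x)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(tt):
    # Hamming distance from f to the closest affine function
    return len(tt) // 2 - max(abs(c) for c in walsh_spectrum(tt)) // 2

# example: f(x1, x2, x3) = (x1 AND x2) XOR x3, truth table over 3 bits
tt = [((i & 1) & ((i >> 1) & 1)) ^ ((i >> 2) & 1) for i in range(8)]
nl = nonlinearity(tt)
```

By Parseval's relation, the squared spectrum always sums to 2^(2n), so a single large coefficient (low non-linearity) forces the others to be small; here the spectrum is flat among the nonzero positions and nl = 2.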

18 pages, 896 KiB  
Article
Generating Loop Patterns with a Genetic Algorithm and a Probabilistic Cellular Automata Rule
by Rolf Hoffmann
Algorithms 2023, 16(7), 352; https://doi.org/10.3390/a16070352 - 24 Jul 2023
Viewed by 1013
Abstract
The objective is to find a Cellular Automata (CA) rule that can generate “loop patterns”. A loop pattern consists of ones forming loops on a zero background. In order to find out how loop patterns can be locally defined, tentative loop patterns are generated by a genetic algorithm in a preliminary stage. A set of local matching tiles is designed, and the genetic algorithm is used to check whether they can produce the desired loop patterns. After a certain set of tiles has been approved, a probabilistic CA rule is designed in a methodical way. Templates are derived from the tiles, which are then used in the CA rule for matching. In order to drive the evolution toward the desired patterns, noise is injected if the templates do not match or other constraints are not fulfilled. Simulations illustrate that loops and connected loops can be evolved by the CA rule.

20 pages, 813 KiB  
Article
Three Metaheuristic Approaches for Tumor Phylogeny Inference: An Experimental Comparison
by Simone Ciccolella, Gianluca Della Vedova, Vladimir Filipović and Mauricio Soto Gomez
Algorithms 2023, 16(7), 333; https://doi.org/10.3390/a16070333 - 12 Jul 2023
Viewed by 993
Abstract
Being able to infer the clonal evolution and progression of cancer makes it possible to devise targeted therapies to treat the disease. As discussed in several studies, understanding the history of accumulation and the evolution of mutations during cancer progression is of key importance when devising treatment strategies. Given the importance of the task, many methods for phylogeny reconstructions have been developed over the years, mostly employing probabilistic frameworks. Our goal was to explore different methods to take on this phylogeny inference problem; therefore, we devised and implemented three different metaheuristic approaches—Particle Swarm Optimization (PSO), Genetic Programming (GP) and Variable Neighbourhood Search (VNS)—under the Perfect Phylogeny and the Dollo-k evolutionary models. We adapted the algorithms to be applied to this specific context, specifically to a tree-based search space, and proposed six different experimental settings, in increasing order of difficulty, to test the novel methods amongst themselves and against a state-of-the-art method. Of the three, the PSO shows particularly promising results and is comparable to published tools, even at this exploratory stage. Thus, we foresee great improvements if alternative definitions of distance and velocity in a tree space, capable of better handling such non-Euclidean search spaces, are devised in future works. Full article

23 pages, 1093 KiB  
Article
Evolving Dispatching Rules for Dynamic Vehicle Routing with Genetic Programming
by Domagoj Jakobović, Marko Đurasević, Karla Brkić, Juraj Fosin, Tonči Carić and Davor Davidović
Algorithms 2023, 16(6), 285; https://doi.org/10.3390/a16060285 - 01 Jun 2023
Cited by 4 | Viewed by 1404
Abstract
Many real-world applications of the vehicle routing problem (VRP) are arising today, ranging from physical resource planning to virtual resource management in the cloud computing domain. A common trait of these applications is the large size of problem instances, which requires fast algorithms to generate solutions of acceptable quality. The basis for many VRP approaches is a heuristic which builds a candidate solution that may subsequently be improved by a local search procedure. Since there are many variants of the basic VRP model, specialised algorithms must be devised that take into account specific constraints and user-defined objective measures. Another factor is that the scheduling process may be carried out in dynamic conditions, where future information may be uncertain, unavailable, or subject to change. When all of this is considered, there is a need for customised heuristics, devised for a specific problem variant, that can be used in highly dynamic environments. In this paper, we use genetic programming (GP) to evolve a suitable dispatching rule to build solutions for different objectives and classes of VRP problems, applicable in both dynamic and stochastic conditions. The results show great potential, since this method may be used for different problem classes and user-defined performance objectives.
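A dispatching rule in this setting is simply a priority function that a constructive heuristic consults to pick the next customer; GP evolves the function, but a hand-written one already shows the mechanism. The sketch below is a hypothetical illustration: the rule, weights, and customer data are invented, not taken from the paper.

```python
import math

def build_route(depot, customers, rule):
    """Greedy route construction driven by a dispatching rule: from the
    current position, repeatedly serve the customer with the lowest
    priority value returned by `rule`."""
    pos, now, route = depot, 0.0, []
    todo = dict(customers)                     # name -> (x, y, due_time)
    while todo:
        nxt = min(todo, key=lambda name: rule(pos, now, todo[name]))
        x, y, _ = todo.pop(nxt)
        now += math.dist(pos, (x, y))          # travel time = distance
        pos = (x, y)
        route.append(nxt)
    return route

# a hand-written rule: travel distance plus a slack term, so customers
# whose due time is near get a lower value and are served earlier
# (GP would evolve an expression like this from primitive terminals)
def rule(pos, now, cust):
    x, y, due = cust
    return math.dist(pos, (x, y)) + 0.5 * max(0.0, due - now)

customers = {"a": (1, 0, 10.0), "b": (2, 0, 1.0), "c": (3, 0, 20.0)}
route = build_route((0, 0), customers, rule)
```

With these invented weights, the urgent customer "b" is served first despite being farther than "a", which a pure nearest-neighbour rule would visit first.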

15 pages, 1245 KiB  
Article
Synchronization, Control and Data Assimilation of the Lorenz System
by Franco Bagnoli and Michele Baia
Algorithms 2023, 16(4), 213; https://doi.org/10.3390/a16040213 - 19 Apr 2023
Viewed by 1153
Abstract
We explore several aspects of replica synchronization with the goal of retrieving the values of parameters applied to the Lorenz system. The idea is to establish a computer replica (slave) of a natural system (master, simulated in this paper), and exploit the fact that the slave synchronizes with the master only if they evolve with the same parameters. As a byproduct, in the synchronized phase, the state variables of the slave and those of the master are the same, thus allowing us to perform measurements that would be impossible in the real system. We review some aspects of master–slave synchronization using a subset of variables with intermittent coupling. We show how synchronization can be achieved when some of the state variables are available for direct measurement using a simulated annealing approach, and also when they are accessible only through a scalar function, using a pruned-enriching ensemble approach, similar to genetic algorithms without cross-over. We also explore the case of exploiting the “gene exchange” option among members of the ensemble. Full article
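The master-slave mechanism can be sketched in a few lines. The toy example below illustrates classic x-driven Pecora-Carroll synchronization with known parameters (not the paper's parameter-retrieval procedure): the slave's (y, z) variables converge to the master's once both systems share the same parameters.

```python
def master_step(m, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system (the simulated master)."""
    x, y, z = m
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def slave_step(s, x_drive, dt, rho=28.0, beta=8.0 / 3.0):
    """Pecora-Carroll x-drive: the slave replica has no x of its own; it
    uses the master's measured x and integrates only (y, z)."""
    y, z = s
    return (y + dt * (x_drive * (rho - z) - y),
            z + dt * (x_drive * y - beta * z))

dt = 0.002
m, s = (1.0, 1.0, 1.0), (-8.0, 30.0)       # very different initial states
for _ in range(50_000):                     # 100 time units
    s = slave_step(s, m[0], dt)
    m = master_step(m, dt)
err = ((m[1] - s[0]) ** 2 + (m[2] - s[1]) ** 2) ** 0.5
```

Given x(t), the (y, z) subsystem is linear and contracting, so the synchronization error decays exponentially; mismatched parameters would break this convergence, which is exactly the signal the paper exploits.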

16 pages, 2052 KiB  
Article
The Need for Speed: A Fast Guessing Entropy Calculation for Deep Learning-Based SCA
by Guilherme Perin, Lichao Wu and Stjepan Picek
Algorithms 2023, 16(3), 127; https://doi.org/10.3390/a16030127 - 23 Feb 2023
Viewed by 1428
Abstract
The adoption of deep neural networks for profiling side-channel attacks opened new perspectives for leakage detection. Recent publications showed that cryptographic implementations featuring different countermeasures could be broken without feature selection or trace preprocessing. This success comes at a high price: an extensive hyperparameter search to find optimal deep learning models. As deep learning models usually suffer from overfitting due to their high fitting capacity, it is crucial to avoid over-training regimes, which requires selecting an appropriate number of training epochs. For that, early stopping is employed as an efficient regularization method, which in turn requires a consistent validation metric. Although guessing entropy is a highly informative metric for profiling side-channel attacks, it is time-consuming to compute, especially if it is computed for every epoch during training and the number of validation traces is large. This paper shows that guessing entropy can be efficiently computed during training by reducing the number of validation traces without affecting the efficiency of early stopping decisions. Our solution significantly speeds up the process, improving the performance of the hyperparameter search and the overall profiling attack. Our fast guessing entropy calculation is up to 16× faster, enabling more hyperparameter tuning experiments and allowing security evaluators to find more efficient deep learning models.
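Guessing entropy itself is straightforward to state: the average rank of the correct key over repeated attacks after accumulating per-trace scores for every candidate. The sketch below uses a toy leakage model with invented numbers (it is not the paper's fast-computation method) to show how the metric is computed and why more validation traces drive it toward 1.

```python
import random

def guessing_entropy(score_fn, correct_key, n_keys, n_traces, n_attacks, rng):
    """Average rank (1 = best) of the correct key after accumulating
    per-trace scores for every candidate, over n_attacks attacks."""
    ranks = []
    for _ in range(n_attacks):
        totals = [0.0] * n_keys
        for _ in range(n_traces):
            for k in range(n_keys):
                totals[k] += score_fn(k, rng)
        order = sorted(range(n_keys), key=lambda k: -totals[k])
        ranks.append(order.index(correct_key) + 1)
    return sum(ranks) / len(ranks)

# toy leakage model: the correct key's score has a small positive bias
rng = random.Random(0)
score = lambda k, rng: rng.gauss(0.3 if k == 7 else 0.0, 1.0)
ge_few = guessing_entropy(score, 7, 16, n_traces=5, n_attacks=50, rng=rng)
ge_many = guessing_entropy(score, 7, 16, n_traces=200, n_attacks=50, rng=rng)
```

With few traces the noise dominates and the correct key's average rank stays well above 1; with many traces the bias accumulates and the rank approaches 1. The paper's contribution concerns how few validation traces suffice for this metric to remain a reliable early-stopping signal.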

15 pages, 720 KiB  
Article
Do Neural Transformers Learn Human-Defined Concepts? An Extensive Study in Source Code Processing Domain
by Claudio Ferretti and Martina Saletta
Algorithms 2022, 15(12), 449; https://doi.org/10.3390/a15120449 - 29 Nov 2022
Cited by 5 | Viewed by 1694
Abstract
State-of-the-art neural networks build an internal model of the training data, tailored to a given classification task. The study of such a model is of interest, and research on explainable artificial intelligence (XAI) therefore aims at investigating whether, in the internal states of a network, it is possible to identify rules that associate data with their corresponding classification. This work moves toward XAI research on neural networks trained to classify source code snippets, in the specific domain of cybersecurity. In this context, textual instances typically first have to be encoded into numerical vectors via a non-invertible transformation before being fed to the models, and this limits the applicability of known XAI methods based on the differentiation of neural signals with respect to real-valued instances. In this work, we start from the known TCAV method, designed to study the human-understandable concepts that emerge in the internal layers of a neural network, and we adapt it to transformer architectures trained to solve source code classification problems. We first determine domain-specific concepts (e.g., the presence of given patterns in the source code), and for each concept, we train support vector classifiers to separate points in the activation spaces that represent input instances with the concept from those without it. Then, we study whether the presence (or absence) of such concepts affects the decision process of the neural network. Finally, we discuss how our approach contributes to general XAI goals and suggest specific applications in the source code analysis field.

16 pages, 2020 KiB  
Article
An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Convolutional Neural Networks
by Raz Lapid, Zvika Haramaty and Moshe Sipper
Algorithms 2022, 15(11), 407; https://doi.org/10.3390/a15110407 - 31 Oct 2022
Cited by 1 | Viewed by 2483
Abstract
Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive an output. Existing black-box methods for creating adversarial instances are costly, often using gradient estimation or training a replacement network. This paper introduces the Query-Efficient Evolutionary Attack (QuEry Attack), an untargeted, score-based, black-box attack. QuEry Attack is based on a novel objective function that can be used in gradient-free optimization problems. The attack only requires access to the output logits of the classifier and is thus not affected by gradient masking. No additional information is needed, rendering our method more suitable to real-life situations. We test its performance with three different, commonly used, pretrained image-classification models (Inception-v3, ResNet-50, and VGG-16-BN) against three benchmark datasets: MNIST, CIFAR10, and ImageNet. Furthermore, we evaluate QuEry Attack’s performance against non-differentiable transformation defenses and robust models. Our results demonstrate the superior performance of QuEry Attack, both in terms of accuracy score and query efficiency.
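A score-based black-box attack of this family can be sketched as a (1+1) evolution strategy on the perturbation: mutate, query the model, and keep the mutation if the true class's logit margin shrinks. The toy model and step sizes below are invented for illustration; QuEry Attack's actual objective function and operators differ.

```python
import random

def linear_logits(x, W, b):
    """Toy 'black-box' classifier: the attacker may only query these logits."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def evolve_adversary(x0, query, true_class, eps=0.5, sigma=0.05, budget=2000, seed=1):
    """Untargeted score-based attack as a (1+1) evolution strategy:
    mutate the perturbation (kept inside an L-infinity box of radius eps),
    keep it if the true class's logit margin shrinks, and stop once the
    model misclassifies. No gradients are needed."""
    rng = random.Random(seed)

    def margin(d):
        logits = query([a + di for a, di in zip(x0, d)])
        others = max(l for i, l in enumerate(logits) if i != true_class)
        return logits[true_class] - others   # < 0 means misclassified

    delta = [0.0] * len(x0)
    best = margin(delta)
    for _ in range(budget):
        cand = [max(-eps, min(eps, d + rng.gauss(0.0, sigma))) for d in delta]
        m = margin(cand)
        if m < best:
            best, delta = m, cand
        if best < 0:
            break
    return delta, best

W, b = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
x0 = [1.0, 0.3]                              # classified as class 0
delta, final_margin = evolve_adversary(x0, lambda x: linear_logits(x, W, b), true_class=0)
```

Greedy acceptance makes the margin monotonically non-increasing, so the search only needs enough queries for the accepted mutations to accumulate, which is the query-efficiency concern the paper addresses.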

10 pages, 879 KiB  
Article
High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms
by Moshe Sipper
Algorithms 2022, 15(9), 315; https://doi.org/10.3390/a15090315 - 02 Sep 2022
Cited by 5 | Viewed by 2785
Abstract
Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. However, just how useful is said tuning? While smaller-scale experiments have been previously conducted, herein we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results we conclude that for many ML algorithms, we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, especially for some algorithms. By defining a single hp_score value, which combines an algorithm’s accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study shall serve ML practitioners at large. Full article