Neural Networks and Learning Systems II

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 13950

Special Issue Editors


Guest Editor
Department of Information Engineering and Mathematics, University of Siena, Siena, Italy
Interests: bifurcation; memristor circuits; memristors; chaos; nonlinear dynamical systems; oscillators; Chua's circuit; Hopfield neural nets; Lyapunov methods; asymptotic stability; cellular neural nets; convergence; coupled circuits; hysteresis; piecewise linear techniques; stability; synchronization; time-varying networks; neural nets; circuit stability

Guest Editor
Department of Information Engineering, University of Florence, Via Santa Marta 3, 50139 Firenze, Italy
Interests: complex networks; control systems, robotics and automation; nonlinear dynamics

Guest Editor
Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
Interests: nonlinear circuits and systems; locally coupled nonlinear/nanoscale networks; memristor nanotechnology

Special Issue Information

Dear Colleagues,

In recent decades, systems based on artificial neural networks and machine learning devices have become increasingly present in everyday life. Artificial intelligence is considered one of the most useful tools in data analysis and decision making. Its fields of application span all sectors of our lives, including medicine, engineering, economics, and manufacturing. In this context, the importance of research based on artificial neural network systems is evident, and for these reasons, the disciplines related to this topic are growing rapidly in terms of project financing and research scope. As a result, part of the scientific community is devoted to investigating learning machines and artificial neural network systems from both applied and theoretical points of view, the latter being fundamental for validating mathematical models.

The main goal of this Special Issue is to collect papers presenting the state of the art and the latest studies on neural networks and learning systems. Moreover, it is an opportunity to provide a venue where researchers can share and exchange views on the theory, design, and applications of these systems. The area of interest is wide and includes several categories, such as stability and convergence analysis, learning algorithms, artificial vision, and speech recognition.

Prof. Dr. Luca Pancioni
Dr. Giacomo Innocenti
Prof. Dr. Fernando Corinto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural networks
  • neurons
  • stability
  • circuit theory
  • nonlinear systems
  • synchronization
  • network topology
  • couplings
  • convergence

Published Papers (8 papers)


Research

11 pages, 410 KiB  
Article
Implementation of the Hindmarsh–Rose Model Using Stochastic Computing
by Oscar Camps, Stavros G. Stavrinides, Carol de Benito and Rodrigo Picos
Mathematics 2022, 10(23), 4628; https://doi.org/10.3390/math10234628 - 06 Dec 2022
Cited by 1 | Viewed by 1435
Abstract
The Hindmarsh–Rose model is one of the most widely used models to reproduce spiking behaviour in biological neurons. However, since it is defined as a system of three coupled differential equations, its implementation can be burdensome and impractical for a large number of elements. In this paper, we present a successful implementation of this model within a stochastic computing environment. The merits of the proposed approach are design simplicity, due to stochastic computing, and ease of implementation. Simulation results demonstrated that the approximation achieved is equivalent to introducing a noise source into the original model, in order to reproduce the actual observed behaviour of biological systems. A study of the level of noise introduced, as a function of the number of bits in the stochastic sequence, has been performed. Additionally, we demonstrate that such an approach, even though it is noisy, reproduces the behaviour of biological systems, which are intrinsically noisy. It is also demonstrated that 18–19 bits are enough to provide a 2× speedup compared to biological systems, with a very small number of gates, thus paving the way for the in silico implementation of large neuron networks.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
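
For context, the Hindmarsh–Rose model referred to above is a system of three coupled differential equations. The sketch below integrates it with a plain Euler scheme in floating point, with an optional Gaussian noise term loosely standing in for the noise a stochastic-computing implementation introduces; the parameter values, step size, and initial state are common textbook choices, not the authors' implementation.

```python
import numpy as np

def hindmarsh_rose(steps=10000, dt=0.01, I_ext=3.2, noise_std=0.0, seed=0):
    """Euler integration of the Hindmarsh-Rose neuron model.

    Standard parameters (a=1, b=3, c=1, d=5, r=0.006, s=4, x_rest=-1.6);
    noise_std > 0 adds Gaussian noise, loosely mimicking the stochastic
    behaviour that a stochastic-computing implementation introduces.
    """
    a, b, c, d, r, s, x_rest = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    rng = np.random.default_rng(seed)
    x, y, z = -1.6, -10.0, 2.0             # initial state
    trace = np.empty(steps)
    for t in range(steps):
        dx = y - a * x**3 + b * x**2 - z + I_ext
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        x += dt * dx + noise_std * np.sqrt(dt) * rng.standard_normal()
        y += dt * dy
        z += dt * dz
        trace[t] = x                       # membrane potential
    return trace

spikes = hindmarsh_rose(noise_std=0.05)    # noisy spiking trace
```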

19 pages, 4123 KiB  
Article
Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series
by Astrid Maritza González-Zapata, Esteban Tlelo-Cuautle, Brisbane Ovilla-Martinez, Israel Cruz-Vega and Luis Gerardo De la Fraga
Mathematics 2022, 10(20), 3886; https://doi.org/10.3390/math10203886 - 19 Oct 2022
Cited by 6 | Viewed by 1794
Abstract
Reservoir computing has shown promising results in predicting chaotic time series. However, the main challenges of time-series prediction are reducing computational costs and increasing the prediction horizon. In this sense, we propose the optimization of Echo State Networks (ESNs), where the main goal is to increase the prediction horizon using a smaller number of neurons than state-of-the-art models. In addition, we show that applying a decimation technique allows us to emulate an increase in the prediction horizon of up to 10,000 steps ahead. The optimization is performed by applying particle swarm optimization, considering two chaotic systems as case studies: the chaotic Hindmarsh–Rose neuron with slow dynamic behavior and the well-known Lorenz system. The results show that, whereas similar works used from 200 to 5000 neurons in the reservoir to predict from 120 to 700 steps ahead, our optimized ESN with decimation used 100 neurons in the reservoir and is capable of predicting up to 10,000 steps ahead. The main conclusion is that we ensured larger prediction horizons than recent works, achieving an improvement of more than one order of magnitude while greatly reducing computational costs.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
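
As a point of reference for the architecture being optimized: a minimal echo state network consists of a fixed random reservoir and a trained linear readout. The sketch below fits a one-step-ahead predictor with a ridge-regression readout; the reservoir size, spectral radius, and ridge coefficient are illustrative assumptions, and the decimation and particle-swarm steps of the paper are not reproduced.

```python
import numpy as np

def train_esn(u, washout=100, n_res=100, rho=0.9, ridge=1e-6, seed=0):
    """Fit a one-step-ahead ESN predictor on a scalar time series u."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))       # fixed input weights
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed reservoir weights
    W *= rho / np.max(np.abs(np.linalg.eigvals(W))) # set spectral radius
    X, x = [], np.zeros(n_res)
    for u_t in u[:-1]:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)       # reservoir update
        X.append(x.copy())
    X = np.array(X[washout:])                       # drop initial transient
    Y = u[washout + 1:]                             # one-step-ahead targets
    # Ridge-regression readout: solve (X'X + ridge*I) W_out = X'Y
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
    return W_in, W, W_out
```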

7 pages, 738 KiB  
Communication
A Neural Probabilistic Graphical Model for Learning and Decision Making in Evolving Structured Environments
by Edmondo Trentin
Mathematics 2022, 10(15), 2646; https://doi.org/10.3390/math10152646 - 28 Jul 2022
Viewed by 918
Abstract
A difficult and open problem in artificial intelligence is the development of agents that can operate in complex environments which change over time. The present communication introduces the formal notions, the architecture, and the training algorithm of a machine capable of learning and decision-making in evolving structured environments. These environments are defined as sets of evolving relations among evolving entities. The proposed machine relies on a probabilistic graphical model whose time-dependent latent variables satisfy a Markov assumption. The likelihood of such variables given the structured environment is estimated via a probabilistic variant of the recursive neural network.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
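
To make the Markov assumption concrete: with a discrete latent state, the posterior over states can be tracked by a standard forward (filtering) recursion, where a neural model supplies the likelihood of the observed environment under each state. The sketch below illustrates only that generic recursion, not the paper's architecture; the likelihood array is a placeholder for what the probabilistic recursive neural network would produce.

```python
import numpy as np

def forward_filter(likelihoods, A, prior):
    """HMM-style filtering under a Markov assumption on latent states.

    likelihoods: (T, K) array of p(observation_t | state = k) -- in the
    paper's setting these would come from a probabilistic recursive neural
    network evaluated on the structured environment (here: a placeholder).
    A: (K, K) state-transition matrix; prior: (K,) initial distribution.
    """
    T, K = likelihoods.shape
    alpha = np.empty((T, K))
    belief = prior
    for t in range(T):
        if t > 0:
            belief = belief @ A            # Markov prediction step
        alpha[t] = belief * likelihoods[t] # Bayes correction step
        alpha[t] /= alpha[t].sum()         # normalise to a posterior
        belief = alpha[t]
    return alpha                           # per-step state posteriors
```
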
20 pages, 553 KiB  
Article
Convergence of a Class of Delayed Neural Networks with Real Memristor Devices
by Mauro Di Marco, Mauro Forti, Riccardo Moretti, Luca Pancioni, Giacomo Innocenti and Alberto Tesi
Mathematics 2022, 10(14), 2439; https://doi.org/10.3390/math10142439 - 13 Jul 2022
Cited by 2 | Viewed by 1251
Abstract
Neural networks with memristors are promising candidates to overcome the limitations of traditional von Neumann machines via the implementation of novel analog and parallel computation schemes based on the in-memory computing principle. Of special importance are neural networks with generic or extended memristor models that are suited to accurately describing real memristor devices. The manuscript considers a general class of delayed neural networks where the memristors obey a relevant and widely used generic memristor model, the voltage threshold adaptive memristor (VTEAM) model. Due to physical limitations, the memristor state variables evolve in a closed compact subset of the space; therefore, the network can be mathematically described by a special class of differential inclusions named differential variational inequalities (DVIs). By using the theory of DVIs and the Lyapunov approach, the paper proves some fundamental results on the convergence of solutions toward equilibrium points, a dynamic property that is extremely useful in neural network applications to content-addressable memories and real-time signal processing. The conditions for convergence, which hold in the general nonsymmetric case and for any constant delay, are given in the form of a linear matrix inequality (LMI) and can be readily checked numerically. To the authors' knowledge, the obtained results are the only ones available in the literature on the convergence of neural networks with real generic memristors.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
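
For reference, the VTEAM model drives the memristor state only when the applied voltage exceeds one of two thresholds, and the state is confined to a compact physical interval, which is what motivates the differential-inclusion treatment. The sketch below shows the state derivative with window functions taken as one; all parameter values are illustrative assumptions, not fitted device data.

```python
def vteam_dwdt(w, v, k_on=-10.0, k_off=5e-4, alpha_on=3, alpha_off=1,
               v_on=-0.2, v_off=0.02, w_on=0.0, w_off=3e-9):
    """VTEAM state derivative dw/dt (window functions set to 1).

    The state w changes only when v exceeds the voltage thresholds
    (v > v_off > 0 or v < v_on < 0), and is confined to [w_on, w_off];
    this hard constraint leads to the differential-inclusion description.
    """
    if v > v_off:
        dw = k_off * (v / v_off - 1.0) ** alpha_off
    elif v < v_on:
        dw = k_on * (v / v_on - 1.0) ** alpha_on
    else:
        dw = 0.0                           # below threshold: no adaptation
    # Project the flow so that w stays in its compact state space
    if (w >= w_off and dw > 0.0) or (w <= w_on and dw < 0.0):
        dw = 0.0
    return dw
```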

12 pages, 421 KiB  
Article
End-to-End Training of Deep Neural Networks in the Fourier Domain
by András Fülöp and András Horváth
Mathematics 2022, 10(12), 2132; https://doi.org/10.3390/math10122132 - 19 Jun 2022
Cited by 1 | Viewed by 1972
Abstract
Convolutional networks are commonly used in various machine learning tasks, and they are increasingly used in the embedded domain with devices such as smart cameras and mobile phones. The operation of convolution can be substituted by point-wise multiplication in the Fourier domain, which can save operations; usually, however, it is applied with a Fourier transform before and an inverse Fourier transform after the multiplication, since other operations in neural networks cannot be implemented efficiently in the Fourier domain. In this paper, we present a method for implementing neural networks entirely in the Fourier domain, thereby saving multiplications and the inverse Fourier transform operations. Our method can decrease the number of operations by four times the number of pixels in the convolutional kernel, with only a minor decrease in accuracy: for example, 4% on the MNIST and 2% on the HADB datasets.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
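
The approach rests on the convolution theorem: a (circular) convolution in the spatial domain equals a point-wise product in the Fourier domain. A minimal numpy check of that identity, using a zero-padded 3×3 kernel, is sketched below; this illustrates the underlying identity only, not the paper's training method.

```python
import numpy as np

# Convolution theorem: circular convolution in the spatial domain equals
# point-wise multiplication in the Fourier domain.
rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
kernel = np.zeros((32, 32))
kernel[:3, :3] = rng.standard_normal((3, 3))   # 3x3 kernel, zero-padded

spectral = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

# Reference circular convolution computed directly in the spatial domain
direct = np.zeros_like(image)
for di in range(3):
    for dj in range(3):
        direct += kernel[di, dj] * np.roll(image, (di, dj), axis=(0, 1))

assert np.allclose(spectral, direct)           # identical up to rounding
```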

10 pages, 3064 KiB  
Article
Deep Learning Approaches for the Segmentation of Glomeruli in Kidney Histopathological Images
by Giovanna Maria Dimitri, Paolo Andreini, Simone Bonechi, Monica Bianchini, Alessandro Mecocci, Franco Scarselli, Alberto Zacchi, Guido Garosi, Thomas Marcuzzo and Sergio Antonio Tripodi
Mathematics 2022, 10(11), 1934; https://doi.org/10.3390/math10111934 - 05 Jun 2022
Cited by 5 | Viewed by 1991
Abstract
Deep learning is widely applied in bioinformatics and biomedical imaging due to its ability to perform various clinical tasks automatically and accurately. In particular, deep learning techniques for the automatic identification of glomeruli in histopathological kidney images can play a fundamental role, offering a valid decision support tool for the automatic evaluation of the Karpinski metric. This will help clinicians detect the presence of sclerotic glomeruli in order to decide whether the kidney is transplantable. In this work, we implemented a deep learning framework to identify and segment sclerotic and non-sclerotic glomeruli from scanned Whole Slide Images (WSIs) of human kidney biopsies. The experiments were conducted on a new dataset collected by the Siena and Trieste hospitals. The images were segmented using the DeepLab V2 model, with a pre-trained ResNet101 encoder, applied to 512 × 512 patches extracted from the original WSIs. The results are promising, showing good performance in the segmentation task and good generalization capacity, despite the different coloring and typology of the histopathological images. Moreover, we present a novel use of the CD10 staining procedure, which gives promising results when applied to the segmentation of sclerotic glomeruli in kidney tissues.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
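
Whole-slide images are far too large to feed to a network directly, hence the 512 × 512 patching. A minimal non-overlapping tiling sketch follows; the stride and the grid layout are assumptions for illustration, as the abstract does not state the paper's exact tiling scheme.

```python
import numpy as np

def extract_patches(wsi, size=512, stride=512):
    """Tile a whole-slide image array (H, W, C) into size x size patches.

    A plain non-overlapping grid; each patch is what would be fed to the
    segmentation network (DeepLab V2 with a ResNet101 encoder in the paper).
    """
    h, w = wsi.shape[:2]
    patches, coords = [], []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(wsi[top:top + size, left:left + size])
            coords.append((top, left))   # kept to restitch predictions
    return np.stack(patches), coords
```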

15 pages, 10771 KiB  
Article
Wave Loss: A Topographic Metric for Image Segmentation
by Ákos Kovács, Jalal Al-Afandi, Csaba Botos and András Horváth
Mathematics 2022, 10(11), 1932; https://doi.org/10.3390/math10111932 - 04 Jun 2022
Viewed by 1739
Abstract
The solution of segmentation problems with deep neural networks requires a well-defined loss function for comparison and network training. In most network training approaches, only area-based differences are considered, i.e., which pixels differ; their spatial distribution is not. Our brain, by contrast, can compare complex objects with ease, considering pixel-level and topological differences simultaneously; comparing objects requires a properly defined metric that determines their similarity, accounting for changes in both shape and values. In past years, topographic aspects have been incorporated into loss functions, where either boundary pixels or the ratio of the areas were employed in the difference calculation. In this paper, we show how a topographic metric, called wave loss, can be applied in neural network training to increase the accuracy of traditional segmentation algorithms. Our method increased segmentation accuracy by 3% on both the Cityscapes and MS-COCO datasets, using various network architectures.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
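
The wave loss construction itself is specific to the paper, but the general idea of a topographically weighted loss can be illustrated with a simpler, well-known scheme: weighting a pixel-wise error by each pixel's distance to the ground-truth boundary. The sketch below is that generic substitute, explicitly not the wave loss; the weighting constants are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weighted_error(pred, target, w0=5.0, sigma=10.0):
    """NOT the wave loss itself -- a generic topographic weighting scheme.

    Pixels are weighted by their distance to the ground-truth boundary, so
    mistakes near object borders cost more than mistakes deep inside a
    region; wave loss pursues a similar goal with its own construction.
    pred: predicted foreground probabilities; target: binary mask.
    """
    # Distance of every pixel to the nearest boundary of the target mask:
    # edt(target) is nonzero inside, edt(1 - target) nonzero outside.
    dist = distance_transform_edt(target) + distance_transform_edt(1 - target)
    weights = 1.0 + w0 * np.exp(-(dist ** 2) / (2 * sigma ** 2))
    eps = 1e-7                              # avoid log(0)
    ce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((weights * ce).mean())
```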

25 pages, 764 KiB  
Article
A Comprehensive Comparison of the Performance of Metaheuristic Algorithms in Neural Network Training for Nonlinear System Identification
by Ebubekir Kaya
Mathematics 2022, 10(9), 1611; https://doi.org/10.3390/math10091611 - 09 May 2022
Cited by 8 | Viewed by 1919
Abstract
Many problems in daily life exhibit nonlinear behavior; therefore, it is important to solve nonlinear problems, which are complex and difficult due to their nonlinear nature. The literature shows that different artificial intelligence techniques are used to solve them, one of the most important being artificial neural networks. Obtaining successful results with an artificial neural network depends on its training process; in other words, it should be trained with a good training algorithm. In particular, metaheuristic algorithms are frequently used in artificial neural network training due to their advantages. In this study, for the first time, the performance of sixteen metaheuristic algorithms in artificial neural network training for the identification of nonlinear systems is analyzed. The aim is to determine the most effective metaheuristic neural network training algorithms. The metaheuristic algorithms are examined in terms of solution quality and convergence speed. In the applications, six nonlinear systems are used, and the mean squared error (MSE) is utilized as the error metric. The best mean training error values obtained for the six nonlinear systems were 3.5×10⁻⁴, 4.7×10⁻⁴, 5.6×10⁻⁵, 4.8×10⁻⁴, 5.2×10⁻⁴, and 2.4×10⁻³, respectively. In addition, the best mean test error values found for all systems were also satisfactory. When the results were examined, it was observed that biogeography-based optimization, moth–flame optimization, the artificial bee colony algorithm, teaching–learning-based optimization, and the multi-verse optimizer were generally more effective than the other metaheuristic algorithms in the identification of nonlinear systems.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
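
Metaheuristic training of a neural network treats the flattened weight vector as a point in the search space and the training error as the fitness function. A bare-bones sketch with particle swarm optimization (one of the sixteen algorithms compared) follows; the network size, swarm size, and PSO coefficients are illustrative assumptions.

```python
import numpy as np

def mse_of(weights, X, y, hidden=8):
    """Unpack a flat weight vector into a 1-hidden-layer MLP and score it."""
    n_in = X.shape[1]
    w1 = weights[:n_in * hidden].reshape(n_in, hidden)
    b1 = weights[n_in * hidden:n_in * hidden + hidden]
    w2 = weights[n_in * hidden + hidden:-1]
    b2 = weights[-1]
    out = np.tanh(X @ w1 + b1) @ w2 + b2
    return np.mean((out - y) ** 2)          # training error = fitness

def pso_train(X, y, hidden=8, swarm=30, iters=200, seed=0):
    """Particle swarm optimization over the MLP weight space."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * hidden + hidden + hidden + 1
    pos = rng.uniform(-1, 1, (swarm, dim))  # particle positions = weights
    vel = np.zeros((swarm, dim))
    pbest = pos.copy()
    pbest_f = np.array([mse_of(p, X, y, hidden) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse_of(p, X, y, hidden) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()             # best weights and their MSE
```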
