Artificial Intelligence and Mathematical Methods

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 9442

Special Issue Editors


Guest Editor
Faculty of Electronic Engineering, University of Nis, 18106 Nis, Serbia
Interests: digital telecommunication; quantization; compression; machine learning; coding

Guest Editor
Faculty of Electronic Engineering, University of Nis, Nis, Serbia
Interests: quantization; compression; coding; speech processing; deep learning; approximation theory

Guest Editor
Faculty of Sciences and Mathematics, Department of Computer Science, University of Nis, Nis, Serbia
Interests: generalized inverses of matrices; numerical linear algebra; dynamic and stochastic stability of mechanical systems; information theory and coding; Hankel determinants and integer sequences; soliton theory

Special Issue Information

Dear Colleagues,

The goal of this Special Issue is to encourage researchers dealing with problems in the field of artificial intelligence (AI) to present their latest results. Although great progress has been achieved in this field, modern trends place increasing demands on advanced AI methods, in particular deep neural networks (DNNs), especially on resource-constrained devices (those with memory, energy, and computational constraints). Solving these challenging problems requires not only various mathematical methods of optimization and approximation, but also different engineering points of view. One approach is the thoughtful application of DNN compression techniques, such as pruning and quantization. The transfer of knowledge from the field of mathematical methods of optimization and approximation to the field of AI is very important for effectively determining current and future solutions to the identified problems. With the effective application of engineering logic and careful analysis of the problems, the benefits of solving these problems can be even more significant. In brief, this Special Issue welcomes research in the field of artificial intelligence and mathematical methods of optimization and approximation, as well as the joint harmonization of solutions to cutting-edge engineering problems.

Potential topics include but are not limited to the following:

  • Artificial intelligence (AI) in engineering and science;
  • Deep learning methods;
  • Machine learning algorithms;
  • Mathematical methods of optimization (cost functions, etc.);
  • Deep neural network (DNN) architecture and feasible applications;
  • Methods for solving linear and nonlinear equations with possible correlation with DNNs;
  • Artificial intelligence for solving prediction and classification problems;
  • Pruning technique for DNN compression;
  • Quantization techniques for neural network compression;
  • Fixed and variable length coding of DNN parameters;
  • Artificial intelligence for data and signal processing;
  • AI algorithms for applications on resource-constrained devices (memory, energy, and computational constraints);
  • Approximate methods for applications in engineering problems and AI.
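To make the post-training quantization topics above concrete, the following minimal sketch (illustrative only; the function name, bit width, and symmetric-range choice are our own assumptions, not taken from any submission) shows symmetric uniform quantization of a weight tensor, the baseline that pruning and non-uniform schemes aim to improve on:

```python
import numpy as np

def uniform_quantize(weights, n_bits=8):
    """Symmetric uniform post-training quantization of a weight tensor.

    Floats are mapped to 2**n_bits evenly spaced levels spanning
    [-max|w|, +max|w|] and then de-quantized back to floats.
    """
    w_max = np.max(np.abs(weights))
    if w_max == 0.0:
        return weights.copy()
    step = 2.0 * w_max / (2 ** n_bits - 1)   # uniform cell width
    codes = np.round(weights / step)         # integer codes (what is stored)
    return codes * step                      # reconstructed weights

w = np.array([-0.91, -0.2, 0.03, 0.5, 0.88])
w_q = uniform_quantize(w, n_bits=3)          # absolute error is at most step/2
```

Only the integer codes and the single step size need to be stored, which is where the memory saving on constrained devices comes from.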

Prof. Dr. Zoran H. Perić
Dr. Jelena Nikolić
Prof. Dr. Marko Petković
Prof. Dr. Vlado Delić
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

18 pages, 579 KiB  
Article
Reducing the Dimensionality of SPD Matrices with Neural Networks in BCI
by Zhen Peng, Hongyi Li, Di Zhao and Chengwei Pan
Mathematics 2023, 11(7), 1570; https://doi.org/10.3390/math11071570 - 23 Mar 2023
Cited by 1 | Viewed by 1472
Abstract
In brain–computer interface (BCI)-based motor imagery, the symmetric positive definite (SPD) covariance matrices of electroencephalogram (EEG) signals with discriminative information features lie on a Riemannian manifold, which is currently attracting increasing attention. From a Riemannian manifold perspective, we propose a non-linear dimensionality reduction algorithm based on neural networks to construct a more discriminative low-dimensional SPD manifold. To this end, we design a novel non-linear shrinkage layer to properly modify the extreme eigenvalues of the SPD matrix, then combine it with the traditional bilinear mapping to non-linearly reduce the dimensionality of SPD matrices from manifold to manifold. Further, we build the SPD manifold network on a Siamese architecture that can learn the similarity metric from the data. Subsequently, the effective signal classification method named minimum distance to Riemannian mean (MDRM) can be implemented directly on the low-dimensional manifold. Finally, a regularization layer is proposed to perform subject-to-subject transfer by exploiting the geometric relationships of multiple subjects. Numerical experiments on synthetic data and EEG signal datasets indicate the effectiveness of the proposed manifold network.
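As a companion sketch to the abstract above (not the authors' code; the log-Euclidean metric is used here as a common simplification of the affine-invariant Riemannian distance, and all names are our own), MDRM classification of an SPD covariance matrix can be outlined as follows:

```python
import numpy as np

def spd_log(A):
    # matrix logarithm of an SPD matrix via its eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def mdrm_classify(trial_cov, class_means):
    # minimum distance to Riemannian mean: pick the class whose mean
    # SPD matrix is nearest under the log-Euclidean metric
    dists = [np.linalg.norm(spd_log(trial_cov) - spd_log(M), 'fro')
             for M in class_means]
    return int(np.argmin(dists))

class_means = [np.eye(2), np.diag([4.0, 4.0])]
label = mdrm_classify(np.diag([1.1, 0.9]), class_means)  # nearest class mean
```

Dimensionality reduction as described in the paper would shrink the SPD matrices before this classification step; the decision rule itself stays the same.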
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

21 pages, 6727 KiB  
Article
Feature Map Regularized CycleGAN for Domain Transfer
by Lidija Krstanović, Branislav Popović, Marko Janev and Branko Brkljač
Mathematics 2023, 11(2), 372; https://doi.org/10.3390/math11020372 - 10 Jan 2023
Cited by 3 | Viewed by 1370
Abstract
CycleGAN domain transfer architectures use cycle consistency loss mechanisms to enforce the bijectivity of the highly underconstrained domain transfer mapping. In this paper, in order to further constrain the mapping problem and reinforce the cycle consistency between two domains, we introduce a novel regularization method based on the alignment of feature map probability distributions. This type of optimization constraint, expressed via an additional loss function, further reduces the size of the regions that are mapped from the source domain into the same image in the target domain, which yields a mapping closer to bijective and thus better performance. By selecting feature maps of the network layers with the same depth d in the encoder of the direct generative adversarial network (GAN) and the decoder of the inverse GAN, it is possible to describe their d-dimensional probability distributions and, through a novel regularization term, enforce similarity between representations of the same image in both domains during the mapping cycle. We introduce several ground distances between Gaussian distributions of the corresponding feature maps used in the regularization. In experiments conducted on several real datasets, we achieved better performance in the unsupervised image transfer task than the baseline CycleGAN and obtained results much closer to those of the fully supervised pix2pix method on all datasets used. The PSNR of the proposed method was, on average, 4.7% closer to the results of the pix2pix method than the baseline CycleGAN over all datasets; the same held for SSIM, where the corresponding percentage was 8.3% on average.
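One concrete ground distance between Gaussians of the kind mentioned above is the 2-Wasserstein distance, which has a closed form. The sketch below is our own illustration (not necessarily one of the distances used in the paper), restricted to diagonal covariances for simplicity, computing the distance from the means and standard-deviation vectors of two feature-map distributions:

```python
import numpy as np

def w2_diag_gauss(m1, s1, m2, s2):
    # closed-form 2-Wasserstein distance between two Gaussians with
    # diagonal covariances, given mean vectors and std-dev vectors
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + np.sum((s1 - s2) ** 2)))

d = w2_diag_gauss(np.zeros(2), np.ones(2), np.array([3.0, 4.0]), np.ones(2))
```

A term of this form, evaluated between corresponding encoder and decoder feature maps, can be added to the usual CycleGAN losses as a differentiable regularizer.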
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

22 pages, 905 KiB  
Article
Measure of Similarity between GMMs Based on Geometry-Aware Dimensionality Reduction
by Branislav Popović, Marko Janev, Lidija Krstanović, Nikola Simić and Vlado Delić
Mathematics 2023, 11(1), 175; https://doi.org/10.3390/math11010175 - 29 Dec 2022
Cited by 3 | Viewed by 1213
Abstract
Gaussian Mixture Models (GMMs) are used in many traditional expert systems and modern artificial intelligence tasks, such as automatic speech recognition, image recognition and retrieval, pattern recognition, speaker recognition and verification, and financial forecasting, as simple statistical representations of underlying data. Those representations typically require many high-dimensional GMM components that consume large computing resources and increase computation time. On the other hand, real-time applications require computationally efficient algorithms, and for that reason, various GMM similarity measures and dimensionality reduction techniques have been examined to reduce the computational complexity. In this paper, a novel GMM similarity measure is proposed. The measure is based on a recently presented nonlinear geometry-aware dimensionality reduction algorithm for the manifold of Symmetric Positive Definite (SPD) matrices. The algorithm is applied over SPD representations of the original data. The local neighborhood information from the original high-dimensional parameter space is preserved by preserving the distance to the local mean. Instead of dealing with the high-dimensional parameter space, the method operates on a much lower-dimensional space of transformed parameters. Resolving the distance between such representations reduces to calculating the distance among lower-dimensional matrices. The method was tested on a texture recognition task, where it achieved superior state-of-the-art performance in terms of the trade-off between recognition accuracy and computational complexity compared with all baseline GMM similarity measures.
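For context, the classical baseline that such GMM similarity measures are evaluated against is the Kullback–Leibler divergence between mixtures, which has no closed form and is usually estimated by Monte Carlo sampling. A minimal one-dimensional sketch (our illustration, not the measure proposed in the paper):

```python
import numpy as np

def gmm_pdf(x, w, mu, sigma):
    # density of a 1-D GMM with weights w, means mu, std devs sigma
    x = np.asarray(x, dtype=float)[:, None]
    return np.sum(w * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
                  / (sigma * np.sqrt(2 * np.pi)), axis=1)

def mc_kl(p, q, n=20000, seed=0):
    # Monte-Carlo estimate of KL(p || q); p and q are (w, mu, sigma) triples
    w, mu, sigma = p
    rng = np.random.default_rng(seed)
    k = rng.choice(len(w), size=n, p=w)        # pick mixture components
    x = rng.normal(mu[k], sigma[k])            # sample from p
    return float(np.mean(np.log(gmm_pdf(x, *p) / gmm_pdf(x, *q))))

p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0]))
q = (np.array([1.0]), np.array([3.0]), np.array([1.0]))
```

The sampling cost of such estimates for many high-dimensional components is exactly the computational burden that geometry-aware dimensionality reduction aims to avoid.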
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

18 pages, 3102 KiB  
Article
Optimal Neural Network Model for Short-Term Prediction of Confirmed Cases in the COVID-19 Pandemic
by Miljana Milić, Jelena Milojković and Miljan Jeremić
Mathematics 2022, 10(20), 3804; https://doi.org/10.3390/math10203804 - 15 Oct 2022
Cited by 1 | Viewed by 1242
Abstract
COVID-19 is one of the largest issues that humanity still has to cope with and has an impact on the daily lives of billions of people. Researchers from all around the world have made various attempts to establish accurate mathematical models of COVID-19 spread. In many branches of science, it is difficult to make accurate predictions about short time series with extremely irregular behavior. Artificial neural networks (ANNs) have lately been extensively used for such applications. Although ANNs may mimic the nonlinear behavior of short time series, they frequently struggle to handle all turbulences, so alternative methods must be used. In order to reduce errors and boost forecasting confidence, a novel methodology that combines time-delay neural networks is suggested in this work. Six separate datasets are used for its validation, showing the number of confirmed daily COVID-19 infections in 2021 for six countries. It is demonstrated that the method may greatly improve the forecasting accuracy of the individual networks independent of their topologies, which broadens the applicability of the approach. A series of additional predictive experiments involving state-of-the-art Extreme Learning Machine (ELM) modeling was performed to quantitatively compare the accuracy of the proposed methodology with that of similar methodologies. It is shown that the forecasting accuracy of the system outperforms ELM modeling and is in the range of other state-of-the-art solutions.
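The core of a time-delay network is its tapped-delay input: the series is turned into lagged feature vectors before any network is fitted. The sketch below is our own illustration of that windowing step (a linear least-squares read-out stands in for the trained network; it is not the paper's model):

```python
import numpy as np

def lag_matrix(series, n_lags):
    # tapped-delay inputs: row i holds series[i : i + n_lags],
    # and the target is the next value, series[i + n_lags]
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

series = np.arange(10, dtype=float)           # toy "time series"
X, y = lag_matrix(series, n_lags=3)
A = np.c_[X, np.ones(len(X))]                 # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear stand-in for the TDNN
next_val = np.r_[series[-3:], 1.0] @ coef     # one-step-ahead forecast
```

Replacing the least-squares read-out with a small nonlinear network on the same lagged inputs gives the usual time-delay neural network setup.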
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)

21 pages, 2308 KiB  
Article
Two Novel Non-Uniform Quantizers with Application in Post-Training Quantization
by Zoran Perić, Danijela Aleksić, Jelena Nikolić and Stefan Tomić
Mathematics 2022, 10(19), 3435; https://doi.org/10.3390/math10193435 - 21 Sep 2022
Viewed by 1465
Abstract
With increased network downsizing and cost minimization in the deployment of neural network (NN) models, the utilization of edge computing takes a significant place in modern artificial intelligence. To bridge the memory constraints of less-capable edge systems, a plethora of quantizer models and quantization techniques have been proposed for NN compression, with the goal of enabling the quantized NN (QNN) to fit on the edge device while guaranteeing a high extent of accuracy preservation. NN compression by means of post-training quantization has attracted a lot of research attention, where the efficiency of uniform quantizers (UQs) has been promoted and heavily exploited. In this paper, we propose two novel non-uniform quantizers (NUQs) that prudently utilize one of the two properties of the simplest UQ. Although they share the same quantization rule for specifying the support region, both NUQs have a different starting setting in terms of cell width compared to a standard UQ. The first quantizer, named the simplest power-of-two quantizer (SPTQ), defines cell widths that are multiplied by powers of two. As is the case in the simplest UQ design, the representation levels of SPTQ are the midpoints of the quantization cells. The second quantizer, named the modified SPTQ (MSPTQ), is a more competitive quantizer model, representing an enhanced version of SPTQ in which the quantizer decision thresholds are centered between the nearest representation levels, similar to the UQ design. These properties make the novel NUQs relatively simple. Unlike UQ, the quantization cells of MSPTQ are not of equal width and the representation levels are not midpoints of the quantization cells. In this paper, we describe the design procedures of SPTQ and MSPTQ and perform their optimization for an assumed Laplacian source. Afterwards, we perform post-training quantization by implementing SPTQ and MSPTQ, study the viability of QNN accuracy, and show the implementation benefits over the case where a UQ with an equal number of quantization cells is utilized in the QNN for the same classification task. We believe that both NUQs are particularly suitable for memory-constrained environments, where simple and acceptably accurate solutions are of crucial importance.
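The abstract describes SPTQ as having power-of-two cell widths with midpoint representation levels. The sketch below is our own reconstruction of that idea from the abstract alone (the cell-width normalization, sign handling, and support clipping are our assumptions, and MSPTQ's shifted decision thresholds are omitted):

```python
import numpy as np

def power_of_two_quantize(x, n_cells=4, x_max=1.0):
    # non-uniform quantizer whose cell widths double from cell to cell:
    # w_i = w0 * 2**i, with w0 chosen so the cells tile [0, x_max]
    w0 = x_max / (2 ** n_cells - 1)
    edges = np.concatenate(([0.0], np.cumsum(w0 * 2.0 ** np.arange(n_cells))))
    levels = (edges[:-1] + edges[1:]) / 2      # midpoint representation levels
    mag = np.clip(np.abs(x), 0.0, x_max)
    idx = np.clip(np.searchsorted(edges, mag, side='right') - 1, 0, n_cells - 1)
    return np.sign(x) * levels[idx]            # quantize magnitude, keep sign

xq = power_of_two_quantize(np.array([0.2, 0.9, -0.5]), n_cells=2, x_max=1.0)
```

Narrow cells near zero match the concentration of Laplacian-distributed NN weights around zero, which is the intuition behind preferring such NUQs over a UQ at the same number of cells.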
(This article belongs to the Special Issue Artificial Intelligence and Mathematical Methods)
