Advances in Mathematics for Applied Machine Learning

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 3815

Special Issue Editors


Guest Editor
School of Chemical Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2340000, Chile
Interests: machine learning; evolutionary computation; deep learning; machine vision; modeling and simulation; uncertainty quantification

Guest Editor
School of Computer Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2340000, Chile
Interests: machine learning; evolutionary computing; network science

Special Issue Information

Dear Colleagues,

Interest in machine learning (ML) has exploded over the past decade due to its successful application to practical problems. As a result, ML techniques are now applied across many research fields, and mathematics is no exception. ML techniques have been integrated with areas such as graph theory, differential equations, fuzzy logic, and mathematical optimization, to name a few. This intense interest in fostering the development of ML techniques via advances in mathematics is expected to help clarify the benefits and limitations of ML techniques, as well as advance machine learning and its variants to the next level.

With these antecedents, we have developed a Special Issue dedicated to advances in mathematics for applied machine learning, including the use of rigorous mathematical techniques to develop stable and robust learning algorithms. Specific attention will be given to recently developed ML techniques such as deep learning and boosted tree models. We expect manuscripts to report mathematical analysis, substantive results, critical comparisons, interpretation of results, and uncertainty quantification. The Special Issue welcomes review articles, regular articles, and short communications. Articles of interest to a general audience are preferred over those aimed at a narrow one; therefore, multidisciplinary, interdisciplinary, and cross-disciplinary studies are welcome.

Prof. Dr. Freddy A. Lucay
Prof. Dr. Wenceslao Palma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • learning algorithm
  • machine vision
  • optimization methods for deep learning
  • novel deep learning architectures
  • neural differential equations
  • neural network for differential equations
  • machine learning in network science
  • fuzzy machine learning
  • any advance in mathematics for machine learning

Published Papers (2 papers)


Research

22 pages, 356 KiB  
Article
Multi-Task Deep Learning Games: Investigating Nash Equilibria and Convergence Properties
by Minhyeok Lee
Axioms 2023, 12(6), 569; https://doi.org/10.3390/axioms12060569 - 8 Jun 2023
Cited by 2 | Viewed by 1071
Abstract
This paper conducts a rigorous game-theoretic analysis of multi-task deep learning, providing mathematical insights into the dynamics and interactions of tasks within these models. Multi-task deep learning has attracted significant attention in recent years due to its ability to leverage shared representations across multiple correlated tasks, leading to improved generalization and reduced training time. However, understanding and examining the interactions between tasks within a multi-task deep learning system poses a considerable challenge. In this paper, we present a game-theoretic investigation of multi-task deep learning, focusing on the existence and convergence of Nash equilibria. Game theory provides a suitable framework for modeling the interactions among various tasks in a multi-task deep learning system, as it captures the strategic behavior of learning agents sharing a common set of parameters. Our primary contributions include: casting the multi-task deep learning problem as a game in which each task acts as a player aiming to minimize its task-specific loss function; introducing the notion of a Nash equilibrium for the multi-task deep learning game; demonstrating the existence of at least one Nash equilibrium under specific convexity and Lipschitz continuity assumptions on the loss functions; examining the convergence characteristics of the Nash equilibrium; and providing a comprehensive analysis of the implications and limitations of our theoretical findings. We also discuss potential extensions and directions for future research in the multi-task deep learning landscape.
(This article belongs to the Special Issue Advances in Mathematics for Applied Machine Learning)
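The game formulation in the abstract can be illustrated with a deliberately tiny sketch. This is not the paper's code: the quadratic losses, the single shared parameter, and the simultaneous gradient-play updates are all assumptions chosen for illustration. Two tasks share one parameter `w`, each controls its own head `a_i`, and gradient play converges to a point where neither task can lower its loss unilaterally, i.e., a Nash equilibrium for these convex losses.

```python
# Illustrative sketch only (not from the paper): two tasks as players,
# each minimizing a convex quadratic task-specific loss over a shared
# parameter w plus its own task head a_i.

def task_loss(w, a, target):
    return (w + a - target) ** 2

def grad(w, a, target):
    g = 2.0 * (w + a - target)
    return g, g  # d/dw and d/da coincide for this loss

def train(steps=2000, lr=0.05):
    w, a1, a2 = 0.0, 0.0, 0.0
    for _ in range(steps):
        gw1, ga1 = grad(w, a1, 1.0)   # player 1, target 1.0
        gw2, ga2 = grad(w, a2, 2.0)   # player 2, target 2.0
        a1 -= lr * ga1                # each player's gradient step
        a2 -= lr * ga2
        w -= lr * (gw1 + gw2)         # shared parameter follows summed gradient
    return w, a1, a2
```

At convergence both task losses are near zero, so each player is at a best response given the other, matching the equilibrium notion the paper studies under convexity assumptions.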
31 pages, 5902 KiB  
Article
Metaheuristic-Based Hyperparameter Tuning for Recurrent Deep Learning: Application to the Prediction of Solar Energy Generation
by Catalin Stoean, Miodrag Zivkovic, Aleksandra Bozovic, Nebojsa Bacanin, Roma Strulak-Wójcikiewicz, Milos Antonijevic and Ruxandra Stoean
Axioms 2023, 12(3), 266; https://doi.org/10.3390/axioms12030266 - 4 Mar 2023
Cited by 23 | Viewed by 2164
Abstract
As solar energy generation has become increasingly important to the economies of numerous countries over the last two decades, it is essential to build accurate models for forecasting the amount of green energy that will be produced. Numerous recurrent deep learning approaches, mainly based on long short-term memory (LSTM), have been proposed for such problems, but the most accurate model may differ from one test case to another with respect to architecture and hyperparameters. In the current study, the use of an LSTM and a bidirectional LSTM (BiLSTM) is proposed for a data collection that, besides the time series values denoting the solar energy generation, also comprises corresponding information about the weather. The proposed research additionally endows the models with hyperparameter tuning by means of an enhanced version of a recently proposed metaheuristic, the reptile search algorithm (RSA). The output of the proposed tuned recurrent neural network models is compared to that of several other state-of-the-art metaheuristic optimization approaches applied to the same task, using the same experimental setup, and the obtained results indicate that the proposed approach is the better alternative. Moreover, the best recurrent model achieved an R2 of 0.604 and a normalized MSE of 0.014, an improvement of around 13% over traditional machine learning models.
(This article belongs to the Special Issue Advances in Mathematics for Applied Machine Learning)
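The tuning loop the abstract describes can be sketched in miniature. This is a hedged illustration, not the authors' implementation: the surrogate objective, the (learning rate, hidden units) search space, and the simple move-toward-best rule are stand-ins for training an actual LSTM and for the enhanced reptile search algorithm.

```python
import math
import random

def surrogate_val_loss(lr, units):
    # Stand-in for "train the network, return validation loss";
    # minimised near lr = 0.01, units = 128 (assumed, for illustration).
    return (math.log10(lr) + 2.0) ** 2 + ((units - 128) / 64.0) ** 2

def tune(pop_size=20, iters=60, seed=0):
    rng = random.Random(seed)
    # Initial population of random (learning rate, hidden units) candidates.
    pop = [(10 ** rng.uniform(-4, -1), rng.randint(16, 256))
           for _ in range(pop_size)]
    best = min(pop, key=lambda p: surrogate_val_loss(*p))
    for _ in range(iters):
        new_pop = []
        for lr, units in pop:
            # Move each candidate a random fraction toward the current best,
            # keeping whichever of old/new scores lower (greedy selection).
            f = rng.uniform(0.1, 0.9)
            cand = (lr + f * (best[0] - lr), int(units + f * (best[1] - units)))
            new_pop.append(min((lr, units), cand,
                               key=lambda p: surrogate_val_loss(*p)))
        pop = new_pop
        best = min(pop, key=lambda p: surrogate_val_loss(*p))
    return best
```

In the study itself, each candidate evaluation corresponds to training and validating an LSTM/BiLSTM configuration, which is why metaheuristics that need few evaluations are attractive for this task.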