Machine Learning Theory and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 25 June 2024

Special Issue Editors


Dr. Federica Mandreoli
Guest Editor
Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, 41125 Modena, Italy
Interests: machine learning; database systems

Dr. Veronica Guidetti
Guest Editor
Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, 41125 Modena, Italy
Interests: machine learning; deep learning; axion-like particles; cosmological inflation

Special Issue Information

Dear Colleagues,

This Special Issue, entitled "Machine Learning Theory and Applications", aims to bring together cutting-edge research, fresh perspectives, and innovative applications that probe the boundaries of machine learning theory and practice.

As machine learning continues to reshape fields such as healthcare, finance, and transportation, it is crucial to examine its theoretical foundations alongside its real-world applications. This Special Issue aims to promote intellectual exchange and highlight the latest advances and limitations in both the theoretical and the applied aspects of machine learning. We invite researchers, academics, and experts in the field to submit original research articles on the topics listed below.

Dr. Federica Mandreoli
Dr. Veronica Guidetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • theoretical advances in machine learning algorithms
  • deep-learning theory
  • new approaches to deep learning: architectures, optimizers, and training pipelines
  • statistical learning theory and its implications
  • explainable artificial intelligence and interpretability in machine learning
  • machine learning for natural language processing and understanding
  • reinforcement learning and its applications
  • continual learning methods and model recalibration
  • learning by transfer and domain adaptation
  • scalable and efficient machine learning algorithms
  • machine learning applications in healthcare, finance, robotics, and other fields
  • robustness and fairness in machine learning

Published Papers (2 papers)


Research

16 pages, 991 KiB  
Article
GreenNAS: A Green Approach to the Hyperparameters Tuning in Deep Learning
by Giorgia Franchini
Mathematics 2024, 12(6), 850; https://doi.org/10.3390/math12060850 - 14 Mar 2024
Abstract
This paper discusses the challenges of hyperparameter tuning in deep learning models and proposes a green approach to the neural architecture search process that minimizes its environmental impact. The traditional approach to neural architecture search involves sweeping the entire space of possible architectures, which is computationally expensive and time-consuming. Recently, to address this issue, performance predictors have been proposed to estimate the performance of different architectures, thereby reducing the search space and speeding up the exploration process. The proposed approach aims to develop a performance predictor by training only a small percentage of the possible hyperparameter configurations. The suggested predictor can be queried to find the best configurations without training them on the dataset. Numerical examples of image denoising and classification enable us to evaluate the proposed approach in terms of performance and time complexity. Full article
(This article belongs to the Special Issue Machine Learning Theory and Applications)
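The predictor-based search the abstract describes can be illustrated with a minimal sketch. Everything here is invented for illustration: the search space, the `true_score` stand-in for an expensive training run, and the 1-nearest-neighbour predictor are assumptions, not the paper's actual method.

```python
import random

random.seed(0)

# Hypothetical search space: learning rate x depth x width (illustrative only).
space = [(lr, d, w)
         for lr in (1e-3, 1e-2, 1e-1)
         for d in (2, 4, 8)
         for w in (16, 32, 64)]

def true_score(cfg):
    # Stand-in for an expensive training run; unknown to the predictor.
    lr, d, w = cfg
    return -(abs(lr - 1e-2) * 10 + abs(d - 4) * 0.1 + abs(w - 32) * 0.01)

# Actually train ("evaluate") only a small fraction of configurations.
sampled = random.sample(space, k=max(3, len(space) // 5))
observed = {cfg: true_score(cfg) for cfg in sampled}

def predicted_score(cfg):
    # 1-nearest-neighbour predictor in a crudely normalised config space.
    def dist(a, b):
        return (abs(a[0] - b[0]) / 0.1
                + abs(a[1] - b[1]) / 8
                + abs(a[2] - b[2]) / 64)
    nearest = min(observed, key=lambda c: dist(c, cfg))
    return observed[nearest]

# Query the predictor over the full space without further training runs.
best = max(space, key=predicted_score)
print(best)
```

Only `len(observed)` configurations incur the expensive evaluation; the remaining ones are ranked for free by the predictor, which is the source of the approach's "green" savings.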

33 pages, 2077 KiB  
Article
Handling Overlapping Asymmetric Data Sets—A Twice Penalized P-Spline Approach
by Matthew McTeer, Robin Henderson, Quentin M. Anstee and Paolo Missier
Mathematics 2024, 12(5), 777; https://doi.org/10.3390/math12050777 - 05 Mar 2024
Abstract
Aims: Overlapping asymmetric data sets are those in which a large cohort of observations has a small amount of information recorded, while within this group there exists a smaller cohort with extensive further information available. Imputing the missing data is unwise if cohort sizes differ substantially; therefore, we aim to develop a way of modelling the smaller cohort whilst considering the larger. Methods: Building on traditionally once penalized P-Spline approximations, we create a second penalty term based on discrepancies in the marginal values of covariates that exist in both cohorts. Our now twice penalized P-Spline is designed firstly to prevent over/under-fitting of the smaller cohort and secondly to consider the larger cohort. Results: Through a series of data simulations, penalty parameter tunings, and model adaptations, our twice penalized model offers up to a 58% and 46% improvement in model fit upon a continuous and binary response, respectively, against existing B-Spline and once penalized P-Spline methods. Applying our model to an individual's risk of developing steatohepatitis, we report an over 65% improvement over existing methods. Conclusions: We propose a twice penalized P-Spline method which can vastly improve the model fit of overlapping asymmetric data sets upon a common predictive endpoint, without the need for missing data imputation. Full article
(This article belongs to the Special Issue Machine Learning Theory and Applications)
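The twice-penalized objective described in the abstract can be sketched in generic notation; the symbols below are illustrative and not necessarily the authors' own. Let $y_S$ be the smaller cohort's responses, $B$ its B-spline basis matrix, $D$ a difference matrix, $m_L$ the marginal fit obtained from the larger cohort, and $C$ a matrix mapping spline coefficients to the covariate values shared by both cohorts:

```latex
\hat{\alpha} = \arg\min_{\alpha}\;
  \lVert y_S - B\alpha \rVert^{2}
  + \lambda_{1} \lVert D\alpha \rVert^{2}   % usual P-spline roughness penalty
  + \lambda_{2} \lVert C\alpha - m_L \rVert^{2} % discrepancy with the larger cohort
```

The first penalty is the standard P-spline roughness term guarding against over/under-fitting of the smaller cohort; the second pulls the fit toward the larger cohort's marginal information, with $\lambda_{1}$ and $\lambda_{2}$ tuned as in the paper's simulations.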
