Special Issue "Advances in Numerical Optimization Methods for Machine-Learning"

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: closed (10 May 2022) | Viewed by 3775

Special Issue Editor

Prof. Dr. Stefania Bellavia
Department of Industrial Engineering, University of Florence, Viale Morgagni 40/44, 50134 Firenze, Italy
Interests: numerical optimization; iterative methods for linear algebra; nonlinear inverse problems

Special Issue Information

Dear Colleagues,

Optimization is at the heart of statistical and machine learning models. The recent surge of interest in numerical optimization methods for problems arising in big data and machine learning applications brings many significant challenges. What makes an optimization algorithm good can differ considerably between the machine learning and optimization perspectives, and both improvements to classical optimization methods and entirely new procedures are needed to cope with problems that are highly nonlinear, nonconvex, and high dimensional. Moreover, inherent noise in objective-function and derivative evaluations calls for stochastic methods, and adaptive strategies that avoid parameter tuning are also of interest.

The focus of this Special Issue is to highlight recent advances in the interplay between numerical optimization methods and machine learning, from both a theoretical and a practical point of view.

Prof. Dr. Stefania Bellavia
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Numerical optimization
  • Machine learning
  • First- and second-order methods
  • Noise-reduction procedures
  • Complexity analysis

Published Papers (2 papers)


Research

Article
On the Convergence Properties of a Stochastic Trust-Region Method with Inexact Restoration
Axioms 2023, 12(1), 38; https://doi.org/10.3390/axioms12010038 - 28 Dec 2022
Cited by 1 | Viewed by 926
Abstract
We study the convergence properties of SIRTR, a stochastic inexact restoration trust-region method suited for the minimization of a finite sum of continuously differentiable functions. This method combines the trust-region methodology with random function and gradient estimates formed by subsampling. Unlike other existing schemes, it forces the decrease of a merit function by combining the function approximation with an infeasibility term, the latter of which measures the distance of the current sample size from its maximum value. In a previous work, the expected iteration complexity to satisfy an approximate first-order optimality condition was given. Here, we elaborate on the convergence analysis of SIRTR and prove its convergence in probability under suitable accuracy requirements on random function and gradient estimates. Furthermore, we report the numerical results obtained on some nonconvex classification test problems, discussing the impact of the probabilistic requirements on the selection of the sample sizes.
(This article belongs to the Special Issue Advances in Numerical Optimization Methods for Machine-Learning)
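The subsampling idea described in the abstract can be illustrated with a minimal sketch: one trust-region iteration in which both the function and gradient estimates are formed from a random batch, and the step is the Cauchy point of a simple quadratic model with an identity Hessian. The function names, thresholds, and radius-update factors below are illustrative assumptions, not the SIRTR algorithm itself.

```python
import numpy as np

def subsampled_tr_step(f_i, g_i, x, radius, data, sample_size, rng):
    """One trust-region step using subsampled function/gradient estimates.

    f_i(x, batch) and g_i(x, batch) return the average loss and gradient
    over the index batch (illustrative names, not the SIRTR API).
    """
    batch = rng.choice(len(data), size=sample_size, replace=False)
    g = g_i(x, batch)
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return x, radius
    # Cauchy step: steepest descent clipped to the trust region
    # (model Hessian taken as the identity for simplicity).
    step = -min(radius / gnorm, 1.0) * g
    pred = -(g @ step) - 0.5 * step @ step       # predicted decrease
    ared = f_i(x, batch) - f_i(x + step, batch)  # actual (sampled) decrease
    rho = ared / pred if pred > 0 else -np.inf
    if rho >= 0.1:           # accept; enlarge radius on a very good ratio
        return x + step, (2.0 * radius if rho > 0.75 else radius)
    return x, 0.5 * radius   # reject and shrink the radius
```

On a consistent least-squares finite sum, repeating this step drives the full loss toward zero while the radius adapts to how well the sampled model predicts the actual decrease.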

Article
Balanced Medical Image Classification with Transfer Learning and Convolutional Neural Networks
Axioms 2022, 11(3), 115; https://doi.org/10.3390/axioms11030115 - 7 Mar 2022
Cited by 2 | Viewed by 1841
Abstract
This paper proposes a tool for image classification in medical diagnosis decision support, in a context where computational power is limited and specific high-speed computing infrastructures therefore cannot be used (mainly for economic and energy-consumption reasons). The proposed method combines a deep neural network algorithm with medical imaging procedures and is implemented to allow efficient use on affordable hardware. The convolutional neural network (CNN) procedure used VGG16 as its base architecture, applying the transfer learning technique with the parameters obtained in the ImageNet competition. Two convolutional blocks and one dense block were added to this architecture. The tool was developed and calibrated on the basis of five common lung diseases using 5430 images from two public datasets and the transfer learning technique. Holdout ratios of 90% and 10% were used for training and testing, respectively, and the regularization tools were dropout, early stopping, and L2 regularization. An accuracy (ACC) of 56% and an area under the receiver-operating characteristic curve (ROC-AUC) of 50% were reached in testing, which are suitable for decision support in a resource-constrained environment.
(This article belongs to the Special Issue Advances in Numerical Optimization Methods for Machine-Learning)
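The frozen-backbone idea behind this transfer learning setup (pretrained VGG16 features plus a newly trained dense head) can be sketched in miniature with plain NumPy: the "backbone" below is a fixed random projection standing in for the pretrained ImageNet weights, and only the added softmax head is trained. All names, shapes, and hyperparameters are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def train_frozen_backbone_head(X, y, n_classes, backbone_W, epochs=500, lr=0.1):
    """Train only a new softmax head on top of a frozen feature extractor.

    backbone_W plays the role of pretrained, frozen weights (VGG16/ImageNet
    in the paper; here just a fixed random projection).
    """
    feats = np.tanh(X @ backbone_W)            # frozen "backbone" features
    W = np.zeros((feats.shape[1], n_classes))  # trainable head weights
    b = np.zeros(n_classes)                    # trainable head bias
    Y = np.eye(n_classes)[y]                   # one-hot labels
    for _ in range(epochs):
        logits = feats @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)      # softmax probabilities
        grad = (p - Y) / len(X)                # cross-entropy gradient
        W -= lr * feats.T @ grad               # backbone_W is never updated
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(X, backbone_W, W, b):
    """Classify by running the frozen backbone, then the trained head."""
    return np.argmax(np.tanh(X @ backbone_W) @ W + b, axis=1)
```

Freezing the backbone is what makes this affordable on modest hardware: only the small head matrices receive gradient updates, so memory and compute per step are independent of the backbone's depth.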
