AutoML: Advances and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 September 2024 | Viewed by 415

Special Issue Editors


Dr. Gerald Friedland
Guest Editor
1. Amazon Web Services, 10550 NE 10th St., Bellevue, WA 98004, USA
2. Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720, USA
Interests: machine learning; AutoML; multimedia computing; signal processing

Dr. Debanjan Datta
Guest Editor
Amazon Web Services, 10550 NE 10th St, Bellevue, WA 98004, USA
Interests: AutoML; anomaly detection; data mining; time series; tabular data; deep learning

Special Issue Information

Dear Colleagues,

The proliferation of machine learning in a myriad of domains has allowed researchers and practitioners to harness its potential in scientific research beyond the immediate purview of computer science, in areas such as drug interaction prediction, traffic pattern prediction, image recognition, and natural language processing for medical and legal texts.

Applications of ML, however, come with the tasks of (i) formulating the abstract problem in terms of ML, (ii) obtaining the knowledge to select the right ML model with the correct hyperparameters, (iii) preparing the data, (iv) training the models, and (v) extracting the output. Automated machine learning, or AutoML, lowers the barrier to utilizing ML for research purposes by a wider audience [1], eliminating the need for AI/ML experts in order to use advanced ML or deep learning models.

AutoML provides methods and processes to make ML available to non-ML experts, to improve the efficiency and efficacy of ML, and to democratize ML. The AutoML literature includes, but is not limited to, the following broad topics: (i) hyperparameter optimization, (ii) neural architecture search, (iii) ML model selection, (iv) transfer learning, (v) data preprocessing, and (vi) postprocessing of ML models.

Openly available packages and services, such as AutoGluon, AutoSklearn, and AutoWEKA, make AutoML usable without writing bespoke training code; a minimal usage sketch is given below.
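
As an illustration of how such a package is typically invoked, the following is a minimal sketch using AutoGluon's tabular API; the file names train.csv and test.csv and the column name "label" are placeholders, and exact calls may differ between AutoGluon versions.

  # Minimal AutoGluon sketch (placeholder file and column names; API details may vary by version)
  from autogluon.tabular import TabularDataset, TabularPredictor

  train_data = TabularDataset("train.csv")                      # placeholder training file
  predictor = TabularPredictor(label="label").fit(train_data)   # model selection and tuning handled automatically
  print(predictor.leaderboard())                                # compare the candidate models that were trained
  predictions = predictor.predict(TabularDataset("test.csv"))   # placeholder test file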

  1. Hutter, F.; Kotthoff, L.; Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges; Springer Nature, 2019.
  2. He, X.; Zhao, K.; Chu, X. AutoML: A survey of the state-of-the-art. Knowledge-Based Systems 2021, 212, 106622.

Dr. Gerald Friedland
Dr. Debanjan Datta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AutoML
  • transfer learning
  • hyperparameter optimization
  • applications of AutoML
  • neural architecture search
  • feature engineering
  • hyperparameter tuning
  • ML model selection
  • data preprocessing
  • MLOps
  • explainable AI (XAI)
  • responsible AI
  • ML model deployment
  • AI for business
  • no-code machine learning

Published Papers (1 paper)

Research

21 pages, 2392 KiB  
Article
Integrating Machine Learning and MLOps for Wind Energy Forecasting: A Comparative Analysis and Optimization Study on Türkiye’s Wind Data
by Saadin Oyucu and Ahmet Aksöz
Appl. Sci. 2024, 14(9), 3725; https://doi.org/10.3390/app14093725 - 27 Apr 2024
Viewed by 167
Abstract
This study conducted a detailed comparative analysis of various machine learning models to enhance wind energy forecasts, including linear regression, decision tree, random forest, gradient boosting machine, XGBoost, LightGBM, and CatBoost. Furthermore, it developed an end-to-end MLOps pipeline leveraging SCADA data from a wind turbine in Türkiye. This research not only compared models using the RMSE metric for selection and optimization but also explored in detail the impact of integrating machine learning with MLOps on the precision of energy production forecasts. It investigated the suitability and efficiency of ML models in predicting wind energy with MLOps integration. The study explored ways to improve LightGBM algorithm performance through hyperparameter tuning and Docker utilization. It also highlighted challenges in speeding up MLOps development and deployment processes. Model performance was assessed using the RMSE metric, conducting a comparative evaluation across different models. The findings revealed that the RMSE values among the regression models ranged from 460 kW to 192 kW. Focusing on enhancing LightGBM, the research decreased the RMSE value to 190.34 kW. Despite facing technical and operational hurdles, the implementation of MLOps was proven to enhance the speed (latency of 9 ms), reliability (through Docker encapsulation), and scalability (using Docker swarm) of machine learning endeavors.
(This article belongs to the Special Issue AutoML: Advances and Applications)
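
The paper's actual pipeline is not reproduced here; purely as an illustrative sketch of the RMSE-based model comparison described in the abstract (with synthetic placeholder data standing in for the SCADA wind-turbine features and power output), one might write:

  # Illustrative sketch only: RMSE comparison of a few regressors on synthetic placeholder data
  import numpy as np
  from sklearn.datasets import make_regression
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import LinearRegression
  from sklearn.ensemble import RandomForestRegressor
  from sklearn.metrics import mean_squared_error
  from lightgbm import LGBMRegressor

  # Synthetic stand-in for SCADA features (X) and power output (y)
  X, y = make_regression(n_samples=5000, n_features=8, noise=10.0, random_state=42)
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

  models = {
      "linear_regression": LinearRegression(),
      "random_forest": RandomForestRegressor(n_estimators=200, random_state=42),
      "lightgbm": LGBMRegressor(n_estimators=500, learning_rate=0.05, random_state=42),
  }
  for name, model in models.items():
      model.fit(X_train, y_train)
      rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
      print(f"{name}: RMSE = {rmse:.2f}")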