Machine Learning for Computational Science and Engineering

A special issue of Computation (ISSN 2079-3197). This special issue belongs to the section "Computational Engineering".

Deadline for manuscript submissions: closed (31 January 2020) | Viewed by 44342

Special Issue Editors


Prof. Dimitris Drikakis
Guest Editor
Institute for Advanced Modeling and Simulation, University of Nicosia, 2417 Nicosia, Cyprus
Interests: computational fluid dynamics; turbulence; shock-waves; multi-component mixing; micro- & nano-scale flows; machine learning; artificial intelligence

Dr. Michael Frank
Guest Editor
Department of Mechanical and Aerospace Engineering, University of Strathclyde, 75 Montrose Street, Glasgow G1 1XJ, UK
Interests: nanotechnologies; nanoscience; fluid dynamics; computational science

Special Issue Information

Dear Colleagues,

Machine learning (ML) is an enabling technology that has an impact on many branches of science and engineering.

ML has recently received significant attention from both the academic and industrial communities, owing to improved algorithms, open-source software frameworks, and interest from a range of diverse application domains. This Special Issue concerns the development and application of ML algorithms and methods in three broad areas:

  • Computational science and engineering; 
  • Pattern recognition; and
  • Computational design. 

In addition to original research papers, review papers on the state of the art and future perspectives are also invited.

Prof. Dimitris Drikakis
Dr. Michael Frank
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • artificial intelligence
  • artificial neural networks
  • computational science
  • engineering
  • computational design
  • pattern recognition

Published Papers (9 papers)

Research

25 pages, 717 KiB  
Article
A Holistic Auto-Configurable Ensemble Machine Learning Strategy for Financial Trading
by Salvatore Carta, Andrea Corriga, Anselmo Ferreira, Diego Reforgiato Recupero and Roberto Saia
Computation 2019, 7(4), 67; https://doi.org/10.3390/computation7040067 - 20 Nov 2019
Cited by 11 | Viewed by 2927
Abstract
Financial market forecasting is a challenging task for a number of reasons, such as the irregularity, high fluctuation, and noise of the data involved, and the peculiarly high unpredictability of the financial domain. Moreover, the literature does not offer a proper methodology for systematically identifying the intrinsic and hyper-parameters, input features, and base algorithms of a forecasting strategy so that it automatically adapts itself to the chosen market. To tackle these issues, this paper introduces a fully automated optimized ensemble approach, in which an optimized feature-selection process is combined with an automatic ensemble machine learning strategy, created from a set of classifiers whose intrinsic and hyper-parameters are learned in each market under consideration. A series of experiments performed on different real-world futures markets demonstrates the effectiveness of the approach with regard both to the Buy and Hold baseline strategy and to several canonical state-of-the-art solutions.
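
As a rough illustration of the general pattern described above — tuning each base classifier's hyper-parameters per market and combining the tuned models into an ensemble — consider this minimal sketch; the data, classifiers, and parameter grids are placeholders, not the authors' configuration:

```python
# Minimal sketch: per-market auto-configured voting ensemble (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)  # placeholder market data

# Tune each base classifier's hyper-parameters on the chosen market.
searches = {
    "rf": GridSearchCV(RandomForestClassifier(), {"n_estimators": [50, 100]}, cv=3),
    "svc": GridSearchCV(SVC(probability=True), {"C": [0.1, 1, 10]}, cv=3),
    "lr": GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1]}, cv=3),
}
tuned = [(name, s.fit(X, y).best_estimator_) for name, s in searches.items()]

# Combine the tuned classifiers into a soft-voting ensemble.
ensemble = VotingClassifier(tuned, voting="soft").fit(X, y)
print(ensemble.predict(X[:5]))
```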

11 pages, 3594 KiB  
Article
Machine-Learning Prediction of Underwater Shock Loading on Structures
by Mou Zhang, Dimitris Drikakis, Lei Li and Xiu Yan
Computation 2019, 7(4), 58; https://doi.org/10.3390/computation7040058 - 08 Oct 2019
Cited by 4 | Viewed by 3222
Abstract
Due to the complex physics of underwater explosion problems, it is difficult to derive analytical solutions with accurate results. In this study, a machine-learning method that trains a back-propagation neural network for parameter prediction is presented for the first time in the literature. The specific problem is the response of a structure submerged in water and subjected to shock loads produced by an underwater explosion, with the detonation point far enough from the structure that the loading wave can be regarded as a planar shock wave. The structure is represented by two rigid parallel plates connected by a linear spring and a linear dashpot, which simulate structural stiffness and damping, respectively. The simplified problem is analyzed theoretically by taking the Laplace transform of the governing equations, solving the resulting equations, and then taking the inverse Laplace transform. The coupled ordinary differential equations governing the motion of the system are also solved numerically by the fourth-order Runge–Kutta method and then verified by a finite element method using Ansys/LSDYNA. Parametric training with the back-propagation neural network algorithm was conducted to delineate the effects of structural stiffness and damping on the attenuation of shock waves, the cavitation time, and the time of maximum momentum transfer. The prediction results agree well with the validation and test sample results.
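
A minimal sketch of this workflow — numerically solving a toy spring–dashpot model under a decaying shock load and training a back-propagation network on the resulting parameter/response pairs — might look as follows. The ODE and parameter ranges are illustrative stand-ins, not the paper's model, and scipy's default RK45 integrator replaces the paper's fourth-order Runge–Kutta:

```python
# Sketch: generate (stiffness, damping) -> peak-response data from a toy ODE,
# then fit a back-propagation neural network to predict the response.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

def peak_response(k, c):
    # Generic damped oscillator under an exponentially decaying shock load.
    rhs = lambda t, s: [s[1], np.exp(-5.0 * t) - c * s[1] - k * s[0]]
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0])
    return np.max(np.abs(sol.y[0]))

rng = np.random.default_rng(1)
params = rng.uniform([1.0, 0.1], [10.0, 2.0], size=(200, 2))  # (k, c) samples
targets = np.array([peak_response(k, c) for k, c in params])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(params, targets)
print(net.predict([[5.0, 1.0]]))  # predicted peak displacement for k=5, c=1
```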

16 pages, 5948 KiB  
Article
Fuzzy C-Means Based Clustering and Rule Formation Approach for Classification of Bearing Faults Using Discrete Wavelet Transform
by Srivani Anbu, Arunkumar Thangavelu and S. Denis Ashok
Computation 2019, 7(4), 54; https://doi.org/10.3390/computation7040054 - 23 Sep 2019
Cited by 10 | Viewed by 2984
Abstract
Rolling bearings are considered the heart of rotating machinery, and early fault diagnosis is one of the biggest challenges during operation. Due to complicated mechanical assemblies, detecting an advancing fault, or a fault at the incipient stage, is very difficult and tedious. This work presents a fuzzy-rule-based classification of bearing faults that applies the Fuzzy C-means clustering method to vibration measurements. Experiments were conducted to collect the vibration signals of a normal bearing and of bearings with faults in the inner race, the outer race, and the ball. The Discrete Wavelet Transform (DWT) is used to decompose the vibration signals into different frequency bands. In order to detect early faults in the bearings, various statistical features were extracted from the decomposed signal of each frequency band. Based on the extracted features, a Fuzzy C-means (FCM) clustering method is developed to classify the faults using suitable membership functions, and a fuzzy rule base is developed for each class of bearing fault using labeled data. The experimental results show that the proposed method is able to classify the condition of the bearing using the extracted features. The proposed FCM-based clustering and classification model offers easier interpretation and implementation for monitoring the condition of rolling bearings at an early stage, helping operators take preventive action before a large-scale failure.
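
The feature-extraction and clustering steps can be sketched roughly as follows, using PyWavelets for the DWT and a bare-bones Fuzzy C-means written out explicitly. Synthetic signals stand in for the measured bearing vibrations, and the paper's rule-formation stage is omitted:

```python
# Sketch: wavelet-band statistical features + a bare-bones Fuzzy C-means.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=3):
    # RMS and standard deviation of each decomposition band as features.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(c) for c in coeffs
                     for f in (lambda c: np.sqrt(np.mean(c**2)), np.std)])

def fuzzy_c_means(X, n_clusters=4, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))  # membership matrix
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(2)
signals = rng.normal(size=(40, 1024))             # placeholder vibration signals
X = np.array([dwt_features(s) for s in signals])  # feature matrix
centers, U = fuzzy_c_means(X)
print(U.argmax(axis=1)[:10])                      # hard labels from memberships
```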

17 pages, 434 KiB  
Article
Enhanced Feature Subset Selection Using Niche Based Bat Algorithm
by Noman Saleem, Kashif Zafar and Alizaa Fatima Sabzwari
Computation 2019, 7(3), 49; https://doi.org/10.3390/computation7030049 - 06 Sep 2019
Cited by 6 | Viewed by 3158
Abstract
Redundant and irrelevant features disturb the accuracy of a classifier. To avoid redundancy and irrelevancy problems, feature selection techniques are used. Finding the most relevant feature subset, one that can enhance the accuracy rate of the classifier, is among the most challenging parts. This paper presents a new solution for finding relevant feature subsets: the niche-based bat algorithm (NBBA). It is compared with existing state-of-the-art approaches, including evolutionary ones. The multi-objective bat algorithm (MOBA) selected 8, 16, and 248 features with 93.33%, 93.54%, and 78.33% accuracy on the ionosphere, sonar, and Madelon datasets, respectively. The multi-objective genetic algorithm (MOGA) selected 10, 17, and 256 features with 91.28%, 88.70%, and 75.16% accuracy on the same datasets, respectively. The multi-objective particle swarm optimization (MOPSO) selected 9, 21, and 312 features with 89.52%, 91.93%, and 76% accuracy on the same datasets, respectively. In comparison, NBBA selected 6, 19, and 178 features with 93.33%, 95.16%, and 80.16% accuracy; the niche multi-objective genetic algorithm selected 8, 15, and 196 features with 93.33%, 91.93%, and 79.16% accuracy; and the niche multi-objective particle swarm optimization selected 9, 19, and 213 features with 91.42%, 91.93%, and 76.5% accuracy on the same datasets, respectively. Hence, the results show that MOBA outperformed MOGA and MOPSO, and NBBA outperformed the niche multi-objective genetic algorithm and the niche multi-objective particle swarm optimization.
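
The wrapper mechanism underlying such approaches can be illustrated with a much-simplified binary bat algorithm — no niching and no loudness/pulse-rate schedule — using a K-nearest-neighbors classifier as the fitness evaluator and a stand-in dataset in place of ionosphere, sonar, or Madelon:

```python
# Sketch: a simplified binary bat algorithm for wrapper feature selection.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
rng = np.random.default_rng(3)
n_bats, n_feat, n_iter = 15, X.shape[1], 20

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.mean()  # small penalty for larger subsets

pos = rng.random((n_bats, n_feat)) > 0.5      # binary positions (feature masks)
vel = np.zeros((n_bats, n_feat))
fit = np.array([fitness(p) for p in pos])
best, best_fit = pos[fit.argmax()].copy(), fit.max()

for _ in range(n_iter):
    freq = rng.random((n_bats, 1))                     # random pulse frequencies
    vel += (pos.astype(float) - best.astype(float)) * freq
    prob = 1.0 / (1.0 + np.exp(-vel))                  # sigmoid transfer function
    pos = rng.random((n_bats, n_feat)) < prob          # stochastic binary update
    fit = np.array([fitness(p) for p in pos])
    if fit.max() > best_fit:
        best, best_fit = pos[fit.argmax()].copy(), fit.max()

print(best.sum(), "features selected; fitness:", round(best_fit, 4))
```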

22 pages, 572 KiB  
Article
From Complex System Analysis to Pattern Recognition: Experimental Assessment of an Unsupervised Feature Extraction Method Based on the Relevance Index Metrics
by Laura Sani, Riccardo Pecori, Monica Mordonini and Stefano Cagnoni
Computation 2019, 7(3), 39; https://doi.org/10.3390/computation7030039 - 09 Aug 2019
Cited by 5 | Viewed by 3614
Abstract
The so-called Relevance Index (RI) metrics are a set of recently introduced indicators, based on information-theory principles, that can be used to analyze complex systems by detecting the main interacting structures within them. Such structures can be described as subsets of the variables describing the system status that are strongly statistically correlated with one another and mostly independent of the rest of the system. The goal of the work described in this paper is to apply the same principles to pattern recognition and check whether the RI metrics can also identify, in a high-dimensional feature space, attribute subsets from which it is possible to build new features that can be used effectively for classification. Preliminary results indicating that this is possible had been obtained using the RI metrics in a supervised way, i.e., by separately applying the metrics to homogeneous datasets comprising data instances that all belong to the same class, and iterating the procedure over all classes under consideration. In this work, we checked whether this is also possible in a totally unsupervised way, i.e., by considering all available data at the same time, independently of the class to which they belong, under the hypothesis that the peculiarities of the variable sets that the RI metrics identify correspond to the peculiarities by which data belonging to a certain class are distinguishable from data belonging to different classes. The results obtained in experiments with some publicly available real-world datasets show that, especially when coupled with tree-based classifiers, an RI metrics-based unsupervised feature extraction method can perform comparably to or better than other classical supervised or unsupervised feature selection or extraction methods.
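
For intuition, a crude, unnormalized index in the spirit of the RI metrics — the integration of a subset divided by its mutual information with the rest of the system, both estimated with plug-in entropies — can be sketched as follows. This is an assumption-laden simplification: the paper's metrics add a normalization against a homogeneous reference system, which is omitted here:

```python
# Sketch: a crude relevance-style index for a discrete-variable subset:
# integration(S) / mutual_information(S; rest), via plug-in entropy estimates.
import numpy as np
from collections import Counter

def entropy(cols):
    # Plug-in joint entropy (in bits) over the rows of a 2-D discrete array.
    counts = Counter(map(tuple, cols))
    p = np.array(list(counts.values()), dtype=float) / cols.shape[0]
    return float(-np.sum(p * np.log2(p)))

def relevance_style_index(data, subset):
    rest = [i for i in range(data.shape[1]) if i not in subset]
    integration = sum(entropy(data[:, [i]]) for i in subset) - entropy(data[:, subset])
    mutual_info = entropy(data[:, subset]) + entropy(data[:, rest]) - entropy(data)
    return integration / mutual_info if mutual_info > 1e-12 else float("inf")

rng = np.random.default_rng(4)
a = rng.integers(0, 2, 1000)
b = np.where(rng.random(1000) < 0.05, 1 - a, a)   # noisy copy of a
noise = rng.integers(0, 2, (1000, 2))             # independent variables
data = np.column_stack([a, b, noise])
print(relevance_style_index(data, [0, 1]))  # integrated, isolated pair: high index
print(relevance_style_index(data, [0, 2]))  # weakly integrated subset: low index
```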

21 pages, 9592 KiB  
Article
Invertible Autoencoder for Domain Adaptation
by Yunfei Teng and Anna Choromanska
Computation 2019, 7(2), 20; https://doi.org/10.3390/computation7020020 - 27 Mar 2019
Cited by 9 | Viewed by 5204
Abstract
Unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from marginals. Joint learning of the coupled mappings F_AB : A → B and F_BA : B → A is commonly used by state-of-the-art methods, such as CycleGAN, which learn this translation by introducing a cycle-consistency requirement into the learning problem, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce F_BA to be the inverse operation of F_AB. We propose a new deep architecture, which we call the invertible autoencoder (InvAuto), to explicitly enforce this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture reduces the number of trainable parameters (by up to a factor of two). We present image translation results on benchmark datasets and demonstrate state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that videos converted with InvAuto have high quality, and show that the NVIDIA neural-network-based end-to-end learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.
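
The weight-tying idea — a decoder that reapplies the encoder's own weight matrices, transposed and in reverse order — can be sketched in PyTorch as follows. The orthonormality constraint and the exact InvAuto layer types and activations are omitted, so this is a rough illustration rather than the authors' architecture:

```python
# Sketch: an autoencoder whose decoder explicitly mirrors the encoder by
# reusing the transposed encoder weights (shared parameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedInvertibleAE(nn.Module):
    def __init__(self, dims=(784, 256, 64)):
        super().__init__()
        # One weight matrix per layer, shared by encoder and decoder.
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(dims[i + 1], dims[i]) * 0.01)
             for i in range(len(dims) - 1)])

    def encode(self, x):
        for w in self.weights:
            x = torch.tanh(F.linear(x, w))
        return x

    def decode(self, z):
        # Apply the encoder's mappings transposed and in reverse order.
        for w in list(self.weights)[::-1]:
            z = torch.tanh(F.linear(z, w.t()))
        return z

    def forward(self, x):
        return self.decode(self.encode(x))

model = TiedInvertibleAE()
x = torch.randn(8, 784)
loss = F.mse_loss(model(x), x)  # reconstruction objective
loss.backward()
print(loss.item())
```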

27 pages, 7889 KiB  
Article
Development of Simple-To-Use Predictive Models to Determine Thermal Properties of Fe2O3/Water-Ethylene Glycol Nanofluid
by Mohammad Hossein Ahmadi, Ali Ghahremannezhad, Kwok-Wing Chau, Parinaz Seifaddini, Mohammad Ramezannezhad and Roghayeh Ghasempour
Computation 2019, 7(1), 18; https://doi.org/10.3390/computation7010018 - 21 Mar 2019
Cited by 24 | Viewed by 4092
Abstract
Thermophysical properties of nanofluids play a key role in their heat transfer capability and can be significantly affected by several factors, such as temperature and the concentration of nanoparticles. Developing practical, simple-to-use predictive models to accurately determine these properties is advantageous when numerous dependent variables are involved in controlling the thermal behavior of nanofluids. Artificial neural networks are reliable approaches that have recently gained prominence and are widely used for predicting and modeling various systems. In the present study, two novel approaches, Genetic Algorithm–Least Squares Support Vector Machine (GA-LSSVM) and Particle Swarm Optimization–Artificial Neural Network (PSO-ANN), are applied to model the thermal conductivity and dynamic viscosity of Fe2O3/EG-water, considering concentration, temperature, and the EG/water mass ratio as the input variables. The results indicate that the GA-LSSVM approach is more accurate in predicting the thermophysical properties. The maximum relative deviation of GA-LSSVM was approximately ±5% for both the thermal conductivity and the dynamic viscosity of the nanofluid. In addition, the EG/water mass ratio was observed to have the most significant impact on these properties.
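
The LSSVM regression at the core of such a model reduces to solving a single linear system. A minimal sketch with an RBF kernel follows; the data are synthetic placeholders, and the fixed hyper-parameters stand in for the values a genetic algorithm would tune:

```python
# Sketch: least-squares SVM regression with an RBF kernel. gamma (regularization)
# and sigma (kernel width) are fixed here; the paper tunes them with a GA.
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma, sigma):
    # Solve the LS-SVM system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(X)
    K = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, sigma, X_new):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Placeholder inputs (temperature, concentration, EG/water ratio) -> property.
rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, (120, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=120)

b, alpha = lssvm_fit(X, y, gamma=10.0, sigma=0.5)
print(lssvm_predict(X, b, alpha, 0.5, X[:3]), y[:3])
```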

19 pages, 404 KiB  
Article
Extreme Multiclass Classification Criteria
by Anna Choromanska and Ish Kumar Jain
Computation 2019, 7(1), 16; https://doi.org/10.3390/computation7010016 - 12 Mar 2019
Cited by 1 | Viewed by 4192
Abstract
We analyze the theoretical properties of a recently proposed objective function for the efficient online construction and training of multiclass classification trees in settings where the label space is very large. We show the important properties of this objective and provide a complete proof that maximizing it simultaneously encourages balanced trees and improves the purity of the class distributions at subsequent levels in the tree. We further explore its connection to three well-known entropy-based decision tree criteria, i.e., Shannon entropy, Gini-entropy, and its modified variant, for which efficient optimization strategies are largely unknown in the extreme multiclass setting. We show theoretically that this objective can be viewed as a surrogate function for all of these entropy criteria and that maximizing it indirectly optimizes them as well. We derive boosting guarantees and obtain a closed-form expression for the number of iterations needed to reduce the considered entropy criteria below an arbitrary threshold. The obtained theorem relies on a weak hypothesis assumption that depends directly on the considered objective function. Finally, we prove that optimizing the objective directly reduces the multiclass classification error of the decision tree.
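
As a point of reference, objectives of this kind (e.g., in the LOMtree line of work — stated here as an assumption about the exact form analyzed) reward node hypotheses h that route about half the data to each child while separating the class-conditional distributions: J(h) = 2 Σ_i π_i |P(h(x) > 0) − P(h(x) > 0 | y = i)|. A tiny empirical version of that formula:

```python
# Sketch: empirical estimate of a balanced-and-pure tree-splitting objective
# of the form J(h) = 2 * sum_i pi_i * |P(h>0) - P(h>0 | y=i)|.
import numpy as np

def splitting_objective(h_values, labels):
    go_right = h_values > 0
    p_right = go_right.mean()
    classes, counts = np.unique(labels, return_counts=True)
    priors = counts / len(labels)
    per_class = np.array([go_right[labels == c].mean() for c in classes])
    return 2.0 * np.sum(priors * np.abs(p_right - per_class))

rng = np.random.default_rng(6)
labels = rng.integers(0, 4, 1000)
perfect = np.where(labels < 2, -1.0, 1.0)     # balanced, pure split
random_h = rng.normal(size=1000)              # uninformative split
print(splitting_objective(perfect, labels))   # close to the maximum (1.0)
print(splitting_objective(random_h, labels))  # close to 0
```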

Review

35 pages, 1891 KiB  
Review
Machine-Learning Methods for Computational Science and Engineering
by Michael Frank, Dimitris Drikakis and Vassilis Charissis
Computation 2020, 8(1), 15; https://doi.org/10.3390/computation8010015 - 03 Mar 2020
Cited by 101 | Viewed by 13275
Abstract
The rekindled fascination with machine learning (ML), observed over the last few decades, has also percolated into the natural sciences and engineering. ML algorithms are now used in scientific computing as well as in data mining and processing. In this paper, we provide a review of the state of the art in ML for computational science and engineering. We discuss ways of using ML to speed up or improve the quality of simulation techniques such as computational fluid dynamics, molecular dynamics, and structural analysis. We explore the ability of ML to produce computationally efficient surrogate models of physical applications that circumvent the need for the more expensive simulation techniques entirely. We also discuss how ML can be used to process large amounts of data, drawing examples from many different scientific fields, such as engineering, medicine, astronomy, and computing. Finally, we review how ML has been used to create more realistic and responsive virtual-reality applications.
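
One of the usage patterns the review surveys — training a cheap surrogate on a handful of expensive simulation runs — can be sketched as follows; the "simulation" here is a trivial placeholder function standing in for a costly solver:

```python
# Sketch: an ML surrogate model replacing an expensive simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # Stand-in for a costly solver (e.g., a CFD run at design point x).
    return np.sin(5 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(7)
X_train = rng.uniform(0, 1, (40, 2))   # a handful of simulation runs
y_train = expensive_simulation(X_train)

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_train, y_train)
X_query = rng.uniform(0, 1, (5, 2))
mean, std = surrogate.predict(X_query, return_std=True)
print(mean, std)  # fast predictions with uncertainty, no new solver runs
```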
