Article

A Supervised Machine Learning Model for Regression to Predict Melt Pool Formation and Morphology in Laser Powder Bed Fusion

Niccolò Baldi, Alessandro Giorgetti, Alessandro Polidoro, Marco Palladino, Iacopo Giovannetti, Gabriele Arcidiacono and Paolo Citti
1 Department of Engineering Science, Guglielmo Marconi University, 00193 Rome, Italy
2 Department of Industrial, Electronic and Mechanical Engineering, Roma Tre University, 00146 Rome, Italy
3 Baker Hughes, Nuovo Pignone, 50127 Florence, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 328; https://doi.org/10.3390/app14010328
Submission received: 2 November 2023 / Revised: 22 December 2023 / Accepted: 27 December 2023 / Published: 29 December 2023

Abstract

In the additive manufacturing laser powder bed fusion (L-PBF) process, the optimization of the print process parameters and the development of conduction zones in the laser power (P) and scanning speed (V) parameter space are critical to meeting production quality, productivity, and volume goals. In this paper, we propose the use of a machine learning approach during process parameter development to predict the melt pool dimensions as a function of the P/V combination. This approach proves useful in speeding up the identification of the printability map of the material and in defining the conduction zone during the development phase. Moreover, a machine learning method allows for an accurate investigation of the most promising configurations in the P-V space, facilitating the optimization and identification of the P-V set with the highest productivity. The approach is validated by an experimental campaign carried out on samples of Inconel 718, and the effects of some additional parameters, such as the layer thickness (in the range of 30 to 90 microns) and the preheating temperature of the building platform, are evaluated. More specifically, the experimental data have been used to train supervised machine learning models for regression using the KNIME Analytics Platform (version 4.7.7). An AutoML (Regression) node is used to identify the most appropriate model based on the evaluation of R2 and MAE scores. The gradient boosted trees model performs best and also outperforms Rosenthal’s analytical model.

1. Introduction

The introduction of direct metal laser sintering (DMLS) technology by EOS in 1994 was a significant technological development. As OEMs successfully adapted their selective laser sintering (SLS) technologies to take advantage of the high-power fiber lasers available in the late 1990s, DMLS progressively evolved into laser powder bed fusion (L-PBF) technology, referred to as powder bed fusion–laser melting (PBF-LM) in ISO/ASTM 52900 [1].
The ability to melt metal alloys of interest and create custom parts with a degree of geometrical complexity that could not otherwise have been produced through traditional means drove a host of technological advancements, not just in printing technology and the associated hardware but also in the equally important ecosystem of software tools supporting the manufacturing of viable industry-grade parts [2,3,4].
For the early commercial adopters of the L-PBF technology, primarily in medical applications, the focus was predominantly on printing novel organic designs of relatively smaller sizes and a simpler geometric complexity [5]. Success was achieved in most cases in an iterative experimental manner. The applications were selected such that the increased performance and the benefits of the new additively made product over traditional variants would justify the higher development and production costs.
Subsequently, within the last five to ten years, technological advances in metal printing have facilitated the exponential growth of L-PBF AM in numerous sectors, including medicine, aviation, oil and gas, and energy. Industry has made substantial investments in the development of industrial applications where the printed components were progressively more complex and large and were required to adhere to more rigorous quality and functionality standards. Following that, it became imperative to incorporate design for additive manufacturing (DfAM) principles into the product design process flow. This was further supported by the utilization of multiphysics simulation software for performance validation and shape and topology optimization, in order to attain product differentiation at a competitive price.
In addition to the main design and printer loading considerations, the cost structure on the production side is significantly influenced by the ability to perform a high-resolution optimization of the print process parameter set for a specific type of machine and material. This optimization process aims to identify a set of optimal parameters that maximize the production throughput while delivering a product with a microstructure, density, defect density, and surface finish that require minimal post-processing.
To achieve that, a robust and stable production process must first be established by determining the limits of the L-PBF printability zone in the laser power (P)–scanning speed (V) variable space for a particular material and printer type [3,6,7,8,9,10]. This is necessary to avoid the formation of defects, primarily lack of fusion and balling [11,12,13] and the keyhole phenomena associated with higher energy inputs [14,15]. Once the limits have been determined without conservatively over-restricting the usable printability region, the second step is developing a set of optimal parameters within this space. Identifying the printability region [16] traditionally requires a laborious, time-consuming, and expensive experimental effort. Furthermore, time and cost constraints frequently limit the resolution with which the variable space is surveyed [17,18] (i.e., the number of specimens used to evaluate the quality of the printing process). Notwithstanding this, the growing demand for L-PBF to enhance process productivity has resulted in a heightened frequency of parameter development and determination of the printability region of the material. In order to alter the laser beam profile or spot size, many techniques have been implemented, including ring mode or laser defocusing [19,20,21,22], as well as the utilization of multilaser machines [23].
Thus, a model that allows for the prediction of melt pool morphology and formation is strongly needed to speed up and optimize the identification of the printability map in the L-PBF process. The possibility of obtaining preliminary information on melt pool shapes as a function of the laser power and scanning speed combination without any printing could significantly reduce the number of physical experiments, thereby strongly reducing the number of microstructural analyses of the material, which are expensive and time-consuming. Moreover, the model could improve the choice of the best laser power and scanning speed, allowing for a very accurate investigation of the P-V space.
This work aims to use a supervised machine learning (ML) model for regression to identify the best model able to synthesize the experimental data on melt pool shapes. The application of ML models to predict several phenomena in the L-PBF process has been explored by researchers in the last few years [23,24,25]. For instance, in previous studies in the literature, the employment of ML models allowed for the evaluation of the keyhole and balling phenomena [13], which are difficult to predict using conventional FEA methods. Moreover, a surrogate model [26] enabled the prediction of melt pool geometry as a function of the laser power, scanning speed, and beam spot size for a 316L stainless steel case study. In addition, some researchers [27,28,29,30,31,32] have implemented machine learning-based monitoring of the L-PBF process based on in situ laser single-track videos, allowing for the possibility of predicting track widths with a high accuracy (R2 of 0.93). In [30], the performance of several machine learning models was evaluated to predict the melt pool behavior over a wide range of metal alloys, considering different AM processes such as L-PBF and direct energy deposition (DED). Despite all these promising results, the effect of some parameters, such as the layer thickness and building platform temperature, needs to be investigated more deeply. The layer thickness turns out to be one of the most important parameters to increase in order to enhance the low productivity of this process [2,33] and to optimize the application of some printing strategies such as hull bulk [34]. Thus, the ability of an ML model to predict melt pool formation, and consequently to identify the printability window (usually narrower as the thickness increases [35]) for high layer thickness parameters, is fundamental to meeting the major needs of L-PBF. In addition, the preheating temperature is a very important parameter to analyze and investigate [36,37,38,39]. Preheating is necessary to reduce part distortions and material residual stresses, minimize heat loss, and improve adhesion between the first deposited powder layers of the part and the building platform during melt pool formation. Thus, this work aims to evaluate, using the KNIME Analytics Platform (version 4.7.7) through the AutoML Regression tool, the most appropriate model able to synthesize the experimental data on melt pool shapes. Moreover, in order to provide an accurate and large enough dataset to properly train the ML models evaluated in this case study, numerous experimental campaigns on Inconel 718 alloy specimens were carried out, considering the effects of variations in laser power, scanning speed, layer thickness, and building platform temperature.
The structure of this paper is organized as follows: Section 2 describes the experimental test settings and the models evaluated and analyzed using the AutoML Regression tool. Section 3 presents the results achieved for each case study in terms of physical results, results predicted by the regression model selected by AutoML, and Rosenthal solution results. The experimental outcomes and the assessed precision of the regression models are discussed in Section 4. Section 5 outlines the conclusions and suggested advancements for future work.

2. Materials and Methods

2.1. Experimental Test Settings

The experimental campaign is conducted on specimens manufactured from Inconel 718, a nickel-based alloy widely utilized in the L-PBF process for component fabrication. The mechanical–thermal properties and chemical composition of Inconel 718 are evaluated in accordance with information from prior studies [16,22,36].
All experiments are conducted using a Renishaw AM500Q machine (Renishaw Ltd., Gloucestershire, UK) equipped with four ytterbium fiber lasers. These lasers have a minimum spot size of 82 µm and a beam wavelength of 1070 nm. A maximum of 500 W of laser power is available for each independent laser.
In every case study, the specimens utilized for the analysis are modeled according to the following specification (Figure 1). In more detail, the specimens are modeled as parallelepipeds of size 10 × 10 × 2 mm, which is suitable for printing ten single tracks [2,4,18,40,41,42,43,44,45,46,47,48,49,50] on a stable consolidated material [36]. The distance between the tracks is set so as to prevent adjacent melt pools from overlapping, whereas the track length is maintained at 7 mm to provide a steady-state melt pool.
The specimens are positioned on the building platform using Materialise Magics 25.1 (Materialise NV, Leuven, Belgium). The process parameters and laser assignment are set using Renishaw QuantAM software version 5.3.0.7015.
Following printing, the specimens are removed from the building platform using a wire EDM machine (ECUT EU MS Genesi). Then, each specimen is prepared for micrographic analysis following the procedure explained in [36], performing the following steps:
  • To acquire an appropriate portion for analysis, each specimen is sectioned using a Struers Secotom-20 machine at a distance of 4 mm from the border of the lateral surface (Figure 1). This operation is essential for analyzing the melt pool in its steady state, excluding edge effects.
  • Using a Struers CitoPress-30 machine, the specimen segment is embedded in conductive resin.
  • The specimen’s surface is polished with a Struers Tegramin-30 machine.
The specimens are prepared for the melt pool analysis following the procedure described in [16], while the analyses are carried out using an Optical Microscope Leica Leitz DMRME (Leica Microsystems GmbH, Wetzlar, Germany). The complete procedure used to perform the melt pool analysis is described in [16,22,36]. Basically, in order to calculate the powder melting regime, the melt pool depth and width are measured [5,13]. In particular, for each specimen, the melt pool depth and width are measured for five random single tracks in a section obtained at a 4 mm distance from the lateral edge to analyze a melt pool in its steady-state condition. Table 1, Table 2 and Table 3 show the process parameters tested for each case study carried out in this work. Thus, the process parameters taken into account in this work to predict the melt pool shape are the scanning speed, laser power, layer thickness, and building platform temperature.

2.2. Machine Learning Workflow

The whole machine learning cycle is carried out using the open-source software KNIME Analytics Platform. The workflow is described in Figure 2, and it consists of different phases: data preparation, partitioning, parameter optimization with cross-validation, the selection of the best model, the execution of the prediction, final scoring, and the evaluation of the model. A subworkflow named “AutoML (Regression)” is used to carry out the parameter optimization with cross-validation and the best model selection.
To evaluate the goodness of the model, we first divide the data into two partitions, using one set as a verification comparison in the final phase. We split the data into 80% for the training phase and 20% for verifying and evaluating the prediction accuracy. The division of the data into the two partitions, “Train” and “Test”, is based on a stratified sampling technique on the target class with an 80%/20% split.
This workflow includes the AutoML (Regression) component, a complete encapsulated workflow designed to identify the best model for the subsample of data sent to the node. This workflow uses the first partition of the data (the individual measurements are provided to the AutoML component as input data) as a subsample to split into learner and predictor partitions. The first subset (80% of the data) is, in fact, partitioned once again into 80% and 20% to learn and therefore train the model within the AutoML (Regression) component.
In the data preprocessing phase, the data subsample is cleaned by replacing missing values, prepared, and normalized using Z-score normalization. Then, the data subsample is split into two parts using the stratified sampling method on the target class, with 80% of the data going to the learner partition and the other 20% going to the predictor partition. Finally, machine learning models for regression are compared using cross-validation to tune a set of parameters, scored by the R2 metric on the training data.
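For readers who prefer scripting, the same preprocessing and partitioning logic can be reproduced outside KNIME. The following is a minimal Python/scikit-learn sketch, assuming a hypothetical CSV file and column names (the actual dataset schema is not published here); it mirrors the mean imputation, Z-score normalization, and 80/20 stratified-style split described above.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer

# Hypothetical column names; the real dataset schema is not published with the paper.
FEATURES = ["laser_power_W", "scanning_speed_mm_s", "layer_thickness_um", "platform_temp_C"]
TARGET = "melt_pool_depth_um"

df = pd.read_csv("melt_pool_measurements.csv")  # placeholder file name

# Replace missing numeric values with the column mean (as the AutoML component does).
imputer = SimpleImputer(strategy="mean")
X = pd.DataFrame(imputer.fit_transform(df[FEATURES]), columns=FEATURES)
y = df[TARGET]

# Approximate stratified sampling on a continuous target by binning it into quantiles.
bins = pd.qcut(y, q=5, labels=False, duplicates="drop")
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=bins, random_state=42
)

# Z-score normalization, fitted on the training partition only.
scaler = StandardScaler()
X_train_z = scaler.fit_transform(X_train)
X_test_z = scaler.transform(X_test)
```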
The KNIME system is based on a no-code (or low-code, allowing minimal manual corrections) language; each node represents a function. It is possible to join and automate node systems through aggregates called “components”. This specific component is based on the automatic training of supervised machine learning models; in this case, it is an AutoML component specifically designed for regression, derived from the AutoML nodes originally designed for classification. The component is a complete workflow capable of automating the entire ML selection cycle. It performs complete data preparation, detailed parameter optimization with cross-validation for each ML model, detailed scoring, and automatic evaluation and selection.
These are the detailed steps inside the AutoML (Regression) component:
  • Data preparation: Before training the models, the data are corrected by replacing missing values with the most frequent value in categorical columns or with the average in numerical columns. All numeric features are then converted to doubles. Next, they are normalized using Z-score normalization and automatically divided into the two training and test partitions, as previously mentioned (80% and 20%).
  • Model training: Each ML model has a set of parameters that can be tuned and set within the component using the previously user-defined evaluation metrics on the training data and cross-validation. Cross-validation takes a dataset and randomly divides it into a number of equally sized segments, called folds. The machine learning algorithm is trained on all folds except one, and each fold is tested against a model trained on all the other folds. This means that every trained model is tested on data it has not seen before. The process is repeated until every fold has been used for testing at least once.
  • Model scoring and selection: Once all the models are trained, the system applies each model to the test set. The predictions are then compared with real data, making it possible to calculate the actual performance of each trained model. The best model is finally selected based on user-defined scores and metrics (e.g., R2 or mean absolute error).
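The tune–score–select loop performed by the component can be sketched in Python/scikit-learn as follows; the candidate models and parameter grids are illustrative stand-ins for the component’s internal settings, and the arrays X_train_z, X_test_z, y_train, and y_test are assumed to come from the preprocessing sketch given earlier in this section.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Candidate models and illustrative parameter grids (not the component's exact grids).
candidates = {
    "regression_tree": (DecisionTreeRegressor(random_state=0),
                        {"min_samples_leaf": [2, 5, 10]}),
    "linear_regression": (LinearRegression(), {}),
    "gradient_boosted_trees": (GradientBoostingRegressor(random_state=0),
                               {"n_estimators": [50, 70, 90, 110]}),
    "random_forest": (RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 200], "max_depth": [None, 5, 10]}),
}

results = {}
for name, (model, grid) in candidates.items():
    # 5-fold cross-validation on the training partition, scored by R2.
    search = GridSearchCV(model, grid, cv=5, scoring="r2")
    search.fit(X_train_z, y_train)
    pred = search.predict(X_test_z)
    results[name] = {
        "best_params": search.best_params_,
        "R2": r2_score(y_test, pred),
        "MAE": mean_absolute_error(y_test, pred),
    }

# Select the model with the highest R2 on the held-out test partition.
best = max(results, key=lambda k: results[k]["R2"])
print(best, results[best])
```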
The component returns a set of models with the best parameters with which it is possible to obtain the best results. To learn more, it is possible to open the component and manually modify the various automations to find the optimal set of parameters. In the present case, the “Keras Deep Learning” nodes were configured manually to better set up the required Python environment (a function not otherwise available).
The results, which can later be used for operational end-to-end workflows, allow us to focus attention on the most suitable models. It is advisable, however, to rerun AutoML Regression periodically, since new data integrated over time may change which model performs best. The machine learning models evaluated through AutoML Regression are the following:
  • Regression tree;
  • Linear regression;
  • Polynomial regression;
  • XGBoost linear ensemble;
  • XGBoost tree ensemble;
  • Gradient boosted trees;
  • Random forest;
  • Deep learning (Keras).
After that, the specified models are trained and stored in a single table, and each model is applied to the test set by the system. Then, the predictions from all models are compared with the experimental data, and performance metrics are calculated. At the end of the workflow, the system selects the best model and applies it, through the Workflow Executor node, to the 20% test subset, automatically producing predictions with the best model obtained by the AutoML (Regression) component. At this point, it is also possible to automatically build a workflow already optimized with the best model and the correct parameters to directly obtain predictions on new sets of data.

2.3. Models Considered by AutoML for Regression

2.3.1. Regression Tree

Regression trees are decision trees in which the target variables can assume continuous values instead of class labels in the leaves. In this model, the split selection and stopping criteria are modified accordingly. The model uses nodes, branches, and leaves to divide and organize the data into subsets. Regression trees work like a decision tree, selecting the splits that most reduce the dispersion of the target attribute values, so the target attribute can be predicted from its mean value in the leaves. The high human readability of these algorithms facilitates their proliferation and widespread use. Moreover, the regression tree model not only predicts the target attribute values but also provides an explanation of which attributes are used, and how, to produce the prediction. This model is trained in KNIME with the optimized parameter “Minimum number of records per node”.
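As a rough equivalent outside KNIME, the tuned parameter “Minimum number of records per node” maps approximately to min_samples_leaf in scikit-learn; the sketch below is illustrative only and reuses the arrays from the earlier preprocessing example.

```python
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.model_selection import GridSearchCV

# "Minimum number of records per node" maps roughly to min_samples_leaf in scikit-learn.
tree = GridSearchCV(DecisionTreeRegressor(random_state=0),
                    {"min_samples_leaf": [2, 5, 10, 20]}, cv=5, scoring="r2")
tree.fit(X_train_z, y_train)

# The fitted tree is human readable: each leaf stores the mean target value of its records.
print(export_text(tree.best_estimator_, feature_names=FEATURES))
```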

2.3.2. Linear Regression

In linear regression, the relationship between two variables is modeled by fitting a linear equation to observed and measured data, considering one variable as an explanatory variable and the other as a dependent variable. Before attempting to fit a linear model to observed data, it is necessary to determine whether there is a relationship between the investigated variables. In particular, a scatterplot of the data, together with the fitted regression line, can be a powerful tool to assess the strength of the relationship between two variables. It is important to highlight that if there is no remarkable association between the explanatory and dependent variables, fitting a linear regression model to the observed data will not provide a reliable and useful model. The strength of the relationship between the two variables is measured through a correlation coefficient, a value between −1 and 1. Linear regression is trained in KNIME with the default parameters.
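A minimal illustration of this check-then-fit procedure, reusing the previously prepared arrays (an assumption) and scikit-learn’s default linear regression, is shown below.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Check the strength of the linear association before fitting (values near ±1 are strong).
r = np.corrcoef(X_train_z[:, 0], y_train)[0, 1]   # e.g. laser power vs. melt pool depth
print(f"correlation coefficient: {r:.2f}")

lin = LinearRegression().fit(X_train_z, y_train)  # default parameters, as in KNIME
print("R2 on test data:", lin.score(X_test_z, y_test))
```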

2.3.3. Polynomial Regression

In polynomial regression, the relationship between two variables, the explanatory and dependent variables, is expressed through an nth-degree polynomial. Polynomial regression allows for the fitting of a nonlinear relationship between the explanatory variable and the conditional mean of the dependent variable, generally estimated using the least-squares method. In particular, this approach minimizes the variance of the coefficient estimates according to the Gauss–Markov theorem. A polynomial regression is a type of linear regression in which the relationship between the explanatory and dependent variables is curvilinear; in this case, a polynomial equation is fitted to the observed data. Polynomial regression is trained in KNIME with the optimized parameter “Polynomial degree”.
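A compact way to reproduce this outside KNIME is a pipeline that expands the features polynomially and then fits a least-squares linear model, tuning the degree by cross-validation; the sketch below is illustrative and reuses the arrays from the earlier preprocessing example.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV

# Polynomial regression = least-squares linear fit on polynomial feature expansions;
# the polynomial degree is the tuned parameter.
poly = make_pipeline(PolynomialFeatures(), LinearRegression())
search = GridSearchCV(poly, {"polynomialfeatures__degree": [2, 3, 4]}, cv=5, scoring="r2")
search.fit(X_train_z, y_train)
print(search.best_params_, search.score(X_test_z, y_test))
```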

2.3.4. XGBoost (Tree and Linear Ensemble)

XGBoost [51] is an open-source, optimized distributed gradient boosting library. It provides an effective implementation of the gradient boosting algorithm, using a minimal amount of resources. XGBoost is commonly used for supervised learning problems to predict a target variable. In more detail, it examines the distribution of features at all points considered and uses this information to reduce the search space of possible feature splits, allowing many hyperparameters to be set and performing additive optimization on the gradient boosting model. XGBoost is used in different configurations in KNIME, based on the default booster type. In the AutoML (Regression) component, it can be set to the following configurations:
  • XGBoost tree ensemble: This is the default configuration for XGBoost (booster = gbtree). The parameters tuned for training are “eta” (step size shrinkage used in the update to prevent overfitting) and “max depth” (increasing the value makes the model more likely to overfit).
  • XGBoost linear ensemble: This is the “linear” configuration for XGBoost (booster = gblinear). The parameters tuned for training in this case are “alpha” and “lambda” (both regularization terms on weights).
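Both configurations can be approximated with the xgboost Python package, as in the hedged sketch below; the specific parameter values are placeholders, not those selected by the AutoML component, and the arrays are assumed from the earlier preprocessing example.

```python
from xgboost import XGBRegressor

# Tree booster: tune the learning rate (eta) and max_depth.
xgb_tree = XGBRegressor(booster="gbtree", learning_rate=0.1, max_depth=4, n_estimators=200)
xgb_tree.fit(X_train_z, y_train)

# Linear booster: tune the regularization terms alpha (L1) and lambda (L2).
xgb_linear = XGBRegressor(booster="gblinear", reg_alpha=0.1, reg_lambda=1.0)
xgb_linear.fit(X_train_z, y_train)

# R2 on the held-out test partition for both configurations.
print(xgb_tree.score(X_test_z, y_test), xgb_linear.score(X_test_z, y_test))
```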

2.3.5. Gradient Boosted Trees

Gradient boosting is a methodology applied on top of another machine learning algorithm. Two types of models are involved: a “weak” machine learning model (typically a decision tree) and a “strong” machine learning model (composed of multiple weak models). At each step, a new weak model is trained to predict the error of the current strong model (the pseudo-response). The weak model (i.e., the predicted error) is then added to the strong model with a negative sign in order to reduce the error of the strong model. Gradient boosting is an iterative method, and each iteration applies the update F_{i+1} = F_i − f_i, where F_i is the strong model and f_i is the weak model at step i.
This operation is repeated until a stopping criterion is met, such as reaching a maximum number of iterations or when the strong model begins to overfit, as measured on a separate validation dataset. The gradient boosted trees model is trained in KNIME with the optimized parameter “Number of trees”.
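The iteration F_{i+1} = F_i − f_i can be illustrated with a short from-scratch loop that uses shallow regression trees as weak learners; this is a didactic sketch of the update rule under assumed settings, not the KNIME implementation, and it reuses the arrays from the earlier preprocessing example.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

n_trees, weak_learners = 90, []              # illustrative number of boosting steps
F = np.full(len(y_train), np.mean(y_train))  # initial strong model: the mean prediction

for i in range(n_trees):
    error = F - np.asarray(y_train)          # pseudo-response: error of the current strong model
    f_i = DecisionTreeRegressor(max_depth=3).fit(X_train_z, error)
    weak_learners.append(f_i)
    F = F - f_i.predict(X_train_z)           # F_{i+1} = F_i - f_i

def predict(X):
    # Strong model = initial mean minus the sum of the weak models' error predictions.
    return np.mean(y_train) - sum(t.predict(X) for t in weak_learners)
```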

2.3.6. Random Forest

The random forest is a supervised machine learning technique derived from the decision tree algorithm, and it is commonly used to solve regression and classification problems. It relies on ensemble learning, which combines many predictors to provide solutions to complex problems. In more detail, a random forest consists of many decision trees, and the forest generated by this algorithm is trained through bagging (bootstrap aggregating). In a random forest, the outcome is established based on the predictions of the individual decision trees: the average of the outputs obtained from the various trees is taken as the prediction, and as the number of trees increases, the accuracy of the predictions generally improves.
The drawbacks of decision tree algorithms can be mitigated by using a random forest approach, which reduces dataset overfitting and produces predictions without requiring extensive configuration.
This algorithm in KNIME is trained with the optimized parameters “Tree Depth”, “Number of models”, and “Minimum child node size”.
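An approximate scikit-learn counterpart of this tuning, mapping the three KNIME parameters to max_depth, n_estimators, and min_samples_leaf, is sketched below with illustrative grids and the arrays from the earlier preprocessing example.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# "Tree Depth", "Number of models" and "Minimum child node size" map roughly to
# max_depth, n_estimators and min_samples_leaf in scikit-learn.
grid = {"max_depth": [5, 10, None],
        "n_estimators": [100, 200, 400],
        "min_samples_leaf": [1, 2, 5]}
rf = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=5, scoring="r2")
rf.fit(X_train_z, y_train)
print(rf.best_params_, rf.score(X_test_z, y_test))
```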

2.3.7. Deep Learning (Keras)

Keras is the high-level application programming interface (API) of the TensorFlow platform, providing an approachable, highly productive interface for solving ML problems, with a particular focus on modern deep learning.
Every step of the machine learning workflow is covered using this high-level API, from data processing to hyperparameter tuning to deployment. The development of Keras is mainly focused on the possibility of enabling fast experimentation and saving computational time.
Moreover, this interface is designed to reduce the cognitive load by pursuing goals such as a simple and consistent interface, a reduced number of steps for typical use cases, clear and useful error messages, and helping the user to write concise and readable code. The core data structures of Keras are layers and models, with a layer being a simple input/output transformation, while a model is a directed acyclic graph of layers.
This high-level API is trained with KNIME Deep Learning—Keras Integration with no parameter optimization and two simple architectures for binary and multiclass classification determined by a few simple heuristics.
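Since the network architecture chosen by the KNIME Keras integration is not disclosed here, the following is only a hypothetical dense regression network, written directly against the Keras API, to illustrate the kind of model involved; it reuses the arrays and feature list from the earlier preprocessing example.

```python
from tensorflow import keras

# Hypothetical architecture: an illustrative dense regression network, not the one
# generated by the KNIME Keras integration.
model = keras.Sequential([
    keras.layers.Input(shape=(len(FEATURES),)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),                      # single continuous output (depth or width)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train_z, y_train, epochs=200, batch_size=16, validation_split=0.2, verbose=0)
print(model.evaluate(X_test_z, y_test, verbose=0))  # [mse, mae] on the test partition
```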

3. Results

3.1. Melt Pool Analyses Results

Table 4, Table 5 and Table 6 show the melt pool analysis results for each case study investigated in this work. The melt pool shape is characterized in terms of the melt pool depth and width, reported as the average and standard deviation of the five single tracks analyzed for each sample. Table 4 reports the experimental data obtained for a layer thickness of 30 µm, Table 5 for a layer thickness of 60 µm, and Table 6 for a layer thickness of 90 µm.

3.2. Machine Learning Model for Regression Results

The performance of the models analyzed by AutoML is evaluated in terms of the R2 for correlation and the mean absolute error (MAE) for the model error versus the experimental data. The results obtained by each model for melt pool depth and width prediction are listed in Table 7 and Table 8, respectively.
Table 7. Results in terms of R2 and MAE of the different ML models tested considering the melt pool depth.
Model | R2 | MAE
Gradient boosted trees | 0.953 | 9.382
Deep learning (Keras) | 0.941 | 10.403
Regression tree | 0.749 | 11.454
Polynomial regression | 0.706 | 13.167
XGBoost tree ensemble | 0.667 | 13.219
Random forest | 0.649 | 13.905
Linear regression | 0.472 | 19.103
XGBoost linear ensemble | 0.125 | 26.319
Table 8. Results in terms of R2 and MAE of the different ML models tested considering the melt pool width.
Model | R2 | MAE
Gradient boosted trees | 0.751 | 10.59
Deep learning (Keras) | 0.749 | 11.454
Regression tree | 0.746 | 11.392
Polynomial regression | 0.706 | 13.167
XGBoost tree ensemble | 0.667 | 13.219
Random forest | 0.649 | 13.905
Linear regression | 0.472 | 19.103
XGBoost linear ensemble | 0.125 | 26.319
The best model identified for both melt pool depth and width prediction is gradient boosted trees, with an optimized number of trees equal to 90 and 70, respectively. Figure 3 and Figure 4 compare, through line plots, the gradient boosted trees model predictions with the experimental results for the melt pool depth and width.

3.3. Results of Rosenthal Analytical Model

The results achievable with Rosenthal’s analytical model [2] were calculated and compared with the experimental data obtained in the case studies (Section 3.1). In this way, they can be used as a term of comparison for the model developed in this work through the ML approach. Table 9 reports the R2 and MAE calculated for the Rosenthal solution for the melt pool depth and width, obtained by providing as input the process parameters investigated in these case studies, including the layer thickness (in the range of 30 to 90 µm) and the building platform temperature (varying from 80 to 170 °C).
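For reference, Rosenthal’s quasi-steady moving point-source solution can be evaluated numerically to extract a melt pool depth and width, as in the sketch below; the material properties, absorptivity, and search window are assumed illustrative values, and this is not necessarily the exact formulation or property set used in the paper.

```python
import numpy as np

# Approximate, temperature-independent IN718 properties (assumed values, for illustration only).
k = 11.4         # thermal conductivity, W/(m K)
rho = 8190.0     # density, kg/m^3
c = 435.0        # specific heat, J/(kg K)
T_melt = 1609.0  # liquidus temperature, K
A = 0.4          # assumed laser absorptivity
alpha = k / (rho * c)  # thermal diffusivity, m^2/s

def rosenthal_T(x, y, z, P, v, T0):
    """Quasi-steady Rosenthal temperature field for a point source moving along +x."""
    R = np.sqrt(x**2 + y**2 + z**2) + 1e-12
    return T0 + A * P / (2.0 * np.pi * k * R) * np.exp(-v * (R + x) / (2.0 * alpha))

def melt_pool_size(P, v, T0, half_span=400e-6, n=400):
    """Scan the melt isotherm around the source to estimate depth and width (in meters)."""
    x = np.linspace(-half_span, half_span, n)
    y = np.linspace(0.0, half_span, n)
    z = np.linspace(0.0, half_span, n)
    Xw, Yw = np.meshgrid(x, y, indexing="ij")
    width = 2.0 * Yw[rosenthal_T(Xw, Yw, 0.0, P, v, T0) >= T_melt].max(initial=0.0)
    Xd, Zd = np.meshgrid(x, z, indexing="ij")
    depth = Zd[rosenthal_T(Xd, 0.0, Zd, P, v, T0) >= T_melt].max(initial=0.0)
    return depth, width

# Example: 280 W, 1000 mm/s, 170 degC platform temperature.
d, w = melt_pool_size(P=280.0, v=1.0, T0=170.0 + 273.15)
print(f"depth ≈ {d*1e6:.0f} µm, width ≈ {w*1e6:.0f} µm")
```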

4. Discussion

The experimental results obtained for each case study, as shown in Table 4, Table 5 and Table 6, are aligned with those obtained in the previous literature [36], thus confirming the robustness of the physical tests carried out and the methodology applied to measure the melt pool dimensions. Furthermore, from the evaluation of the melt pool morphology and the analysis of the printability maps derived for each case study, it is possible to observe that the printability region becomes narrower as the layer thickness is increased, emphasizing the criticality of printing high-layer-thickness parts in the L-PBF process [33,34,35].
The comparison between the physical and ML model results is shown in Figure 3 and Figure 4; a good match can be highlighted for both the melt pool depth and width. Comparing the melt pool depth predictions made by the best ML regression model, the gradient boosted trees, and by the Rosenthal solution side by side, both models correlate well with the experimental data, with R2 values of 0.95 and 0.82, respectively. In contrast, in terms of MAE, the gradient boosted trees model, with a calculated MAE of 9.38, turns out to be significantly better than the Rosenthal solution, with a calculated MAE of 37.71. The gradient boosted trees model is confirmed to be significantly more accurate and reliable for melt pool width prediction as well, with an R2 score of 0.75, higher than the 0.54 calculated for the Rosenthal solution. More specifically, the ML model exhibits a considerable improvement, with a roughly 10-fold reduction in MAE compared to the Rosenthal solution (10.86 against 108.9). The achieved results are of particular interest due to the possibility of predicting the melt pool shape over a wide range of layer thickness values. This can speed up the development of high-productivity process parameters or facilitate the application of printing strategies such as hull bulk. In addition, this ML model’s ability to predict how the melt pool shape changes with the preheating temperature, over the 80 °C to 170 °C range in which most commercial L-PBF machines operate, is essential for fully understanding how melt pools form. In this sense, it is important to emphasize that the highest preheating temperature is usually set for machines with multiple lasers to minimize the greater heat loss due to the presence of multiple lasers.

5. Conclusions

In the additive manufacturing L-PBF process, optimizing the print process parameters and developing conduction zones in the laser power vs. scanning speed parameter space are critical to meeting production quality, productivity, and volume goals. In this work, we propose using an ML approach during process parameter development to predict the melt pool dimensions as a function of parameters less studied in the literature (i.e., layer thickness and building platform temperature). Using a supervised machine learning model for regression with the KNIME Analytics Platform permits the identification of the most appropriate model able to synthesize the experimental data on melt pool shape obtained with a large campaign on an Inconel 718 superalloy. This makes it possible to speed up the development of the process parameters for Inconel 718 alloy over the large layer thickness range of 30 to 90 microns. This aspect is of remarkable interest because it could make the L-PBF process more productive and make it easier to use printing strategies such as hull bulk. Furthermore, this ML regression model turns out to have good predictability for the keyhole and lack-of-fusion melting regimes, allowing for a very accurate definition of the Inconel 718 printability map as a function of the layer thickness and building platform temperature variation. The model can predict with high accuracy the melt pool depth and the configurations in which melt pool formation is governed by the keyhole melting regime, which is difficult to model physically. All these advantages permit improving the accuracy and efficiency of the printability map definition for a material printed using the L-PBF technique. Further research activities could be useful to investigate multiple levels of preheating temperature and to evaluate other superalloys. In future works, it could also be interesting to evaluate the robustness of the proposed approach with different superalloys and different machines, as well as with other process parameters not evaluated in this paper, in order to develop a more generalized ML model.

Author Contributions

Conceptualization, N.B., A.G., A.P. and M.P.; Data curation, N.B., A.G. and A.P.; Formal analysis, N.B. and A.G.; Funding acquisition, I.G. and P.C.; Investigation, N.B., A.G. and A.P.; Methodology, N.B., A.G. and M.P.; Project administration, I.G.; Resources, M.P.; Software, N.B. and A.G.; Supervision, I.G., G.A. and P.C.; Validation, M.P. and I.G.; Visualization, N.B. and A.G.; Writing—original draft, N.B., A.G., A.P. and M.P.; Writing—review and editing, N.B., A.G., M.P. and G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Summarized data are contained within the article. The raw data are available on request from the corresponding author after obtaining the permission of Baker Hughes, Nuovo Pignone.

Conflicts of Interest

Author Marco Palladino and Iacopo Giovannetti were employed by the company Baker Hughes, Nuovo Pignone. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. ISO/ASTM 52900; Additive Manufacturing—General Principles—Fundamentals and Vocabulary. ISO: Geneva, Switzerland, 2021.
  2. Singh, S.N.; Chowdhury, S.; Nirsanametla, Y.; Deepati, A.K.; Prakash, C.; Singh, S.; Wu, L.Y.; Zheng, H.Y.; Pruncu, C. A Comparative Analysis of Laser Additive Manufacturing of High Layer Thickness Pure Ti and Inconel 718 Alloy Materials Using Finite Element Method. Materials 2021, 14, 876. [Google Scholar] [CrossRef] [PubMed]
  3. Metelkova, J.; Kinds, Y.; Kempen, K.; de Formanoir, C.; Witvrouw, A.; Van Hooreweder, B. On the influence of laser defocusing in Selective Laser Melting of 316L. Addit. Manuf. 2018, 23, 161–169. [Google Scholar] [CrossRef]
  4. Makona, N.W.; Yadroitsava, I.; Moller, H.; Tlotleng, M.; Yadroitsev, I. Evaluation of single tracks of 17-4PH steel manufactured at different power densities and scanning speeds by selective laser melting. S. Afr. J. Ind. Eng. 2016, 27, 210–218. [Google Scholar] [CrossRef]
  5. Oliveira, J.P.; LaLonde, A.D.; Ma, J. Processing parameters in laser powder bed fusion metal additive manufacturing. Mater. Des. 2020, 193, 108762. [Google Scholar] [CrossRef]
  6. Ökten, K.; Biyikŏglu, A. Development of thermal model for the determination of SLM process parameters. Opt. Laser Technol. 2021, 137, 106825. [Google Scholar] [CrossRef]
  7. Koutiri, I.; Pessard, E.; Peyre, P.; Amlou, O.; Terris, D.T. Influence of SLM process parameters on the surface finish, porosity rate and fatigue behavior of as-built Inconel 625 parts. J. Mater. Process. Technol. 2018, 255, 536–546. [Google Scholar] [CrossRef]
  8. Wang, Z.; Xiao, Z.; Tse, Y.; Huang, C.; Zhang, W. Optimization of processing parameters and establishment of a relationship between microstructure and mechanical properties of SLM titanium alloy. Opt. Laser Technol. 2019, 112, 159–167. [Google Scholar] [CrossRef]
  9. Khorasani, A.M.; Gibson, I.; Awan, U.S.; Ghaderi, A. The effect of SLM process parameters on density, hardness, tensile strength and surface quality of Ti-6Al-4V. Addit. Manuf. 2019, 25, 176–186. [Google Scholar] [CrossRef]
  10. Tian, Y.; Tomus, D.; Rometsch, P.; Wu, X. Influences of processing parameters on surface roughness of Hastelloy X produced by selective laser melting. Addit. Manuf. 2017, 13, 103–112. [Google Scholar] [CrossRef]
  11. Ning, J.; Wang, W.; Zamorano, B.; Liang, S.Y. Analytical modeling of lack-of-fusion porosity in metal additive manufacturing. Appl. Phys. 2019, 125, 797. [Google Scholar] [CrossRef]
  12. Mukherjee, T.; DebRoy, T. Mitigation of lack of fusion defects in powder bed fusion additive manufacturing. J. Manuf. Process. 2018, 36, 442–449. [Google Scholar] [CrossRef]
  13. Johnson, L.; Mahmoudi, M.; Zhang, B.; Seede, R.; Huang, X.; Maier, J.T.; Maier, H.J.; Karaman, I.; Elwany, A.; Arróyave, R. Assessing printability maps in additive manufacturing of metal alloys. Acta Mater. 2019, 176, 199–210. [Google Scholar] [CrossRef]
  14. Tenbrock, C.; Fischer, F.G.; Wissenbach, K.; Schleifenbaum, J.H.; Wagenblast, P.; Meiners, W.; Wagner, J. Influence of key-hole and conduction mode melting for top-hat shaped beam profiles in laser powder bed fusion. J. Mater. Process. Technol. 2020, 278, 116514. [Google Scholar] [CrossRef]
  15. King, W.A.; Barth, H.D.; Castillo, V.M.; Gallegos, G.F.; Gibbs, J.W.; Hahn, D.E.; Kamath, C.; Rubenchik, A.M. Observation of keyhole-mode laser melting in laser powder-bed fusion additive manufacturing. J. Mater. Process. Technol. 2014, 214, 2915–2925. [Google Scholar] [CrossRef]
  16. Giorgetti, A.; Baldi, N.; Palladino, M.; Ceccanti, F.; Arcidiacono, G.; Citti, P. A Method to Optimize Parameters Development in L-PBF Based on Single and Multitracks Analysis: A Case Study on Inconel 718 Alloy. Metals 2023, 13, 306. [Google Scholar] [CrossRef]
  17. Oliveira, J.P.; Santos, T.G.; Miranda, R.M. Revisiting fundamental welding concepts to improve additive manufacturing: From theory to practice. Prog. Mater. Sci. 2020, 107, 100590. [Google Scholar] [CrossRef]
  18. Mukherjee, T.; Zuback, J.S.; De, A.; DebRoy, T. Printability of alloys for additive manufacturing. Sci. Rep. 2016, 6, 19717. [Google Scholar] [CrossRef]
  19. Sow, M.C.; De Terris, T.; Castelnau, O.; Hamouche, Z.; Coste, F.; Fabbro, R.; Peyre, P. Influence of beam diameter on Laser Powder Bed Fusion (L-PBF) process. Addit. Manuf. 2020, 36, 101532. [Google Scholar] [CrossRef]
  20. Grünewald, J.; Gehringer, F.; Schmöller, M.; Wudy, K. Influence of ring-shaped beam profiles on process stability and productivity in laser-based powder bed fusion of AISI 316L. Metals 2021, 11, 1989. [Google Scholar] [CrossRef]
  21. Rasch, M.; Roider, C.; Kohl, S.; Strauß, J.; Maurer, N.; Nagulin, K.Y.; Schmidt, M. Shaped laser beam profiles for heat conduction welding of aluminium-copper alloys. Opt. Lasers Eng. 2019, 115, 179–189. [Google Scholar] [CrossRef]
  22. Baldi, N.; Giorgetti, A.; Palladino, M.; Giovannetti, I.; Arcidiacono, G.; Citti, P. Study on the Effect of Inter-Layer Cooling Time on Porosity and Melt Pool in Inconel 718 Components Processed by Laser Powder Bed Fusion. Materials 2023, 16, 3920. [Google Scholar] [CrossRef] [PubMed]
  23. Zhang, Z.; Liu, Z.; Wu, D. Prediction of melt pool temperature in directed energy deposition using machine learning. Addit. Manuf. 2021, 37, 101692. [Google Scholar] [CrossRef]
  24. Lee, S.; Peng, J.; Shin, D.; Choi, Y.S. Data analytics approach for melt-pool geometries in metal additive manufacturing. Sci. Technol. Adv. Mater. 2019, 20, 972–978. [Google Scholar] [CrossRef] [PubMed]
  25. Ogoke, F.; Farimani, A.B. Thermal control of laser powder bed fusion using deep reinforcement learning. Addit. Manuf. 2021, 46, 102033. [Google Scholar] [CrossRef]
  26. Tapia, G.; Khairallah, S.; Matthews, M.; King, W.E.; Elwany, A. Gaussian process based surrogate modeling framework for process planning in laser powder-bed fusion additive manufacturing of 316L stainless steel. Int. J. Adv. Manuf. Technol. 2018, 94, 3591–3603. [Google Scholar] [CrossRef]
  27. Scime, L.; Beuth, J. Using machine learning to identify in-situ melt pool signatures indicative of flaw formation in a laser powder bed fusion additive manufacturing process. Addit. Manuf. 2019, 25, 151–165. [Google Scholar] [CrossRef]
  28. Yuan, B.; Guss, G.M.; Wilson, A.C.; Hau-Riege, S.P.; DePond, P.J.; McMains, S.; Matthews, M.J.; Giera, B. Machine-learning-based monitoring of laser powder bed fusion. Sci. Technol. Adv. Mater. 2018, 3, 1800136. [Google Scholar] [CrossRef]
  29. Akbari, P.; Ogoke, F.; Kao, N.Y.; Meidani, K.; Yeh, C.Y.; Lee, W.; Farimani, A.B. MeltpoolNet: Melt pool characteristic prediction in Metal Additive Manufacturing using machine learning. Addit. Manuf. 2022, 55, 102817. [Google Scholar] [CrossRef]
  30. Childs, T.H.C.; Hauser, C.; Badrossamay, M. Mapping and Modelling Single Scan Track Formation in Direct Metal Selective Laser Melting. CIRP Ann. 2004, 53, 191–194. [Google Scholar] [CrossRef]
  31. Kappes, B.; Moorthy, S.; Drake, D.; Geerlings, H.; Stebner, A. Machine learning to optimize additive manufacturing parameters for laser powder bed fusion of Inconel 718. In Proceedings of the 9th International Symposium on Superalloy 718 Derivatives: Energy, Aerospace, and Industrial Applications, Pittsburgh, PA, USA, 3–6 June 2018; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 595–610. [Google Scholar]
  32. Mahmoud, D.; Magolon, M.; Boer, J.; Elbestawi, M.A.; Mohammadi, M.G. Applications of machine learning in process monitoring and controls of L-PBF additive manufacturing: A review. Appl. Sci. 2021, 11, 11910. [Google Scholar] [CrossRef]
  33. Leicht, A.; Fischer, M.; Klement, U.; Nyborg, L.; Hryha, E. Increasing the Productivity of Laser Powder Bed Fusion for Stainless Steel 316L through Increased Layer Thickness. J. Mater. Eng. Perform. 2021, 30, 575–584. [Google Scholar] [CrossRef]
  34. De Formanoir, C.; Paggi, U.; Colebrants, T.; Thijs, L.; Li, G.; Vanmeensel, K.; Van Hooreweder, B. Increasing the productivity of laser powder bed fusion: Influence of the hull-bulk strategy on part quality, microstructure and mechanical performance of Ti-6Al-4V. Addit. Manuf. 2020, 33, 101129. [Google Scholar] [CrossRef]
  35. Shoukr, D.; Morcos, P.; Sundermann, T.; Dobrowolski, T.; Yates, C.; Jain, J.R.; Arróyave, R.; Karaman, I.; Elwany, A. Influence of layer thickness on the printability of nickel alloy 718: A systematic process optimization framework. Addit. Manuf. 2023, 73, 103646. [Google Scholar] [CrossRef]
  36. Baldi, N.; Giorgetti, A.; Palladino, M.; Giovannetti, I.; Arcidiacono, G.; Citti, P. Study on the Effect of Preheating Temperatures on Melt Pool Stability in Inconel 718 Components Processed by Laser Powder Bed Fusion. Metals 2023, 13, 1792. [Google Scholar] [CrossRef]
  37. Chen, Q.; Zhao, Y.; Strayer, S.; Zhao, Y.; Aoyagi, K.; Koizumi, Y.; Chiba, A.; Xiong, W.; To, A.C. Elucidating the effect of preheating temperature on melt pool morphology variation in Inconel 718 laser powder bed fusion via simulation and experiment. Addit. Manuf. 2021, 37, 101642. [Google Scholar] [CrossRef]
  38. Panahi, N.; Åsberg, M.; Oikonomou, C.; Krakhmalev, P. Effect of preheating temperature on the porosity and micro-structure of martensitic hot work tool steel manufactured with L-PBF. Procedia CIRP 2022, 111, 166–170. [Google Scholar] [CrossRef]
  39. Polozov, I.; Sufiiarov, V.; Kantyukov, A.; Razumov, N.; Goncharov, I.; Makhmutov, T.; Silin, A.; Kim, A.; Starikov, K.; Shamshurin, A.; et al. Microstructure, densification, and mechanical properties of titanium intermetallic alloy manufactured by laser powder bed fusion additive manufacturing with high-temperature preheating using gas atomized and mechanically alloyed plasma spheroidized powders. Addit. Manuf. 2020, 34, 101374. [Google Scholar] [CrossRef]
  40. Li, S.; Xiao, H.; Liu, K.; Xiao, W.; Li, Y.; Han, X.; Song, J.M.L. Melt-pool motion, temperature variation and dendritic morphology of Inconel 718 during pulsed- and continuous-wave laser additive manufacturing: A comparative study. Mater. Des. 2017, 119, 351–360. [Google Scholar] [CrossRef]
  41. Makona, N.W.; Yadroitsava, I.; Moller, H.; Yadroitsev, I. Characterization of 17-4PH single tracks produced at different parametric conditions towards increased productivity of LPBF systems—The effect of laser power and spot size upscaling. Metals 2018, 8, 475. [Google Scholar] [CrossRef]
  42. Guo, Y.; Jia, L.; Kong, B.; Wang, N.; Zhang, H. Single track and single layer formation in selective laser melting of niobium solid solution alloy. Chin. J. Aeronaut. 2018, 31, 860–866. [Google Scholar] [CrossRef]
  43. Shrestha, S.; Chou, K. Single track scanning experiment in laser powder bed fusion process. Procedia Manuf. 2018, 26, 857–864. [Google Scholar] [CrossRef]
  44. Balbaa, M.; Mekhiel, S.; Elbestawi, M.; McIsaac, J. On Selective laser melting of Inconel 718: Densification, surface roughness, and residual stresses. Mater. Des. 2020, 193, 108818. [Google Scholar] [CrossRef]
  45. Yadroitsava, I.; Els, J.; Booysen, G.; Yadroitsev, I. Peculiarities of single track formation from Ti6AL4V alloy at different laser power densities by selective laser melting. S. Afr. J. Ind. Eng. 2015, 26, 86–95. [Google Scholar] [CrossRef]
  46. Zheng, H.; Wang, Y.; Xie, Y.; Yang, S.; Hou, R.; Ge, Y.; Lang, L.; Gong, S.; Li, H. Observation of Vapor Plume Behavior and Process Stability at Single-Track and Multi-Track Levels in Laser Powder Bed Fusion Regime. Metals 2021, 11, 937. [Google Scholar] [CrossRef]
  47. Dong, Z.; Liu, Y.; Wen, W.; Ge, J.; Liang, J. Effect of Hatch Spacing on Melt Pool and As-built Quality During Selective Laser Melting of Stainless Steel: Modeling and Experimental Approaches. Materials 2019, 12, 50. [Google Scholar] [CrossRef]
  48. Caiazzo, F.; Alfieri, V.; Casalino, G. On the Relevance of Volumetric Energy Density in the Investigation of Inconel 718 Laser Powder Bed Fusion. Materials 2020, 13, 538. [Google Scholar] [CrossRef]
  49. Li, Y.; Založnik, M.; Zollinger, J.; Dembinski, L.; Mathieu, M. Effects of the powder, laser parameters and surface conditions on the molten pool formation in the selective laser melting of IN718. J. Mater. Process. Technol. 2021, 289, 116930. [Google Scholar] [CrossRef]
  50. Coen, V.; Goossens, L.; Van Hooreweder, B. Methodology and experimental validation of analytical melt pool models for laser powder bed fusion. J. Mater. Process. Technol. 2022, 304, 117547. [Google Scholar] [CrossRef]
  51. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
Figure 1. Specimen geometry with an indication of the single track position and cutting plane distance from the edge of the specimen.
Figure 2. Machine learning workflow in KNIME Analytics Platform.
Figure 3. Comparison of the gradient boosted trees model prediction results (orange line) and the physical results for melt pool depth (blue line). The validation of the ML regression model is carried out on 20% of randomly selected data from the dataset, in this case using 79 randomly extracted physical observations.
Figure 4. Comparison of the gradient boosted trees model prediction results (orange line) and the melt pool width’s physical results (blue line). The validation of the ML regression model is carried out on 20% of randomly selected data from the dataset, in this case using 79 randomly extracted physical observations.
Table 1. Process parameters tested as a function of samples (layer thickness: 30 µm).
Sample | Scanning Speed (mm/s) | Laser Power (W) | Building Platform Temperature (°C)
1.1 | 760 | 190 | 170
1.2 | 960 | 190 | 170
1.3 | 1160 | 190 | 170
1.4 | 1360 | 190 | 170
1.5 | 1560 | 190 | 170
1.6 | 760 | 235 | 170
1.7 | 960 | 235 | 170
1.8 | 1160 | 235 | 170
1.9 | 1360 | 235 | 170
1.10 | 1560 | 235 | 170
1.11 | 760 | 280 | 170
1.12 | 960 | 280 | 170
1.13 | 1160 | 280 | 170
1.14 | 1360 | 280 | 170
1.15 | 1560 | 280 | 170
1.16 | 760 | 325 | 170
1.17 | 960 | 325 | 170
1.18 | 1160 | 325 | 170
1.19 | 1360 | 325 | 170
1.20 | 1560 | 325 | 170
1.21 | 760 | 370 | 170
1.22 | 960 | 370 | 170
1.23 | 1160 | 370 | 170
1.24 | 1360 | 370 | 170
1.25 | 1560 | 370 | 170
Table 2. Process parameters tested as a function of samples. The numeration 2.1 to 2.9 refers to the samples printed with a building platform temperature of 170 °C, while the numeration 2.10 to 2.18 refers to the samples printed with a building platform temperature of 80 °C (layer thickness: 60 µm).
Sample | Scanning Speed (mm/s) | Laser Power (W) | Building Platform Temperature (°C)
2.1, 2.10 | 1000 | 280 | 170, 80
2.2, 2.11 | 850 | 319 | 170, 80
2.3, 2.12 | 1150 | 319 | 170, 80
2.4, 2.13 | 748 | 375 | 170, 80
2.5, 2.14 | 1000 | 375 | 170, 80
2.6, 2.15 | 1252 | 375 | 170, 80
2.7, 2.16 | 850 | 431 | 170, 80
2.8, 2.17 | 1150 | 431 | 170, 80
2.9, 2.18 | 1000 | 469 | 170, 80
Table 3. Process parameters tested as a function of samples (layer thickness: 90 µm).
Sample | Scanning Speed (mm/s) | Laser Power (W) | Building Platform Temperature (°C)
3.1 | 950 | 310 | 170
3.2 | 1100 | 310 | 170
3.3 | 1250 | 310 | 170
3.4 | 1400 | 310 | 170
3.5 | 1550 | 310 | 170
3.6 | 950 | 344 | 170
3.7 | 1100 | 344 | 170
3.8 | 1250 | 344 | 170
3.9 | 1400 | 344 | 170
3.10 | 1550 | 344 | 170
3.11 | 950 | 378 | 170
3.12 | 1100 | 378 | 170
3.13 | 1250 | 378 | 170
3.14 | 1400 | 378 | 170
3.15 | 1550 | 378 | 170
3.16 | 950 | 412 | 170
3.17 | 1100 | 412 | 170
3.18 | 1250 | 412 | 170
3.19 | 1400 | 412 | 170
3.20 | 1550 | 412 | 170
3.21 | 950 | 446 | 170
3.22 | 1100 | 446 | 170
3.23 | 1250 | 446 | 170
3.24 | 1400 | 446 | 170
3.25 | 1550 | 446 | 170
Table 4. Melt pool depth and width tracking table (layer thickness: 30 µm).
Sample | Scanning Speed (mm/s) | Laser Power (W) | Depth Avg. (µm) | Depth SD (µm) | Width Avg. (µm) | Width SD (µm)
1.1 | 760 | 190 | 69.6 | 5.5 | 152.4 | 5.2
1.2 | 960 | 190 | 41.8 | 7.4 | 123.4 | 4.8
1.3 | 1160 | 190 | 29.4 | 6.3 | 107.2 | 6.5
1.4 | 1360 | 190 | 15.2 | 4.1 | 83.6 | 15.0
1.5 | 1560 | 190 | 10.2 | 9.5 | 49.6 | 46.9
1.6 | 760 | 235 | 122.2 | 5.2 | 187.8 | 15.1
1.7 | 960 | 235 | 78.4 | 9.8 | 135.4 | 9.2
1.8 | 1160 | 235 | 61.6 | 10.9 | 116.4 | 6.1
1.9 | 1360 | 235 | 51.2 | 4.7 | 108.0 | 2.4
1.10 | 1560 | 235 | 35.2 | 6.1 | 100.8 | 6.9
1.11 | 760 | 280 | 145.8 | 10.9 | 200.0 | 4.8
1.12 | 960 | 280 | 112.6 | 5.7 | 171.8 | 4.1
1.13 | 1160 | 280 | 74.8 | 7.9 | 133.0 | 5.6
1.14 | 1360 | 280 | 54.8 | 8.0 | 108.6 | 1.7
1.15 | 1560 | 280 | 43.0 | 5.0 | 111.2 | 8.7
1.16 | 760 | 325 | 199.0 | 17.0 | 196.0 | 15.2
1.17 | 960 | 325 | 135.0 | 10.4 | 182.4 | 9.4
1.18 | 1160 | 325 | 101.2 | 2.9 | 156.5 | 9.0
1.19 | 1360 | 325 | 81.2 | 4.1 | 133.6 | 9.3
1.20 | 1560 | 325 | 69.2 | 3.2 | 126.8 | 7.9
1.21 | 760 | 370 | 214.6 | 9.9 | 207.8 | 15.2
1.22 | 960 | 370 | 154.0 | 6.4 | 200.8 | 8.5
1.23 | 1160 | 370 | 115.0 | 7.2 | 162.8 | 6.4
1.24 | 1360 | 370 | 71.7 | 3.8 | 132.0 | 11.2
1.25 | 1560 | 370 | 75.0 | 6.3 | 130.0 | 9.5
Table 5. Melt pool depth and width tracking table (layer thickness: 60 µm). The numeration 2.1 to 2.9 refers to the samples printed with a building platform temperature of 170 °C, while the numeration 2.10 to 2.18 refers to the samples printed with a building platform temperature of 80 °C.
Sample | Scanning Speed (mm/s) | Laser Power (W) | Depth Avg. (µm) | Depth SD (µm) | Width Avg. (µm) | Width SD (µm)
2.1 | 1000 | 280 | 74.8 | 5.5 | 142.1 | 5.7
2.2 | 850 | 319 | 114.2 | 11.1 | 152.0 | 12.1
2.3 | 1150 | 319 | 75.1 | 5.1 | 141.0 | 7.3
2.4 | 748 | 375 | 192.8 | 3.1 | 166.0 | 12.2
2.5 | 1000 | 375 | 118.2 | 6.6 | 151.7 | 6.8
2.6 | 1252 | 375 | 93.8 | 8.7 | 151.6 | 7.1
2.7 | 850 | 431 | 193.0 | 8.3 | 167.3 | 14.9
2.8 | 1150 | 431 | 133.4 | 12.7 | 160.2 | 5.8
2.9 | 1000 | 469 | 153.4 | 8.6 | 177.7 | 7.4
2.10 | 1000 | 280 | 71.4 | 7.6 | 142.6 | 4.1
2.11 | 850 | 319 | 118.8 | 9.0 | 153.4 | 5.3
2.12 | 1150 | 319 | 66.1 | 18.6 | 128.4 | 12.0
2.13 | 748 | 375 | 175.6 | 5.7 | 165.2 | 8.1
2.14 | 1000 | 375 | 122.6 | 13.4 | 155.5 | 5.3
2.15 | 1252 | 375 | 83.4 | 14.1 | 138.7 | 12.8
2.16 | 850 | 431 | 189.5 | 3.6 | 165.8 | 9.1
2.17 | 1150 | 431 | 110.3 | 9.0 | 146.3 | 4.1
2.18 | 1000 | 469 | 164.3 | 7.3 | 165.0 | 12.4
Table 6. Melt pool depth and width tracking table (layer thickness: 90 µm).
Sample | Scanning Speed (mm/s) | Laser Power (W) | Depth Avg. (µm) | Depth SD (µm) | Width Avg. (µm) | Width SD (µm)
3.1 | 950 | 310 | 89.0 | 3.6 | 161.3 | 6.4
3.2 | 1100 | 310 | 68.0 | 7.9 | 126.7 | 4.2
3.3 | 1250 | 310 | 45.3 | 9.0 | 113.0 | 8.5
3.4 | 1400 | 310 | 23.0 | 9.5 | 96.3 | 13.6
3.5 | 1550 | 310 | 9.7 | 7.6 | 57.7 | 49.4
3.6 | 950 | 344 | 120.7 | 5.0 | 160.7 | 9.7
3.7 | 1100 | 344 | 87.0 | 5.3 | 145.7 | 9.1
3.8 | 1250 | 344 | 77.2 | 8.8 | 120.7 | 6.0
3.9 | 1400 | 344 | 50.2 | 7.3 | 111.3 | 11.6
3.10 | 1550 | 344 | 17.7 | 4.9 | 91.3 | 13.4
3.11 | 950 | 378 | 134.7 | 10.1 | 163.7 | 4.6
3.12 | 1100 | 378 | 97.0 | 9.0 | 146.3 | 9.1
3.13 | 1250 | 378 | 88.0 | 5.6 | 119.0 | 3.6
3.14 | 1400 | 378 | 57.7 | 8.5 | 110.0 | 9.2
3.15 | 1550 | 378 | 48.3 | 2.5 | 112.7 | 5.8
3.16 | 950 | 412 | 153.7 | 14.6 | 165.7 | 7.8
3.17 | 1100 | 412 | 110.3 | 19.6 | 147.7 | 6.8
3.18 | 1250 | 412 | 98.7 | 17.8 | 136.7 | 14.6
3.19 | 1400 | 412 | 66.0 | 13.0 | 117.3 | 4.5
3.20 | 1550 | 412 | 50.0 | 15.4 | 106.3 | 9.0
3.21 | 950 | 446 | 159.2 | 19.3 | 165.4 | 17.3
3.22 | 1100 | 446 | 115.3 | 11.1 | 148.0 | 14.8
3.23 | 1250 | 446 | 102.6 | 17.9 | 148.8 | 19.8
3.24 | 1400 | 446 | 84.7 | 12.6 | 115.3 | 16.7
3.25 | 1550 | 446 | 73.0 | 19.5 | 115.3 | 13.2
Table 9. Rosenthal model statistical evaluation in the investigated case studies.
Model | R2 | MAE
Depth | 0.82 | 37.71
Width | 0.54 | 108.9