Article

A Method for Predicting the Creep Rupture Life of Small-Sample Materials Based on Parametric Models and Machine Learning Models

College of Aerospace Engineering, Chongqing University, Chongqing 400044, China
*
Author to whom correspondence should be addressed.
Materials 2023, 16(20), 6804; https://doi.org/10.3390/ma16206804
Submission received: 17 September 2023 / Revised: 12 October 2023 / Accepted: 13 October 2023 / Published: 22 October 2023

Abstract

In view of the differences in the applicability and prediction ability of different creep rupture life prediction models, we propose a creep rupture life prediction method in this paper. Various time–temperature parametric models, machine learning models, and a new method combining time–temperature parametric models with machine learning models are used to predict the creep rupture life of a small-sample material. The prediction accuracy of each model is quantitatively compared using model evaluation indicators (RMSE, MAPE, R²), and the output values of the most accurate model are used as the output values of the prediction method. The prediction method not only improves the applicability and accuracy of creep rupture life predictions but also quantifies the influence of each input variable on creep rupture life through the machine learning model. A new method is proposed in order to effectively take advantage of both advanced machine learning models and classical time–temperature parametric models. Parametric equations of creep rupture life, stress, and temperature are obtained using different time–temperature parametric models; then, creep rupture life data, obtained via these equations under other temperature and stress conditions, are used to expand the training set data of different machine learning models. By expanding the data in different intervals, the problem of the low accuracy of machine learning models for small-sample materials is addressed.

1. Introduction

The creep behavior of materials is of great concern to engineers when designing and evaluating materials for use in high-stress or high-temperature environments [1,2,3,4]. The term ‘creep’ describes a phenomenon in which, under certain temperature and stress conditions, a material slowly undergoes plastic deformation over time [5]. When a material is in a high-temperature environment, creep is more pronounced. Unlike brittle fracture, creep does not occur suddenly under the action of stress; on the contrary, strain accumulates slowly under long-term stress action. As the creep process develops, excessive plastic deformation accumulates in material components, leading to damage, failure, and even serious accidents [6,7]. Creep fracture is one of the principal failure modes of turbine blades in high-temperature environments. Creep leads to excessive plastic deformation of the blade and causes fracturing [8,9]. When the reactor cooling system of a nuclear power plant is heated up and pressurized, the excessively high temperature and pressure may cause creep failure in some positions of the steam generator heat transfer tube, causing the heat transfer tube to rupture. Such occurrences lead to the leakage of radioactive substances from the containment vessel and cause serious accidents [10,11,12,13]. In general, the occurrence of creep is gradual, and its outcome is always destructive. The prediction of the creep rupture life of materials is a significant problem in the field of engineering safety, and it is urgent to improve the reliability and accuracy of the prediction of material creep rupture life.
The Larson–Miller model is a time–temperature parametric model based on data fitting. The Larson–Miller model is often used in engineering to predict the creep rupture life of materials [14,15,16,17,18,19,20,21,22,23]. Recently, some researchers used machine learning models to predict the creep rupture life of materials [24,25,26,27,28,29]. Both the time–temperature parametric model and the machine learning model possess unique advantages when predicting material creep rupture life.
Because the Larson–Miller parametric model is simple, easy to use, and highly accurate, it has attracted the attention of many researchers. Some researchers have used the Larson–Miller parametric model to study the creep properties and high-temperature creep behavior of various alloys. Kim et al. [14] predicted the long-term creep life of Gr.91 steel using the Larson–Miller parametric model and carried out reliability assessments. Niu et al. [15] developed a model for predicting the creep failure time and failure probability of heat transfer tube materials in nuclear power plants based on the Larson–Miller parametric model in order to study the risk of accidents potentially arising from high-temperature creep and improve the tube material’s ability to deal with serious accidents. Loghman et al. [16] calculated the creep damage of a thick-walled reactor made of 316 austenitic stainless steel using the Larson–Miller parametric model and evaluated its remaining life. Lee et al. [17] combined the data analysis method with the Larson–Miller parametric model to predict the creep rupture life of 2.25 Cr and 9~12% Cr ferritic steels. Pavan et al. [18] evaluated the creep rupture life of nickel-based superalloys from superheater coils in supercritical power plants using the Larson–Miller parametric model. Render et al. [19] predicted the creep rupture life of Inconel 740 alloy via the use of the Larson–Miller parametric model. Shi et al. [20] verified the high accuracy of the Larson–Miller parametric model in predicting the creep rupture life of various superalloys, including superalloys DD6, CMSX-4, CMSX-2, SC7-14-6, and Alloy-454. Cedro et al. [21] extrapolated the creep rupture life of Incoloy 800 alloy and 304H stainless steels using the Larson–Miller parametric model. Based on the experimental results of creep, Huang et al. [22] extrapolated creep rupture stress corresponding to the 100,000 h creep life of martensitic heat-resistant steel using the Larson–Miller parametric model, Monkman–Grant method, Norton power law, and creep damage tolerance. Sourabh et al. [23] predicted the creep rupture life of nickel-based 690 superalloys using the Larson–Miller parametric model. The authors further studied the high-temperature creep behavior of nickel-based 690 superalloys in the temperature range of 800 °C to 1000 °C.
Some researchers have tried to predict the creep rupture life of some alloys with the help of machine learning models, finding that some machine learning models have high prediction accuracy in terms of life prediction. Zhang et al. [24] predicted the creep fracture life of 316 austenitic stainless steel using machine learning models (Gaussian process regression model, random forest model, support vector machine model, and shallow neural network model) and a deep learning model (deep neural network model). Wang et al. [25] converted the creep data of Cr-Mo steel into Larson–Miller parameters and other time–temperature parameters, and then predicted the creep rupture life of Cr-Mo steel using different models: the linear regression model, random gradient descent model, multi-layer perceptron model, and support vector machine model. Tan et al. [26] proposed an integrated model coupled with Larson–Miller parameters and predicted a creep rupture life of 9% Cr martensitic heat-resistant steel through individual machine learning models (linear regression, support vector machine, and artificial neural network models) and integrated learning models, evaluating the prediction accuracy of each model. He et al. [27] predicted the creep fracture behavior of austenitic heat-resistant steel Sanicro 25 using a soft-constrained machine learning model. Xiang et al. [28] predicted the creep rupture life of Fe-Cr-Ni heat-resistant alloy using a deep learning model. Zhu et al. [29] predicted the properties of GH4169D alloy via comparison with GH4169 alloy. Further, the authors predicted the high-temperature creep rupture life using the low-temperature creep rupture life of GH4169 and GH4169D alloys. The prediction accuracy was higher than 90%.
In addition to predicting the creep rupture life of alloys via machine learning models, researchers have also used such methods to study the effects of factors related to the creep properties of alloys. Liu et al. [30] developed a divide-and-conquer self-adaptive (DCSA) machine learning model to take into account not only alloy composition, test temperature, and test stress, but also the microscopic structural parameters related to the creep process, e.g., stacking-fault energy, lattice parameters, and diffusion coefficient. They predicted the creep rupture life of Ni-based single-crystal superalloys and investigated the effect of microstructure on the creep properties of Ni-based single-crystal superalloys. Kong et al. [31] optimized the machine learning model using a genetic algorithm. These authors then predicted the creep rupture life of 9% Cr alloy and studied the relationship between the composition and creep properties of 9% Cr alloy. Han et al. [32] predicted the creep rupture life of nickel-based single-crystal superalloys using machine learning models and studied the effects of different alloy elements on the creep life of nickel-based single-crystal superalloys. Khatavkar et al. [33] developed a large database of nickel-based superalloys, predicted the ultimate tensile strength, yield strength, and creep fracture life of nickel-based superalloys through machine learning models, and quantified the contribution of various characteristics to model prediction results through SHAP (Shapley additive explanations) values. Feng et al. [34] predicted the creep performance of recycled aggregate concrete through two types of models (individual machine learning and ensemble learning). The authors also analyzed feature importance and studied the effects of different input variables on the creep performance of recycled aggregate concrete based on the extreme gradient boosting model. Wang et al. [35] combined machine learning models with genetic algorithms to predict the creep rupture life of low-alloy steel, studying the effects of alloy composition and processing parameters on the creep properties of low-alloy steels for the design and development of new alloys.
These studies confirm the feasibility of using various machine learning models to predict the creep rupture life of certain materials. However, the results of the researchers’ predictions have not yet been compared with the prediction results of classical parametric models, which in fact on occasion have high accuracy in predicting the creep rupture life of materials. In this paper, the prediction accuracy of the Larson–Miller, Mason–Succop, Ge–Dorn, and Manson–Haferd parametric models and several common machine learning models are compared. The prediction ability of each model is evaluated quantitatively using three model evaluation indicators, and model selection is carried out to improve the accuracy of material creep rupture life prediction.
By converting the creep test data used for fitting into the P–lg σ coordinate system, the time–temperature parametric model can always obtain a curve prediction function that fits the test data closely. The machine learning models can train on creep test data through different algorithm theories, predict the creep rupture life under different conditions, and quantify the influence of different input variables on the output for comparison. Due to the different theories of the various models in the two methods and their different applicability to varied materials, the prediction ability of each model for different types of creep data always varies. Current research tends to focus on using only one of the two methods, seeking the model with the strongest prediction ability and applying it to the prediction of material creep rupture life. Researchers rarely account for the different applicability of each model to a variety of materials: a model with strong prediction ability for one material may not perform as well for other materials, so it is not guaranteed that a certain model always has the strongest prediction ability across a variety of materials.
In this paper, we propose a creep rupture life prediction method. Different models are used to predict the creep rupture life of materials, including several classical time–temperature parametric models and various machine learning models. Then, the prediction accuracy of each model is quantitatively compared using model evaluation indicators (RMSE, MAPE, R2), and the predicted result of the model with the strongest prediction accuracy is the output. This prediction method not only improves the applicability and accuracy of creep rupture life prediction but also quantifies the influence of each input variable on creep performance through the machine learning model.
The most common problem faced when predicting material creep rupture life with small-sample data via the use of machine learning models is the low prediction accuracy of machine learning models due to insufficient data in the training set. The amount of data in the training set is a decisive factor affecting the prediction accuracy of a machine learning model. The question of how to extend creep data reasonably and improve the prediction accuracy of material creep rupture life is an urgent problem in need of resolution. In this paper, a new method is proposed that combines the classical time–temperature parametric models with advanced machine learning models and gives full play to the advantages of the two methods. The parametric equation of creep life, stress, and temperature is obtained using different time–temperature parametric models, and then the creep life data of other conditions predicted via the equation are used to expand the training set data of different machine learning models. Through this method, the advanced machine learning model is combined with the classical time–temperature parametric model. This not only solves the problem that the machine learning model is difficult to use on small samples but also improves the prediction accuracy of the machine learning model.

2. Three Categories of Models Used in the Prediction Method

2.1. Time–Temperature Parametric Models

2.1.1. Larson–Miller Parametric Model

In 1952, Larson and Miller [36] found that, under a certain level of stress, the logarithmic creep fracture time lg t of a material tends to be linear with the inverse of temperature 1/T. Based on this law, they proposed the Larson–Miller parametric model, which converts temperature T and logarithmic fracture time lg t into the comprehensive parameter P. The comprehensive parameter P is composed of temperature T, fracture time t, and fitting parameter c_LM. In this way, the creep test data at different temperatures and stress conditions are converted into a series of points in a two-dimensional rectangular coordinate system (P–lg σ), and a cubic function curve can be obtained by fitting this series of points. Once the fitted curve is obtained, it can be used to predict the creep fracture time of the material under other temperature and stress conditions. The mathematical equations of the L-M model are as follows:
$$P(\sigma) = T\,(c_{LM} + \lg t)$$
$$\lg\sigma = a_0 + a_1 P(\sigma) + a_2 P^2(\sigma) + a_3 P^3(\sigma)$$
where t is the fracture time (h), T is the temperature (K), c_LM is a constant determined from creep test data, and P(σ) is a function of the stress σ. When the stress σ is fixed, P(σ) is a definite value and the relationship between 1/T and lg t is linear. Because the fitted lines for different stress levels all pass through the fixed point (0, −c_LM) in the 1/T–lg t plane, the value of the constant c_LM can be obtained by solving for the intercept of the linear function. The relationship between lg t and 1/T is depicted in Figure 1.
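As a rough illustration of the fitting procedure described above, the following Python sketch converts a few creep test points into Larson–Miller parameters and fits the cubic P–lg σ curve by least squares. The numerical values and the constant c_LM are placeholders for illustration only, not the data or coefficients used in this study.

```python
# A minimal sketch of the Larson-Miller fitting step described above.
# The creep data below are illustrative placeholders, and c_LM is assumed
# to have been estimated beforehand from the intercept of the lg t vs. 1/T lines.
import numpy as np

T = np.array([823.0, 848.0, 873.0, 898.0])               # temperature, K (hypothetical)
t_rupture = np.array([12000.0, 4100.0, 1500.0, 520.0])   # rupture time, h (hypothetical)
sigma = np.array([120.0, 100.0, 80.0, 60.0])             # stress, MPa (hypothetical)
c_LM = 20.0                                              # assumed L-M constant

# Comprehensive parameter P(sigma) = T * (c_LM + lg t)
P = T * (c_LM + np.log10(t_rupture))

# Fit the cubic lg(sigma) = a0 + a1*P + a2*P^2 + a3*P^3
coeffs = np.polyfit(P, np.log10(sigma), deg=3)           # returns [a3, a2, a1, a0]
print("a3, a2, a1, a0 =", coeffs)

# Predict lg(sigma) at a new (T, t) condition from the fitted curve
P_new = 873.0 * (c_LM + np.log10(3000.0))
print("predicted lg(sigma) =", np.polyval(coeffs, P_new))
```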

2.1.2. Manson–Succop Parametric Model

Manson and Succop [37] found that, under certain stress conditions, the logarithmic creep fracture time lg t of the material tends to be linear with the temperature T. Based on this observation, they proposed the Manson–Succop parametric model, which converts temperature T and logarithmic fracture time lg t into the parameter P. Similar to the L-M model, the M-S model converts the creep test data at different temperatures and stress conditions into a series of points in a two-dimensional rectangular coordinate system (P–lg σ). A cubic function curve can be obtained by fitting this series of points. This curve is then used to predict the creep fracture time of the material under other temperature and stress conditions. The mathematical equations of the M-S model are as follows:
$$P(\sigma) = \lg t - c_{MS}\,T$$
$$\lg\sigma = a_0 + a_1 P(\sigma) + a_2 P^2(\sigma) + a_3 P^3(\sigma)$$
where t is the fracture time (h), T is the temperature (K), c_MS is a constant determined from creep test data, and P(σ) is a function of the stress σ. When the stress is fixed, P(σ) is a definite value and the relationship between T and lg t is linear. The slope of the linear functions under different stress conditions is represented by c_MS. Therefore, the constant c_MS can be obtained by solving for the slope of a fitted line. The relationship between lg t and T is shown in Figure 2.

2.1.3. Ge–Dorn Parametric Model

The Ge–Dorn parametric model [38,39] asserts that, under a certain level of stress, 1/T and lg t are linearly related, and the slope of the linear functions under different stresses is c_GD. Therefore, the constant c_GD can be obtained by solving for the slope of a fitted line. The mathematical equations of the G-D model are as follows:
$$P(\sigma) = \lg t - c_{GD}/T$$
$$\lg\sigma = a_0 + a_1 P(\sigma) + a_2 P^2(\sigma) + a_3 P^3(\sigma)$$
where t is the fracture time (h), T is the temperature (K), c_GD is a constant determined from creep test data, and P(σ) is a function of the stress σ. When the stress is fixed, P(σ) is a definite value and the relationship between 1/T and lg t is linear. The slope of the linear functions under different stress conditions is c_GD; therefore, the constant c_GD can be obtained by solving for the slope of a fitted line. The relationship between lg t and 1/T is shown in Figure 3.

2.1.4. Manson–Haferd Parametric Model

Manson and Haferd [40] found that, under a certain level of stress, there is a linear relationship between T and lg t. Their research also revealed that the linear functions pass through the fixed point (T_0, lg t_0). Based on this law, they proposed the Manson–Haferd parametric model, which converts temperature T and fracture time lg t into the parameter P through their proposed equation. The mathematical equations of the M-H model are as follows:
$$P(\sigma) = \frac{\lg t - \lg t_0}{T - T_0}$$
$$\lg\sigma = a_0 + a_1 P(\sigma) + a_2 P^2(\sigma) + a_3 P^3(\sigma)$$
where t is the fracture time (h), T is the temperature (K), T_0 and lg t_0 are constants determined from creep test data, and P(σ) is a function of the stress σ. When the stress is fixed, P(σ) has a definite value and the relationship between T − T_0 and lg t − lg t_0 is linear. The values of the two constants, lg t_0 and T_0, can be obtained by finding the coordinates of the intersection of the linear functions fitted under different stress conditions. The relationship between lg t and T is shown in Figure 4.
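A minimal sketch of how the Manson–Haferd constants might be estimated in practice is given below: the lg t versus T lines fitted at two stress levels are intersected to obtain (T_0, lg t_0). The data values are made up for illustration; with more than two stress levels, a least-squares estimate of the common intersection point would normally be used.

```python
# A minimal sketch (with made-up numbers) of estimating the Manson-Haferd
# constants T0 and lg t0 as the intersection of lg t vs. T lines fitted at
# two different stress levels.
import numpy as np

# lg t vs. T data at two hypothetical stress levels
T_s1 = np.array([823.0, 848.0, 873.0]); lgt_s1 = np.log10([9000.0, 3200.0, 1100.0])
T_s2 = np.array([823.0, 848.0, 873.0]); lgt_s2 = np.log10([2500.0, 900.0, 330.0])

m1, b1 = np.polyfit(T_s1, lgt_s1, 1)   # line fitted at stress level 1
m2, b2 = np.polyfit(T_s2, lgt_s2, 1)   # line fitted at stress level 2

# Intersection of the two lines gives (T0, lg t0)
T0 = (b2 - b1) / (m1 - m2)
lgt0 = m1 * T0 + b1
print("T0 =", T0, " lg t0 =", lgt0)
```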

2.2. Machine Learning Models

Some researchers have applied different machine learning models to the task of predicting the creep rupture life of certain materials, finding that some models possess strong prediction capacities [41,42,43]. Considering the different algorithm theories and applicability of different machine learning models, this paper adopts several common machine learning models to predict the creep rupture life of materials, comparing the results with those of other methods.
The following describes the basic prediction principles and limitations of the machine learning models used in this paper. The input variables of each machine learning model are the mass fraction of different elements, the test temperature T , and the test stress σ . Additionally, the output variable is the logarithmic creep rupture time lg t of the material. By far the largest difference between machine learning models is that they train the input data using different algorithmic theories.
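The sketch below illustrates, under these assumptions, how the input and output variables could be arranged for the machine learning models: each row holds the element mass fractions together with the test temperature and stress, and the target is the logarithmic rupture time lg t. The column names and values are illustrative placeholders, not the data set used in this work.

```python
# A sketch of assembling the feature matrix (element mass fractions, test
# temperature, test stress) and the target lg t described above.
# Only a subset of the 13 input features used in the paper is shown,
# and all values are made up for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "C": [0.10, 0.11], "Si": [0.30, 0.28], "Mn": [0.45, 0.43],
    "Cr": [5.0, 5.1], "Mo": [0.50, 0.52],          # remaining elements omitted here
    "T": [823.0, 873.0],                            # test temperature, K
    "stress": [100.0, 80.0],                        # test stress, MPa
    "t_rupture": [8500.0, 1200.0],                  # rupture time, h
})

X = df.drop(columns=["t_rupture"]).to_numpy()       # input variables
y = np.log10(df["t_rupture"].to_numpy())            # output variable: lg t
print(X.shape, y.shape)
```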

2.2.1. Back-Propagation Neural Network Based on Particle Swarm Optimization (PSO-BPNN) [44,45]

The PSO-BPNN model uses a particle swarm optimization (PSO) algorithm to optimize a back-propagation neural network (BPNN), adjusting the weights and biases of the neural network, improving its training efficiency and accuracy, helping it avoid becoming trapped in local optima, and improving its global search ability. By constantly adjusting the neural network, the model can learn the complex relationships in the data and obtain accurate prediction results.
The PSO-BPNN model is sensitive to the selection and preprocessing of input features. Indeed, inappropriate feature selection will lead to the underfitting or overfitting of the model, which in turn will affect the generalization ability of the model. Since the PSO-BPNN model involves the training of a BP model and the iterative optimization searching of a PSO algorithm, it requires a two-stage training process. Indeed, the training time required is longer than that needed for a BPNN-only method. Some parameters must be set manually in the PSO-BPNN model, including the number of particles, the number of iterations, the learning rate, etc. For different problems, some parameters must be adjusted in order to adapt the model to situational specifics.

2.2.2. Back-Propagation Neural Network Based on Genetic Algorithms (GA-BPNN) [46,47,48]

The GA-BPNN model combines the genetic algorithm and the back-propagation neural network (BPNN) and uses the genetic algorithm to optimize the weight and threshold of the back-propagation neural network, overcoming the problem that back-propagation neural networks easily fall into the local optimal solution.
The GA-BPNN model performs well when applied to nonlinear problems and high-dimensional feature spaces. It can make full use of the global search characteristics of genetic algorithms and the prediction ability of back-propagation neural networks to achieve accurate prediction and strong generalization ability.
The GA-BPNN model has poor interpretability due to its complex network structure and the randomness of genetic algorithms. Similar to the limitations of the PSO-BPNN model, the GA-BPNN model involves genetic algorithm optimization and neural network model training, which takes a long time for large-scale data sets. The GA-BPNN model also has some parameters that need to be set manually, including population size, evolutionary algebra, crossover rate, mutation rate, etc.

2.2.3. Radial Basis Function Neural Network (RBFNN) [49,50,51]

The RBFNN model is a three-layer neural network model composed of an input layer, a nonlinear hidden layer, and a linear output layer. The RBFNN model is characterized by local sensing ability and global approximation ability. By mapping the input sample to the hidden layer neuron, it uses the radial basis function to measure the similarity of the input sample, transforming it into a high-dimensional feature space for modeling.
The RBFNN model calculates the final output result according to the output of the hidden layer and the corresponding weight value. The RBFNN model has a strong generalization ability and fast convergence ability and is widely used in various fields.
Although the RBFNN model has a strong nonlinear mapping ability, its parameters are often difficult to explain. Furthermore, it is difficult to explain and understand the relationship between each hidden layer and its role.

2.2.4. Random Forest (RF) [52,53,54]

An RF model is based on decision trees that use bootstrap aggregation (bagging) and random feature selection to reduce overfitting risk and improve model performance. An RF model averages or votes the predicted results of all decision trees to obtain the final prediction. For regression problems, the average value of a set of training samples is saved on the leaf nodes of each decision tree. When making a prediction, each decision tree provides a result, and the final prediction is the average of the predictions of all decision trees.
RF models can effectively reduce variance and overfitting risk by integrating prediction results from multiple decision trees. Since each decision tree is built independently, the model has strong noise resistance. RF models can also provide feature importance assessments to help analyze the degree to which a feature contributes. The model has high flexibility and robustness in practical application, being suitable for application to various data types and problems.
Although an RF model uses bootstrapping and the random selection of features to reduce overfitting, the model may still overfit if the sample size is too small or the correlation between features is too high. Because RF models comprise multiple decision trees, each trained on a set of randomly selected features, understanding and interpreting the entire model becomes a relatively complex task.

2.2.5. Support Vector Regression (SVR) [55,56]

An SVR model maps the training data onto a high-dimensional feature space and searches for a hyperplane in the high-dimensional feature space. When the eigenvalue of a new sample is given, the model maps the sample onto a high-dimensional space and predicts according to the position of the sample on the hyperplane.
An SVR model controls the complexity of the model by introducing a penalty term, thus avoiding the problem of overfitting. SVR models can deal with linear and nonlinear regression problems and adapt to different data features by selecting different kernel functions. SVR models can effectively process high-dimensional data and sample noise. Additionally, they are endowed with strong robustness.
For complex problems with multiple variables, SVR models may not be able to effectively capture the characteristics of and relationships between the data. Indeed, when the data set is large or there are many input features, SVR models require a long period of training. In addition, SVR models are sensitive to noise. As such, in a high-noise environment, SVR models may not achieve such a good performance.

2.2.6. Deep Neural Network (DNN) [57]

A DNN model is a nonlinear model that can adapt to complex data features and relationships. The model consists of multiple hidden layers, each containing multiple neurons. Each neuron is connected to all neurons in the previous layer, and an activation function is applied to each neuron. The activation function plays a role in weighting the input information and nonlinear transformation in the neural network.
A DNN model maps the input data to the corresponding output according to the combination of weight and bias and the action of the activation function. Through back-propagation and a process of updating parameters during the training process, parameters are optimized in order to improve the accuracy of the prediction results.
Due to the DNN model’s strong fitting ability, if the training data are insufficient or the training set and test set do not match, the model may overfit the training data, resulting in a poor performance in its application to the test set. DNN models are sensitive to noise and outliers in the data and are easily disturbed by them.

2.2.7. Gauss Process Regression (GPR) [58,59]

A GPR model is a non-parametric Bayesian model designed for regression problems. The prediction principle of the model is to build a Gaussian process model for the target variables in the training data and to use the model to predict new input data.
The model assumes that the target variables obey a multivariate Gaussian distribution, obtaining similarity information between the target variables via the calculation of the covariance matrix between the training data.
When a new input sample is available, the distribution of the predicted values is inferred by calculating the covariance between that sample and the training data. The entire prediction process of the model is based on the principle of Bayesian inference. By optimizing the hyperparameters of the model, it may be adjusted adaptively to better fit the data and predict the target variables of unknown samples.
The computational complexity of a GPR model increases rapidly with data scale. Because a GPR model must compute the inverse of the covariance matrix, calculating and storing this matrix becomes more difficult as the data dimension increases, and sampling and interpolating high-dimensional data also require more computing resources, which may not meet efficiency requirements.

2.2.8. Deep Belief Network (DBN) [60,61]

A DBN model is a deep learning model that makes predictions by stacking multiple RBMs (restricted Boltzmann machines) to construct a multi-layer neural network. During the training phase, a DBN model is built layer by layer through an approach centered around pre-training and fine-tuning. In the pre-training process, each layer’s RBM learns the distribution characteristics of the data. Then, the learned weights are used as inputs for the next layer’s RBM, gradually extracting features from higher-level representations.
In the fine-tuning phase, the entire network is adjusted using a back-propagation algorithm in order to minimize the prediction error on the training data. Through this hierarchical training approach, the DBN model can learn more abstract features at higher levels and has a strong non-linear modeling capability.
In a DBN model, when gradient updates are carried out using the back-propagation algorithm, the vanishing gradient problem may occur. This leads to an unstable training process and makes the network unable to converge or difficult to optimize. When a DBN model has too many layers, the gradient vanishes easily: during back-propagation, the gradient gradually becomes smaller as the number of layers increases, and when the number of layers is too large, the gradient may become so small that the network cannot update effectively.

2.3. A New Method of Predicting the Creep Rupture Life of Materials

2.3.1. A Method Combined with the Parametric Models and the Machine Learning Models

For almost any material, new or old, a similar material exists that contains roughly the same types of elements in different proportions. Although the chemical compositions of the two materials differ, their creep rupture life data can be fused via machine learning models because the two materials share the same variables, such as temperature, stress, chemical elements, etc. However, this approach has a serious shortcoming: because high-temperature creep tests are time-consuming and expensive, only very limited creep data are obtained through testing. As a result, the distribution of creep data across the various data intervals is often unbalanced, and most of the data are concentrated in a certain interval. Even though the sample size of the training set is expanded via the introduction of data from similar materials, the prediction accuracy of machine learning models is still not high enough.
In order to solve this problem, we propose a new prediction method that combines the classical time–temperature parametric models with advanced machine learning models and gives full play to the advantages of the two categories of methods. The specific idea is to reasonably expand the data in various intervals of the machine learning model training set using the time–temperature parametric model in order to balance the distribution of the data set in various intervals and further improve the applicability and prediction accuracy of machine learning models.
The four types of time–temperature parametric models used in the new method are the L-M, M-S, G-D, and M-H models. The L-M, M-S, and G-D models combine temperature T with logarithmic creep fracture time lg t through a single constant c in equations relating them to logarithmic stress lg σ. The M-H model combines temperature T with the logarithmic creep fracture time lg t using two constants, T_0 and lg t_0, to form an equation relating them to logarithmic stress lg σ. The equations of the L-M, M-S, G-D, and M-H models are shown in Equations (9)–(12), respectively.
L-M:
$$\lg\sigma = a_0 + a_1 T (c_{LM} + \lg t) + a_2 T^2 (c_{LM} + \lg t)^2 + a_3 T^3 (c_{LM} + \lg t)^3$$
M-S:
$$\lg\sigma = a_0 + a_1 (\lg t - c_{MS} T) + a_2 (\lg t - c_{MS} T)^2 + a_3 (\lg t - c_{MS} T)^3$$
G-D:
$$\lg\sigma = a_0 + a_1 (\lg t - c_{GD}/T) + a_2 (\lg t - c_{GD}/T)^2 + a_3 (\lg t - c_{GD}/T)^3$$
M-H:
$$\lg\sigma = a_0 + a_1 \frac{\lg t - \lg t_0}{T - T_0} + a_2 \left(\frac{\lg t - \lg t_0}{T - T_0}\right)^2 + a_3 \left(\frac{\lg t - \lg t_0}{T - T_0}\right)^3$$
Equations (13)–(16) can be obtained by expanding Equations (9)–(12). From Equations (13)–(15), for the L-M, M-S, and G-D models, once the values of the fitting coefficients a_3, a_2, a_1, and a_0 and the value of the constant c are determined, an equation relating logarithmic stress lg σ, logarithmic creep fracture time lg t, and temperature T is obtained. Similarly, from Equation (16), for the M-H model, once the values of the fitting coefficients a_3, a_2, a_1, a_0 and the constants T_0 and lg t_0 are determined, a cubic equation relating logarithmic stress lg σ, logarithmic creep fracture time lg t, and temperature T is obtained. The equations of the four models can be unified as Equation (17). For Equation (17), once the values of the logarithmic stress lg σ and temperature T are specified, a cubic equation in the logarithmic creep fracture time lg t results; by solving for the roots of this cubic equation, lg t is obtained. In this way, the creep fracture time under other temperature and stress conditions can be predicted from known creep test data.
Four classical time–temperature parametric models are used to obtain parametric equations of temperature T , stress σ , and creep life t . Then, the creep fracture time data of other conditions obtained from parametric equations are expanded into the training set of machine learning models to realize the combination of the time–temperature parametric models and the machine learning models.
L-M:
$$a_3 T^3 (\lg t)^3 + 3 a_3 c_{LM} T^3 (\lg t)^2 + a_2 T^2 (\lg t)^2 + 3 a_3 c_{LM}^2 T^3 \lg t + 2 a_2 c_{LM} T^2 \lg t + a_3 c_{LM}^3 T^3 + a_2 c_{LM}^2 T^2 + a_1 T \lg t + a_1 c_{LM} T + a_0 - \lg\sigma = 0$$
M-S:
$$a_3 (\lg t)^3 - 3 a_3 c_{MS} T (\lg t)^2 + a_2 (\lg t)^2 + 3 a_3 c_{MS}^2 T^2 \lg t - 2 a_2 c_{MS} T \lg t + a_1 \lg t - a_3 c_{MS}^3 T^3 + a_2 c_{MS}^2 T^2 - a_1 c_{MS} T + a_0 - \lg\sigma = 0$$
G-D:
$$a_3 (\lg t)^3 - \frac{3 a_3 c_{GD} (\lg t)^2}{T} + a_2 (\lg t)^2 + \frac{3 a_3 c_{GD}^2 \lg t}{T^2} - \frac{2 a_2 c_{GD} \lg t}{T} + a_1 \lg t - \frac{a_3 c_{GD}^3}{T^3} + \frac{a_2 c_{GD}^2}{T^2} - \frac{a_1 c_{GD}}{T} + a_0 - \lg\sigma = 0$$
M-H:
$$\frac{a_3 (\lg t)^3}{(T - T_0)^3} - \frac{3 a_3 \lg t_0 (\lg t)^2}{(T - T_0)^3} + \frac{a_2 (\lg t)^2}{(T - T_0)^2} - \frac{2 a_2 \lg t_0 \lg t}{(T - T_0)^2} + \frac{3 a_3 (\lg t_0)^2 \lg t}{(T - T_0)^3} + \frac{a_1 \lg t}{T - T_0} - \frac{a_3 (\lg t_0)^3}{(T - T_0)^3} + \frac{a_2 (\lg t_0)^2}{(T - T_0)^2} - \frac{a_1 \lg t_0}{T - T_0} + a_0 - \lg\sigma = 0$$
$$f(\lg\sigma,\ T - T_0,\ \lg t - \lg t_0) = 0, \qquad \text{L-M, M-S, G-D: } T_0 = 0 \ \text{and} \ \lg t_0 = 0$$
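The following sketch illustrates the data-expansion step for the L-M case of Equation (13): for a chosen temperature and stress, the fitted curve is rearranged into a cubic in lg t and its roots are found numerically. The coefficients a_0 to a_3 and the constant c_LM below are hypothetical placeholders; in the actual method they come from fitting the small-sample creep data.

```python
# A minimal sketch of the data-expansion step: for a chosen (T, sigma) pair,
# the fitted L-M curve is rearranged into a cubic in lg t and solved.
# The coefficients and constant below are placeholders, not fitted values.
import numpy as np

a3, a2, a1, a0 = -1.0e-13, 5.0e-9, -1.0e-4, 2.5   # hypothetical fitted coefficients
c_LM = 20.0                                       # hypothetical L-M constant

def expand_lgt(T, sigma):
    """Solve the cubic in lg t implied by lg(sigma) = sum_i a_i * [T (c_LM + lg t)]^i."""
    A = a3 * T**3
    B = 3 * a3 * c_LM * T**3 + a2 * T**2
    C = 3 * a3 * c_LM**2 * T**3 + 2 * a2 * c_LM * T**2 + a1 * T
    D = a3 * c_LM**3 * T**3 + a2 * c_LM**2 * T**2 + a1 * c_LM * T + a0 - np.log10(sigma)
    roots = np.roots([A, B, C, D])
    # keep real roots only; a physically meaningful lg t would then be selected,
    # e.g. the root lying inside the range covered by the test data
    return roots[np.isreal(roots)].real

print(expand_lgt(T=873.0, sigma=80.0))
```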
In the new "time–temperature parametric model + machine learning model" method, the basis for prediction remains the machine learning model, and the new method connects the different time–temperature parametric models with the machine learning model via parametric data expansion. Compared with the training data set of a single machine learning model, the training data set of the new method possesses an additional data type, namely, the creep data expanded using the different time–temperature parametric models (L-M, M-S, G-D, M-H). As shown in Figure 5, in this new method, the training set of the machine learning models consists of three parts. The first part comprises the small-sample creep test data of the material. The second part consists of the creep test data of a material similar to the small-sample material. The third part is the data predicted by the parametric equations obtained using the four types of time–temperature parametric models. These three types of data together form the training data sets of the machine learning models. Compared with the machine learning method or the time–temperature parametric method alone, the new approach proposed in this paper combines the two and gives full play to the advantages of both methods.
A caveat is that the third part of the data should not be extended excessively: as the proportion of data obtained from the parametric equations in the training set increases, the training results of the machine learning model converge toward the predictions of the time–temperature parametric curve. As a result, the machine learning models lose the advantage of being able to train on data from different materials together, which increases the amount of data and helps them find the relationship between inputs and output more accurately.
The time–temperature parametric model fits the small sample data through different theories and establishes the relationship between the temperature, stress, and creep rupture life of the material. The method we proposed involves introducing this relationship into the training set of the machine learning model. In addition to expanding the sample size of the training set by introducing similar material to help the machine learning model find the relationship between the temperature, stress, and creep rupture life of the small-sample material, the parametric model data extension method also provides more data for the machine learning model. This helps the machine learning model to establish the relationship between temperature, stress, and the life of small-sample materials more accurately.
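A compact sketch of how the three-part training set described above might be assembled is given below; the variable names, array shapes, and random values are assumptions for illustration only.

```python
# A sketch (with placeholder arrays) of assembling the three-part training set:
# (1) small-sample creep data, (2) creep data of a similar material, and
# (3) creep data expanded with the time-temperature parametric equations.
import numpy as np

n_features = 13                                   # 11 element fractions + T + stress
rng = np.random.default_rng(0)

X_small, y_small = rng.random((15, n_features)), rng.random(15)          # small-sample alloy
X_similar, y_similar = rng.random((220, n_features)), rng.random(220)    # similar material
X_expanded, y_expanded = rng.random((200, n_features)), rng.random(200)  # parametric expansion

X_train = np.vstack([X_small, X_similar, X_expanded])
y_train = np.concatenate([y_small, y_similar, y_expanded])
print(X_train.shape, y_train.shape)
```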

2.3.2. A New Prediction Method of Creep Rupture Life

Due to the different theories of four time–temperature parametric models, the prediction results differ, even for the same set of creep test data. As such, it is not guaranteed that one parametric model will always have the strongest prediction ability. Therefore, four different time–temperature parametric models are all used to improve the prediction accuracy and applicability of creep rupture life prediction.
When the sample size of the material is small, it is difficult to use machine learning models. Thus, it is necessary to add creep data for a material similar to the small-sample material into the training set data of the machine learning models, and then use different machine learning models for training and prediction. Commonly used machine learning models include the random forest model, Gaussian process regression model, support vector regression model, and various neural network models (deep neural network model, deep belief network model, radial basis function neural network model, etc.). Considering the differences in algorithmic theory between these machine learning models, the prediction results obtained after training with the same data are different. Therefore, the creep rupture life prediction method proposed in this paper simultaneously uses different machine learning models to obtain prediction results. Then, quantitative indicators (RMSE, R², and MAPE) are calculated using the predicted values and experimental values. As shown in Figure 6, the creep rupture life prediction method proposed in this paper utilizes different machine learning models to predict the creep rupture life of the small-sample material. The prediction accuracy of each model is evaluated by comparing their quantitative indicators (RMSE, R², and MAPE), and the output values of the model with the highest prediction accuracy are selected as the final output.
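The selection step sketched in Figure 6 can be illustrated, under simplifying assumptions, with the short script below: several regressors are trained on the same training set and the one with the lowest test-set RMSE is retained. The scikit-learn models shown are stand-ins for the eight models used in this paper, and the data arrays are random placeholders.

```python
# A sketch of the model-selection idea: train several candidate regressors on
# the same (expanded) training set and keep the one with the lowest test RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_train, y_train = rng.random((235, 13)), rng.random(235)   # placeholder training data
X_test, y_test = rng.random((6, 13)), rng.random(6)         # placeholder test data

candidates = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0),
    "GPR": GaussianProcessRegressor(),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    scores[name] = mean_squared_error(y_test, pred) ** 0.5   # RMSE

best = min(scores, key=scores.get)
print("selected model:", best, "RMSE =", scores[best])
```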

2.4. Indicators for Model Evaluation

(1)
Root-Mean-Square Error
The root-mean-square error is the standard deviation of residuals. RMSE quantifies the degree of residual dispersion, revealing how tightly experimental values cluster around predicted values. This measures the deviation of predicted values from true values. The mathematical equation for RMSE is as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}\left(y_i - \hat{y}_i\right)^2}$$
where y_i are the experimental values, ŷ_i are the predicted values, and N is the number of experimental data points.
(2)
Mean Absolute Percentage Error
The mean absolute percentage error is an indicator used to measure the prediction accuracy of the model, reflecting the percentage difference between predicted values and experimental values. The smaller the MAPE, the higher the prediction accuracy of the model will be. The mathematical equation for MAPE is as follows:
$$\mathrm{MAPE} = \frac{1}{N}\sum_{i=0}^{N-1}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$$
where y_i are the experimental values, ŷ_i are the predicted values, and N is the number of experimental data points.
(3)
Coefficient of determination
The coefficient of determination R² is used to characterize how well the model fits the variation in the data. From Equation (20), the normal range of R² is [0, 1]. The closer the value of R² is to 1, the better the fitting effect of the model will be. The mathematical equation for R² is as follows:
$$R^2 = 1 - \frac{\sum_{i=0}^{N-1}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=0}^{N-1}\left(y_i - \bar{y}\right)^2}$$
where y_i are the experimental values, ŷ_i are the predicted values, ȳ is the mean of the experimental values, and N is the number of experimental data points.
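A direct transcription of the three indicators into code is shown below, assuming y_true holds the experimental lg t values and y_pred the corresponding model predictions.

```python
# The three evaluation indicators defined above, written out explicitly.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([3.2, 3.8, 4.1])   # example lg t values (illustrative)
y_pred = np.array([3.1, 3.9, 4.0])
print(rmse(y_true, y_pred), mape(y_true, y_pred), r2(y_true, y_pred))
```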

3. Results and Discussion

3.1. Establishment of Data Sets for Model Fitting and Training

We used 21 sets of creep test data from 5Cr-0.5Mo alloy standard plate specimens in the database of the National Institute of Materials Science as small-sample data, and the alloy number of 21 sets of data in the database was STBA25 [62]. Small-sample data were divided according to a ratio of approximately 7:3, in which 15 sets of creep test data were used to fit parametric curves of 4 different time–temperature parametric models and as training sets in machine learning models. The other 6 sets of creep test data were used as test sets in order to examine the prediction accuracy of each model. In total, 220 sets of creep test data of 1Cr-0.5Mo alloy standard plate specimens in the database were used as extended data of the training set of machine learning models. The alloy number of 220 sets of data in the database was SCMV2NT [62].
Partial creep test data of the machine learning training set are shown in Table 1. The training set of the machine learning model comprised 13 input variables and 1 output variable. The input variables were the mass fraction of different elements (C, Si, Mn, P, S, Ni, Cr, Mo, Cu, Al, N), test temperature T , and test stress σ , and the output variable was the material logarithmic creep fracture time lg t .

3.2. Model Prediction Results

3.2.1. Prediction Results of Each Model in the New Method

Three categories of methods were used to predict the creep rupture life of the 5Cr-0.5Mo alloy. The first category involved fitting 15 sets of creep test data from the 5Cr-0.5Mo alloy with the L-M, M-S, G-D, and M-H time–temperature parametric models, respectively, to obtain four fitted curve functions for use in prediction. The fitted curves obtained using the four parametric models are shown in Figure 7a–d. The coefficient values and goodness of fit of the fitted curve functions are shown in Table 2.
The second category of methods uses different machine learning models for training and predicting, and the training set consists of two parts: 15 sets of creep data of 5Cr-0.5Mo alloy and 220 sets of creep data of 1Cr-0.5Mo alloy.
The third category of methods combines the parametric models with the machine learning models, and the training set of the machine learning models includes three parts of data. The first part is the 15 sets of creep test data of 5Cr-0.5Mo alloy. The second part is the creep data predicted by the parametric equations obtained by fitting the 15 sets of creep test data of 5Cr-0.5Mo alloy with the different time–temperature parametric models. As shown in Figure 8, the numbers of creep data points in the various creep life intervals of 5Cr-0.5Mo alloy before expansion are 7 (<1000 h), 3 (1000~3000 h), 2 (3000~10,000 h), and 3 (10,000~30,000 h), respectively; after expansion, each creep life interval contains 50 data points. The third part of the training set consists of the 220 sets of creep test data of 1Cr-0.5Mo alloy.
The prediction results of each model are shown in Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16, which show the actual test values and predicted values of the three categories of methods. The three categories of methods comprise four time–temperature parametric models (L-M, M-S, G-D, M-H), eight machine learning models (PSO-BPNN, GA-BPNN, RBFNN, RF, SVR, DNN, GPR, DBN), and composite models ((L-M/M-S/G-D/M-H) + (PSO-BPNN/GA-BPNN/RBFNN/RF/SVR/DNN/GPR/DBN)).
Figure 9 shows the actual test values and the predicted values of the three categories of methods (L-M model/machine learning models/L-M + machine learning models). The L-M model uses 15 sets of creep test data of 5Cr-0.5Mo; the machine learning models use the creep test data of 5Cr-0.5Mo together with the creep test data of another material, 1Cr-0.5Mo. The L-M + machine learning models use three parts of data, namely, the small-sample creep test data of 5Cr-0.5Mo, the creep test data of 1Cr-0.5Mo, and the creep data expanded using the L-M parametric model. The actual values shown in Figure 9 are the remaining 6 of the 21 sets of 5Cr-0.5Mo creep test data.
After the predicted values of each model are obtained, three quantitative evaluation indicators are calculated for each model through the experimental values and predicted values. Figure 10 shows the values of three evaluation indicators of each model, namely, RMSE, MAPE, and R2. Figure 9 and Figure 10 show that, compared with the machine learning models, the model prediction results obtained by combining the L-M parametric model with various machine learning models are more accurate. Further, their predicted values are closer to those of actual test values.
The results show that the L-M parametric model helps the training process of the machine learning model to find the relationship between the input and output of the small-sample material more accurately. The combination of the L-M parametric model and the machine learning model improves the prediction accuracy of material creep rupture life.
Figure 11 shows the actual test values and predicted values of the three categories of methods (M-S model/machine learning models/composite models). Figure 12 shows the values of the three evaluation indicators of each model (RMSE, MAPE, and R²). Limited by the computational domain of the cubic fitting function of the M-S model, the M-S model cannot predict one of the six test conditions in the test set. The results reported in Figure 11 and Figure 12 show that, compared with the machine learning models alone, combining the M-S parametric model with some of the machine learning models yields more accurate predictions, with predicted values closer to the actual test values.
Figure 13 shows the actual test values and the predicted values of three categories of methods (G-D model/machine learning models/composite models). Figure 14 shows the values of three evaluation indicators of each model—RMSE, MAPE, and R2. Limited by the computational domain of the cubic fitting function of the G-D model, there is a test condition that the G-D model cannot predict among the six test conditions in the set. Figure 13 and Figure 14 show that, compared with the machine learning models, the model prediction results obtained by combining the G-D parametric model with most machine learning models become more accurate and their predicted values grow closer to those of the actual test values.
Figure 15 shows the actual test values and predicted values of three categories of methods (M-H model/machine learning models/composite models). Figure 16 shows the values of three evaluation indicators of each model—RMSE, MAPE, and R2. Figure 15 and Figure 16 show that, compared with the machine learning models, the accuracy of the model prediction results obtained by combining the M-H parametric model with various machine learning models is not significantly improved, which is caused by the low prediction accuracy of M-H parametric model for this set of creep data.
In summary, the creep rupture life of a small-sample material, the 5Cr-0.5Mo alloy, can be predicted using time–temperature parametric models, machine learning models, and a new method combining time–temperature parametric models with machine learning models. The prediction results are compared, and the prediction accuracy of each model is quantified by three quantitative indicators (RMSE, MAPE, R²).
The results show that the prediction accuracy of 5Cr-0.5Mo alloy is improved by combining L-M, M-S, and G-D parametric models with various machine learning models. However, due to the low prediction accuracy of the M-H parametric model for this set of creep data, the prediction effect of combining machine learning models with an M-H parametric model is poor.
On the basis of the above phenomena, it can be seen that when a time–temperature parametric model is combined with machine learning models, the parametric model with the best prediction accuracy should be selected from the four time–temperature parametric models (L-M model, M-S model, G-D model, M-H model) in order to achieve a better combination effect.

3.2.2. Comparison of Model Prediction Accuracy

We calculated the values of the evaluation indicators for each model in the creep rupture life prediction system, and the results are shown in Table 3, which lists the values of the evaluation indicators RMSE, MAPE, and R² for each model in the three categories of methods (time–temperature parametric models/machine learning models/composite models).
From the statistical values shown in Table 3, it can be seen that, for this set of Cr-Mo alloy creep data, the L-M model has the highest prediction accuracy and the M-H model the lowest among the four time–temperature parametric models. Among the eight machine learning models, the GA-BPNN model has the highest prediction accuracy. Among the 32 composite models, the L-M+PSO-BPNN composite model possesses the highest prediction accuracy. Furthermore, among all the models of the three categories of methods, the most accurate is the L-M+PSO-BPNN composite model.

3.3. Comparison of Effects of Different Input Variables on Creep Rupture Life

Among the models used for material creep rupture life prediction, machine learning models have a unique advantage over time–temperature parametric models: they can quantify the influence of the various input variables on material creep rupture life.
The random forest model in machine learning models is an effective and frequently used method of quantifying the feature importance of various input variables to the output.
The influence of different input variables on creep life is compared by calculating feature importance scores in the random forest model of machine learning models. The feature importance evaluation method of the random forest model involves calculating the contribution of each feature in each decision tree of the model and then using the average value of the contribution to derive the importance of each feature to the output results.
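A minimal sketch of this feature-importance calculation, using scikit-learn's RandomForestRegressor (an assumption; the paper does not specify its implementation), is given below. The feature names follow Section 3.1, while the data arrays are random placeholders rather than the NIMS creep data.

```python
# A sketch of the random-forest feature-importance calculation described above.
# Feature names match the input variables of Section 3.1; data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

features = ["C", "Si", "Mn", "P", "S", "Ni", "Cr", "Mo", "Cu", "Al", "N", "T", "stress"]
rng = np.random.default_rng(1)
X = rng.random((241, len(features)))     # placeholder for the combined creep data
y = rng.random(241)                      # placeholder for lg t

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Impurity-based importance, averaged over all trees, as used for Figure 17
for name, score in sorted(zip(features, rf.feature_importances_),
                          key=lambda p: p[1], reverse=True):
    print(f"{name:7s} {score:.3f}")
```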
The influence of different input variables on the creep rupture life of 5Cr-0.5Mo and 1Cr-0.5Mo alloys is shown in Figure 17. It can be seen from Figure 17 that the top five alloy elements with high feature importance scores are Cu, Mn, Ni, Mo, and Al, indicating that the mass fraction content of these five elements has a more substantial effect on alloy creep rupture life than the other six elements.

4. Conclusions

(1)
In this paper, a new creep rupture life prediction method is proposed that obtains the parametric equation of creep rupture life, stress, and temperature using four different time–temperature parametric models. Then, the creep rupture life data of other temperature and stress conditions predicted via parametric equations are used as the expansion of the training set data of various machine learning models. The new method combines the advanced machine learning models with the classical time–temperature parametric models. This measure not only solves the problem that the machine learning model is difficult to use for small samples but also improves the prediction accuracy of the machine learning model;
(2)
Due to the different theories of various creep rupture life prediction models, the prediction results obtained using various prediction models are different, even for the same set of creep data. Additionally, the prediction abilities of models are variable, making it impossible to guarantee that a certain model will always have the strongest prediction ability for a variety of materials. Therefore, we propose a new creep rupture life prediction method in this paper that uses multiple models of three categories of methods simultaneously, compares the prediction accuracy of different models, outputs the predicted model values with the highest accuracy, and improves the prediction accuracy and applicability of the material creep rupture life prediction. The creep rupture life prediction method proposed in this paper can be further improved via the introduction of more machine learning models to further improve the prediction accuracy and applicability of the method;
(3)
Compared with the classical parametric models (L-M, M-S, G-D, and M-H), the unique advantage of the machine learning model is that it can quantify the feature importance of different input variables. However, in the case of small-sample creep data, the prediction accuracy of machine learning models is often low, leading to the reliability of quantitative feature importance scores also being low. The new method proposed in this paper can improve the prediction accuracy of machine learning models in the case of small samples and quantify the influence of different input variables on output more accurately and reliably.

Author Contributions

Conceptualization, J.Y.; Methodology, X.Z.; Validation, C.W. and H.L.; Formal analysis, Y.W. and X.L.; Investigation, Y.W.; Resources, H.L.; Writing—original draft, X.Z.; Writing—review & editing, J.Y., X.L. and C.W.; Visualization, C.W.; Supervision, X.L. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (Grant No. 2022YFF0706400), and Graduate Research and Innovation Foundation of Chongqing, China (Grant No. CYS22042).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The theoretical diagram of the Larson–Miller parametric model.
Figure 2. The theoretical diagram of the Manson–Succop parametric model.
Figure 3. The theoretical diagram of the Ge–Dorn parametric model.
Figure 4. The theoretical diagram of the Manson–Haferd parametric model.
Figure 5. The schematic diagram of the method combining the machine learning models with the time–temperature parametric models.
Figure 6. The flow diagram of a new creep rupture life prediction method.
Figure 7. The fitting curves of four parametric models. (a) L-M model; (b) M-S model; (c) G-D model; (d) M-H model.
Figure 8. The comparison diagram of the creep data amount of various creep rupture life intervals before and after expansion.
Figure 9. The comparison of prediction results between the L-M parametric model, machine learning models, and composite models. (a) PSO-BPNN; (b) GA-BPNN; (c) RBFNN; (d) RF; (e) SVR; (f) DNN; (g) GPR; (h) DBN.
Figure 10. The values of three evaluation indicators of the L-M parametric model, machine learning models, and composite models.
Figure 11. The comparison of prediction results between the M-S parametric model, machine learning models, and composite models. (a) PSO-BPNN; (b) GA-BPNN; (c) RBFNN; (d) RF; (e) SVR; (f) DNN; (g) GPR; (h) DBN.
Figure 12. The values of three evaluation indicators of the M-S parametric model, machine learning models, and composite models.
Figure 13. The comparison of prediction results between the G-D parametric model, machine learning models, and composite models. (a) PSO-BPNN; (b) GA-BPNN; (c) RBFNN; (d) RF; (e) SVR; (f) DNN; (g) GPR; (h) DBN.
Figure 14. The values of three evaluation indicators of the G-D parametric model, machine learning models, and composite models.
Figure 15. The comparison of prediction results between the M-H parametric model, machine learning models, and composite models. (a) PSO-BPNN; (b) GA-BPNN; (c) RBFNN; (d) RF; (e) SVR; (f) DNN; (g) GPR; (h) DBN.
Figure 16. The values of three evaluation indicators of the M-H parametric model, machine learning models, and composite models.
Figure 17. Feature importance scores of different input variables.
Table 1. Partial creep test data of the machine learning training set.

Chemical Formula | T/°C | σ/MPa | C | Si | Mn | P | S | Ni | Cr | Mo | Cu | Al | N | lg(t)
5Cr-0.5Mo | 550 | 88 | 0.1 | 0.27 | 0.45 | 0.014 | 0.006 | 0 | 4.31 | 0.59 | 0.1 | 0.002 | 0.0164 | 3.526080692
  | 550 | 64 | 0.1 | 0.27 | 0.45 | 0.014 | 0.006 | 0 | 4.31 | 0.59 | 0.1 | 0.002 | 0.0164 | 4.424718337
  | 600 | 98 | 0.1 | 0.27 | 0.45 | 0.014 | 0.006 | 0 | 4.31 | 0.59 | 0.1 | 0.002 | 0.0164 | 1.886490725
  | 600 | 69 | 0.1 | 0.27 | 0.45 | 0.014 | 0.006 | 0 | 4.31 | 0.59 | 0.1 | 0.002 | 0.0164 | 2.752816431
1Cr-0.5Mo | 450 | 422 | 0.14 | 0.25 | 0.57 | 0.011 | 0.009 | 0.15 | 0.96 | 0.53 | 0.14 | 0.005 | 0.0098 | 2.256958153
  | 450 | 412 | 0.14 | 0.25 | 0.57 | 0.011 | 0.009 | 0.15 | 0.96 | 0.53 | 0.14 | 0.005 | 0.0098 | 2.682686478
  | 650 | 41 | 0.14 | 0.25 | 0.55 | 0.012 | 0.011 | 0.14 | 0.91 | 0.54 | 0.14 | 0.017 | 0.0098 | 2.831741834
  | 650 | 29 | 0.14 | 0.25 | 0.55 | 0.012 | 0.011 | 0.14 | 0.91 | 0.54 | 0.14 | 0.017 | 0.0098 | 3.530814194
(Columns C through N give the chemical composition in wt.%.)
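Records such as those in Table 1 supply the inputs of the machine learning models: temperature, stress, and the eleven composition fractions act as features, and lg(t) is the prediction target. The snippet below is a hypothetical illustration of that arrangement; pandas is assumed, and the column names and the rounding of lg(t) are ours, not the paper's.

```python
# Hypothetical arrangement of Table-1-style records into a feature matrix and
# target vector; only two rows are shown, and lg(t) is rounded for readability.
import pandas as pd

rows = [
    # formula,     T(°C), σ(MPa), C,    Si,   Mn,   P,     S,     Ni,   Cr,   Mo,   Cu,   Al,    N,      lg(t)
    ["5Cr-0.5Mo",  550,   88,     0.10, 0.27, 0.45, 0.014, 0.006, 0.00, 4.31, 0.59, 0.10, 0.002, 0.0164, 3.5261],
    ["1Cr-0.5Mo",  450,   422,    0.14, 0.25, 0.57, 0.011, 0.009, 0.15, 0.96, 0.53, 0.14, 0.005, 0.0098, 2.2570],
]
cols = ["formula", "T", "sigma", "C", "Si", "Mn", "P", "S", "Ni", "Cr", "Mo", "Cu", "Al", "N", "lgt"]
df = pd.DataFrame(rows, columns=cols)

X = df.drop(columns=["formula", "lgt"]).to_numpy()  # 13 input features: T, sigma, 11 wt.% fractions
y = df["lgt"].to_numpy()                            # target: lg(rupture life)
print(X.shape, y.shape)                             # (2, 13) (2,)
```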
Table 2. The values of the coefficients and goodness of fit of the four fitting curve functions.

Model | Cubic Term (a3) | Quadratic Term (a2) | First Power Term (a1) | Constant Term (a0) | Goodness of Fit
L-M | 0.73077384 | −8.78500953 | 34.21754956 | −41.28994736 | 0.98204
M-S | 0.00475337475 | −0.389710496 | 10.4841186 | −90.7454929 | 0.98763
G-D | 0.0031320327 | 0.21371887 | 14.70693983 | 35.0563393 | 0.97758
M-H | −73854.1678 | −871.449158 | 78.5190039 | 2.7753578 | 0.97617
Table 3. The values of evaluation indicators of each model in the creep life prediction method.

The Category of the Model | Model | R-Squared | RMSE | MAPE
Time–temperature parametric models | L-M | 0.89915 | 0.26079 | 0.05516
  | M-S | 0.81604 | 0.36274 | 0.08812
  | G-D | 0.82924 | 0.34949 | 0.07380
  | M-H | −0.00229 | 0.82213 | 0.19542
Machine learning models | PSO-BPNN | 0.83987 | 0.32860 | 0.08606
  | GA-BPNN | 0.87129 | 0.29462 | 0.07819
  | RBFNN | 0.65821 | 0.48009 | 0.13802
  | RF | 0.42197 | 0.62433 | 0.17128
  | SVR | 0.73761 | 0.42065 | 0.11426
  | DNN | 0.72609 | 0.42978 | 0.10071
  | GPR | 0.86998 | 0.29611 | 0.07653
  | DBN | 0.72438 | 0.43112 | 0.11329
Composite models | L-M + PSO-BPNN | 0.98855 | 0.08786 | 0.02033
  | L-M + GA-BPNN | 0.97715 | 0.12413 | 0.03246
  | L-M + RBFNN | 0.95364 | 0.17682 | 0.04506
  | L-M + RF | 0.98608 | 0.09688 | 0.02512
  | L-M + SVR | 0.97179 | 0.13793 | 0.03308
  | L-M + DNN | 0.97902 | 0.11895 | 0.02249
  | L-M + GPR | 0.96317 | 0.15759 | 0.03465
  | L-M + DBN | 0.97797 | 0.12189 | 0.02823
  | M-S + PSO-BPNN | 0.71748 | 0.43648 | 0.11446
  | M-S + GA-BPNN | 0.80245 | 0.36499 | 0.09021
  | M-S + RBFNN | 0.90821 | 0.24880 | 0.06756
  | M-S + RF | 0.83555 | 0.33301 | 0.08134
  | M-S + SVR | 0.81723 | 0.35107 | 0.08621
  | M-S + DNN | 0.81055 | 0.35743 | 0.10176
  | M-S + GPR | 0.81342 | 0.35471 | 0.08910
  | M-S + DBN | 0.92224 | 0.22899 | 0.05187
  | G-D + PSO-BPNN | 0.91711 | 0.23642 | 0.06321
  | G-D + GA-BPNN | 0.89371 | 0.26772 | 0.07017
  | G-D + RBFNN | 0.96582 | 0.15182 | 0.03663
  | G-D + RF | 0.96735 | 0.14838 | 0.03966
  | G-D + SVR | 0.96902 | 0.14453 | 0.03223
  | G-D + DNN | 0.78624 | 0.37967 | 0.08100
  | G-D + GPR | 0.77820 | 0.38675 | 0.09104
  | G-D + DBN | 0.97352 | 0.13364 | 0.03737
  | M-H + PSO-BPNN | 0.15121 | 0.75656 | 0.19010
  | M-H + GA-BPNN | 0.19909 | 0.73491 | 0.17209
  | M-H + RBFNN | 0.56002 | 0.54470 | 0.12729
  | M-H + RF | 0.27659 | 0.69845 | 0.14541
  | M-H + SVR | 0.47376 | 0.59571 | 0.15259
  | M-H + DNN | 0.32738 | 0.67349 | 0.16517
  | M-H + GPR | 0.23094 | 0.72015 | 0.15742
  | M-H + DBN | 0.33878 | 0.66775 | 0.15937
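For completeness, the three evaluation indicators reported in Table 3 (and plotted in Figures 10, 12, 14 and 16) are standard regression metrics computed on lg(t). The minimal NumPy sketch below uses placeholder values, not the paper's test set; note that MAPE is expressed here as a fraction, consistent with the magnitudes in Table 3.

```python
# Evaluation indicators used in Table 3, computed on predicted vs. observed lg(t).
# Placeholder arrays; NumPy is an assumption about tooling.
import numpy as np

def evaluate(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))        # root-mean-square error
    mape = np.mean(np.abs((y_true - y_pred) / y_true))     # mean absolute percentage error (fraction)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot                             # coefficient of determination
    return rmse, mape, r2

# Example with placeholder lg(t) values (not the paper's data):
y_true = np.array([2.1, 2.8, 3.4, 4.0])
y_pred = np.array([2.0, 2.9, 3.3, 4.2])
print(evaluate(y_true, y_pred))
```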
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
