Article

Machine Learning-Based Prediction of Elastic Buckling Coefficients on Diagonally Stiffened Plate Subjected to Shear, Bending, and Compression

1 School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 School of Civil and Transportation Engineering, Hebei University of Technology, Tianjin 300401, China
* Authors to whom correspondence should be addressed.
Sustainability 2023, 15(10), 7815; https://doi.org/10.3390/su15107815
Submission received: 15 March 2023 / Revised: 27 April 2023 / Accepted: 2 May 2023 / Published: 10 May 2023
(This article belongs to the Special Issue Sustainable Structures and Construction in Civil Engineering)

Abstract

The buckling mechanism of diagonally stiffened plates under the combined action of shear, bending, and compression is a complex phenomenon that is difficult to describe with simple, explicit expressions. Accurately predicting the elastic buckling coefficient is crucial for calculating the buckling load of these plates. Several factors influence the buckling load of diagonally stiffened plates, including the plate’s aspect ratio, the stiffener’s flexural and torsional rigidity, and the in-plane load. Traditional analysis methods rely on fitting a large number of finite element numerical simulations to obtain an empirical formula for the buckling coefficient of stiffened plates under a single load; however, such formulas cannot be applied to diagonally stiffened plates under combined loads. To address these limitations, several machine learning (ML) models were developed to predict the buckling coefficient of diagonally stiffened plates and interpreted with the SHAP method. Eight ML models were trained: decision tree (DT), k-nearest neighbor (K-NN), artificial neural network (ANN), random forest (RF), AdaBoost, LightGBM, XGBoost, and CatBoost. The evaluated models predict the buckling coefficient of diagonally stiffened plates under combined loading with high accuracy, and XGBoost performed best among the eight. Further analysis using the SHAP method revealed that the aspect ratio of the plate is the most important feature influencing the elastic buckling coefficient, followed by the combined action ratio and the flexural and torsional rigidity of the stiffener. Based on these findings, it is recommended that the stiffener-to-plate flexural stiffness ratio be greater than 20 and that the stiffener’s torsional-to-flexural stiffness ratio be greater than 0.4. This improves the elastic buckling coefficient of diagonally stiffened plates and enables them to achieve a higher load capacity.

1. Introduction

Stiffened plates are widely utilized in various fields such as civil engineering, automotive, shipbuilding, and aerospace [1,2,3]. Current research focuses on the buckling stability of stiffened plates under in-plane loading [4,5,6,7,8]. In structural engineering, stiffeners are used in beams, plates, shear walls, and other major structural members to improve structural performance [9]. Common forms of stiffened plates include transverse, longitudinal, and diagonal. In 1960, Timoshenko and Gere [10] systematically studied the buckling behavior of rectangular plates and derived the equilibrium equations of rectangular plates under single loads such as compression, bending, and shear. In addition, the relationship between the flexural rigidity of stiffeners and the critical buckling stress was analyzed for transversely and longitudinally stiffened plates, and a limit on the stiffness ratio was given, establishing the foundation for research on the buckling stability of stiffened plates. Mikami [11] and Yonezawa [12] investigated the buckling behavior of diagonally stiffened plates simply supported on four edges. Yuan et al. [13] considered the flexural rigidity of open diagonal stiffeners and studied the shear behavior of diagonally stiffened stainless-steel girders using finite element (FE) software. Martins and Cardoso [14] also investigated how the torsional stiffness of closed-section stiffeners affects the shear buckling behavior of diagonally stiffened plates.
Most current research focuses on the shear buckling behavior of diagonally stiffened plates, and research on the combined action of multiple loads remains insufficient. This hinders the design of stiffened plates. There is an urgent need for a simple and convenient model for calculating the buckling performance of diagonally stiffened plates under combined compression–bending–shear action.
On the other hand, new types of composite structures have emerged in recent years. Because many variables affect their mechanical properties, it is difficult to obtain analytical solutions directly. It is also hard to fully reflect the influence of the different parameters that affect a structure’s performance when using semi-theoretical, semi-empirical methods. Machine learning (ML) can reveal hidden relationships between input features and prediction results and can predict the load-bearing capacity, failure mode, and damage assessment of members with excellent accuracy [15,16,17]. Predicting the bearing capacity of complex structures is an important application of ML, with examples including the punching strength of concrete slabs [18], the axial compressive strength of concrete-filled steel tube (CFST) columns [19], the buckling loads of perforated steel beams [20], material bond strength [21], concrete splitting and tensile strength [22], and rebound modulus properties [23]. The nonlinear hysteretic response of steel plate shear walls can also be simulated using deep learning [24]. Furthermore, the SHAP (Shapley Additive exPlanations) approach [25] can effectively explain how each feature contributes to the results and increase the credibility of an ML model [26].
There is a lack of research on the performance of diagonally stiffened plates under complex actions, especially under the coupled action of compression, bending, and shear, as well as a lack of relevant theoretical and design bases. Therefore, the central problem of this study is to determine the elastic buckling coefficients of diagonally stiffened plates under the combined action of bending, compression, and shear. In this paper, a dataset of the factors affecting the buckling performance of diagonally stiffened plates was established, the performance metrics of different machine learning algorithms in predicting the elastic buckling coefficient were compared, and the optimal model was interpreted using the SHAP method. The results show that ensemble learning exhibits good generalization ability and can quickly predict the elastic buckling coefficients of diagonally stiffened plates under the combined action of compression, bending, and shear.

2. Methods

2.1. Buckling Coefficient of Diagonally Stiffened Plate

Timoshenko and Gere [10] studied the elastic stability of unstiffened plates using the energy method and found that the aspect ratio γ of the plate significantly affects the critical buckling stress τcr of the structure, as given in Equation (1), where kcr is a function of γ. According to Bleich [27], the buckling coefficients kcr for a four-edge simply supported plate under shear and bending loads alone are calculated by the approximate Equations (2) and (3), respectively. For example, for a square (γ = 1), four-edge simply supported plate, the shear buckling coefficient is 9.34 and the bending buckling coefficient is 23.9.
$$\tau_{cr} = \frac{k_{cr}\,\pi^2 D}{H^2 t} = \frac{k_{cr}\,\pi^2 E}{12(1-\nu^2)\lambda^2}, \quad \text{where } D = \frac{E t^3}{12(1-\nu^2)},\ \lambda = \frac{H}{t} \tag{1}$$

$$\text{For shear:}\quad k_{cr} = \begin{cases} 5.34 + 4/\gamma^2 & \gamma \ge 1 \\ 4 + 5.34/\gamma^2 & \gamma < 1 \end{cases} \tag{2}$$

$$\text{For bending:}\quad k_{cr} = \begin{cases} 15.87 + 1.87/\gamma^2 + 8.6\gamma^2 & \gamma < 2/3 \\ 23.9 & \gamma \ge 2/3 \end{cases} \tag{3}$$

$$\gamma = \frac{L}{H} \tag{4}$$
where L, H, and t are the plate’s length, height, and thickness, respectively; D is the plate’s flexural stiffness; E is the elastic modulus of the plate; kcr is the elastic buckling coefficient; ν is Poisson’s ratio; and λ is the slenderness ratio of the plate.
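As a quick sanity check of Equations (1)–(4), the buckling coefficients and the critical stress can be evaluated directly. The sketch below (plain Python; the plate dimensions in the last line are illustrative, not taken from the paper) reproduces the values quoted above for a square plate:

```python
import math

def k_shear(gamma):
    """Shear buckling coefficient of a simply supported plate, Eq. (2)."""
    return 5.34 + 4.0 / gamma**2 if gamma >= 1 else 4.0 + 5.34 / gamma**2

def k_bending(gamma):
    """Bending buckling coefficient of a simply supported plate, Eq. (3)."""
    if gamma < 2.0 / 3.0:
        return 15.87 + 1.87 / gamma**2 + 8.6 * gamma**2
    return 23.9

def tau_cr(k_cr, E, nu, H, t):
    """Critical buckling stress, Eq. (1), with slenderness lambda = H / t."""
    lam = H / t
    return k_cr * math.pi**2 * E / (12.0 * (1.0 - nu**2) * lam**2)

# Square plate (gamma = L/H = 1): the coefficients quoted in the text
print(k_shear(1.0))    # ≈ 9.34
print(k_bending(1.0))  # 23.9
# Illustrative steel plate: E = 206,000 MPa, nu = 0.3, H = 3000 mm, t = 10 mm
print(round(tau_cr(k_shear(1.0), 206000.0, 0.3, 3000.0, 10.0), 2))
```
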
Tong [28] analyzed the buckling stability of a four-edge simply supported plate under combined shear and non-uniform compression, resulting in the interaction formula of Equation (5) for the plate under the combined action of axial compression, bending, and shear. The non-uniform compression can be separated into combined bending and axial compression actions. A coefficient is defined to represent the distribution of the non-uniform axial compression: the distribution factor ζ, given in Equation (7). More information is detailed later in Section 3.3.
$$\frac{\sigma}{\sigma_{cr}} + \left(\frac{\sigma_b}{\sigma_{bcr}}\right)^2 + \left(\frac{\tau}{\tau_{cr}}\right)^2 = 1 \tag{5}$$

$$\sigma = (1 - 0.5\zeta)\,\sigma_{\max}, \qquad \sigma_b = 0.5\zeta\,\sigma_{\max} \tag{6}$$

$$\zeta = \frac{\sigma_{\max} - \sigma_{\min}}{\sigma_{\max}} \tag{7}$$
where σ, σb and τ are the stress of axial compression, bending, and shear, respectively; σcr, σbcr and τcr are the critical stresses in axial compression, bending, and shear alone, respectively; and σmax and σmin are the maximum and minimum stresses in non-uniform compression, respectively.
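A minimal sketch of Equations (5)–(7), showing how a non-uniform compression field is split into axial and bending components and then checked against the interaction formula (the stress values below are illustrative, not from the paper):

```python
def zeta(sigma_max, sigma_min):
    """Non-uniform compression distribution factor, Eq. (7)."""
    return (sigma_max - sigma_min) / sigma_max

def decompose(sigma_max, z):
    """Split non-uniform compression into axial and bending parts, Eq. (6)."""
    sigma = (1.0 - 0.5 * z) * sigma_max   # uniform (axial) component
    sigma_b = 0.5 * z * sigma_max         # bending component
    return sigma, sigma_b

def interaction(sigma, sigma_cr, sigma_b, sigma_bcr, tau, tau_cr):
    """Left-hand side of the interaction formula, Eq. (5); buckling at 1.0."""
    return sigma / sigma_cr + (sigma_b / sigma_bcr) ** 2 + (tau / tau_cr) ** 2

# Pure bending: sigma_min = -sigma_max gives zeta = 2.0,
# so the axial part vanishes and the whole field is bending.
z = zeta(100.0, -100.0)
sigma, sigma_b = decompose(100.0, z)
print(z, sigma, sigma_b)  # 2.0 0.0 100.0
```
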
Diagonally stiffened plates can be classified as either tension-type or compression-type, depending on the direction of the diagonal stiffeners relative to the shear force. Under pure shear, plates whose stiffeners are subjected to compression are called compression-type, while those whose stiffeners are subjected to tension are called tension-type. Compared to unstiffened plates, diagonally stiffened plates have a larger buckling coefficient, with the compression type having the highest. Mikami [11] assumed that the diagonally stiffened edge is ideally restrained from out-of-plane displacement, meaning the stiffeners have infinite flexural stiffness. Based on the resultant curves of the numerical analysis, equations were fitted to calculate the buckling coefficients of diagonally stiffened plates in pure shear and in bending alone: the shear buckling coefficient can be approximated by Equation (8) and the bending buckling coefficient by Equation (9).
$$\text{For shear:}\quad k_{cr} = \begin{cases} 11.9 + 10.1/\gamma + 10.9/\gamma^2 & \text{compression} \\ 17.2 - 22.5/\gamma + 16.7/\gamma^2 & \text{tension} \end{cases} \tag{8}$$

$$\text{For bending:}\quad k_{cr} = 22.5 + 4.23/\gamma + 2.75/\gamma^2 \tag{9}$$
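Equation (8) can be checked numerically: for a square plate (γ = 1) the compression-type coefficient evaluates to 32.9, the value kcr0 used later in Section 4.1 (a minimal Python sketch):

```python
def k_shear_diag(gamma, diagonal="compression"):
    """Shear buckling coefficient of a rigidly diagonally stiffened plate, Eq. (8)."""
    if diagonal == "compression":
        return 11.9 + 10.1 / gamma + 10.9 / gamma**2
    return 17.2 - 22.5 / gamma + 16.7 / gamma**2  # tension diagonal

# Square plate: the compression-type coefficient matches k_cr0 = 32.9
print(k_shear_diag(1.0))             # ≈ 32.9
print(k_shear_diag(1.0, "tension"))  # ≈ 11.4
```
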
Considering the flexural rigidity of stiffeners, the critical elastic buckling stress obtained from the FE analysis was compared to the structure’s theoretical buckling stress [13]. As a result, the equation for estimating the elastic buckling coefficient of the diagonally stiffened plate was modified as Equation (10), where Is is the moment of inertia of the stiffeners.
$$k_{cr} = \begin{cases}
\dfrac{24.2}{\gamma^2}\sqrt[3]{\dfrac{I_s}{ht^3}} + \dfrac{6.57}{\gamma^2} + 4.92 & \dfrac{I_s}{ht^3} < 1,\ \gamma < 1 \\[2mm]
\dfrac{24.2}{\gamma^2}\sqrt[3]{\dfrac{I_s}{ht^3}} + \dfrac{4.92}{\gamma^2} + 6.57 & \dfrac{I_s}{ht^3} < 1,\ \gamma \ge 1 \\[2mm]
\dfrac{9}{\gamma^2}\sqrt[3]{\dfrac{I_s}{ht^3}} + \dfrac{6.57}{\gamma^2} + \dfrac{15.2}{\gamma} + 4.92 & \dfrac{I_s}{ht^3} \ge 1,\ \gamma < 1 \\[2mm]
\dfrac{9}{\gamma^2}\sqrt[3]{\dfrac{I_s}{ht^3}} + \dfrac{4.92}{\gamma^2} + \dfrac{15.2}{\gamma} + 6.57 & \dfrac{I_s}{ht^3} \ge 1,\ \gamma \ge 1
\end{cases} \tag{10}$$
Martins and Cardoso [14] investigated the effect of the torsional rigidity of stiffeners on the elastic buckling stress. Based on a nonlinear regression analysis of a large number of numerical simulation results, Equations (11) and (12) were provided to predict the shear buckling coefficients of open and closed diagonally stiffened plates, respectively. In these equations, ηy is the relative flexural rigidity around the strong axis, ηz is the relative flexural rigidity around the weak axis, and φx is the relative torsional rigidity for the stiffeners.
$$k_{cr} = \begin{cases}
4.00 + \dfrac{5.34}{\gamma^2} + 11.50\sqrt[3]{\eta_y} + \dfrac{5.23\,\eta_y^{0.404}\,\eta_z^{0.038}}{\gamma^{1.371}} & \eta_y \le 12(1-\nu^2),\ 0.5 < \gamma \le 1.0 \\[2mm]
5.34 + \dfrac{4.00}{\gamma^2} + 4.07\sqrt[3]{\eta_y} + \dfrac{6.15\,\eta_y^{0.428}\,\eta_z^{0.020}}{\gamma^{1.434}} & \eta_y \le 12(1-\nu^2),\ 1.0 < \gamma \le 2.0 \\[2mm]
18.35 + \dfrac{5.67}{\gamma} + \dfrac{3.35}{\gamma^2} - 17.96\sqrt[3]{\eta_y} + \dfrac{10.16\,\eta_y^{0.026}\,\eta_z^{0.076}}{\gamma^{1.788}} & \eta_y > 12(1-\nu^2),\ 0.5 < \gamma \le 1.0 \\[2mm]
12.03 + \dfrac{9.96}{\gamma} + \dfrac{11.45}{\gamma^2} - 14.29\sqrt[3]{\eta_y} + \dfrac{6.92\,\eta_z^{0.375}\,\eta_y^{0.036}}{\gamma^{1.632}} & \eta_y > 12(1-\nu^2),\ 1.0 < \gamma \le 2.0
\end{cases} \tag{11}$$

$$k_{cr} = \begin{cases}
4.00 + \dfrac{5.34}{\gamma^2} + 5.03\sqrt[3]{\eta_y} + 1.46\sqrt[3]{\phi_x} + \dfrac{5.95\,\eta_y^{0.344}\,\eta_z^{0.086}}{\gamma^{1.557}} & \eta_y \le 12(1-\nu^2),\ 0.5 < \gamma \le 1.0 \\[2mm]
5.34 + \dfrac{4.00}{\gamma^2} - 3.85\sqrt[3]{\eta_y} - 1.25\sqrt[3]{\phi_x} + \dfrac{6.81\,\eta_y^{0.395}\,\eta_z^{0.011}}{\gamma^{1.083}} & \eta_y \le 12(1-\nu^2),\ 1.0 < \gamma \le 2.0 \\[2mm]
24.49 + \dfrac{13.62}{\gamma^2} - 17.01\sqrt[3]{\eta_y} + 56.50\sqrt[3]{\eta_z} - 64.40\sqrt[3]{\phi_x} + \dfrac{2.19\,\eta_z^{0.641}\,\eta_y^{0.211}}{\gamma^{1.942}} & \eta_y > 12(1-\nu^2),\ 0.5 < \gamma \le 1.0 \\[2mm]
17.34 + \dfrac{16.48}{\gamma^2} - 10.70\sqrt[3]{\eta_y} + 25.62\sqrt[3]{\eta_z} - 29.04\sqrt[3]{\phi_x} + \dfrac{2.95\,\eta_z^{0.563}\,\eta_y^{0.170}}{\gamma^{1.578}} & \eta_y > 12(1-\nu^2),\ 1.0 < \gamma \le 2.0
\end{cases} \tag{12}$$
In the study of the buckling coefficient of diagonally stiffened plates, researchers have progressed from assuming that the stiffeners are rigid to considering the flexural rigidity and torsional rigidity. This consideration of additional factors brings the analysis results closer to reality. Through extensive finite element parameter analysis, scholars have derived formulas for calculating shear buckling coefficients. However, the loading conditions of the structure, in reality, are complex and can involve the combined action of bending, shear, and compression. Therefore, studying the buckling coefficient of stiffened plates under combined loads is an urgent problem.

2.2. Buckling Mode of Diagonally Stiffened Plate Subjected to Pure Shear

A diagonally stiffened plate with closed cross-section stiffeners is illustrated in Figure 1, where L is the width of the plate, H is the height of the plate, t is the thickness of the plate, b is the height of the flange of the closed cross-section stiffener, bs is the width of the web of the closed cross-section stiffener, and ts is the thickness of the closed cross-section stiffener.
The stiffener-to-plate flexural stiffness ratio η quantifies the out-of-plane flexural rigidity of the stiffener relative to that of the plate and is defined in Equation (13). The stiffener’s torsional-to-flexural rigidity ratio K is defined in Equation (14).
$$\eta = \frac{E_s I_s}{D L_e}, \quad \text{where } L_e = H\sin\alpha_s + L\cos\alpha_s \tag{13}$$

$$K = \frac{G_s J_s}{E_s I_s} \tag{14}$$
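Equations (13) and (14) can be evaluated directly. In the sketch below, the stiffener section properties Is and Js and the 45° diagonal angle are illustrative assumptions, not values from the paper:

```python
import math

def eta(Es, Is, D, H, L, alpha_s):
    """Stiffener-to-plate flexural stiffness ratio, Eq. (13)."""
    Le = H * math.sin(alpha_s) + L * math.cos(alpha_s)
    return Es * Is / (D * Le)

def K_ratio(Gs, Js, Es, Is):
    """Stiffener torsional-to-flexural rigidity ratio, Eq. (14)."""
    return Gs * Js / (Es * Is)

# Illustrative values (not from the paper): steel plate 3000 x 3000 x 10 mm,
# stiffener along the 45-degree diagonal with assumed section properties.
E, nu, t = 206000.0, 0.3, 10.0
D = E * t**3 / (12.0 * (1.0 - nu**2))   # plate flexural stiffness, from Eq. (1)
Gs = E / (2.0 * (1.0 + nu))             # shear modulus of the steel stiffener
Is, Js = 4.0e6, 1.0e6                   # assumed inertia / torsion constant, mm^4
print(round(eta(E, Is, D, 3000.0, 3000.0, math.pi / 4.0), 2))
print(round(K_ratio(Gs, Js, E, Is), 4))  # 0.0962
```
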
Previous work [29] analyzed square plates with diagonal stiffeners and found that the flexural rigidity of the stiffeners affects the buckling mode of the diagonally stiffened plate (Figure 2). For the compression type, the buckling coefficient increases continuously with an increasing stiffener-to-plate flexural stiffness ratio η. When η is small, the stiffened plate buckles as a whole, and its elastic shear buckling coefficient is lower than the value kcr0 = 32.9 calculated by Yonezawa’s formula, Equation (8). As stiffener stiffness increases, the shear buckling coefficient gradually increases and the buckling mode shifts to local buckling. The stiffener stiffness at which the structure changes from overall to local buckling is called the threshold stiffness ηTh. Designers need to ensure that the stiffener has sufficient stiffness to avoid overall buckling of the structure. When the stiffener stiffness exceeds ηTh, the buckling load of the structure continues to increase, and this contribution is provided by the stiffener. For the tension type, the stiffener is subjected to tensile stresses and there is no stability problem, so the threshold stiffness ηTh is not meaningful for tension-type diagonally stiffened plates. When the ratio η exceeds a certain value, the structure changes from overall to local buckling and the buckling load stabilizes and stops increasing.
Previous research has mainly focused on the buckling behavior of diagonally stiffened plates under shear loading. However, the buckling behavior of diagonally stiffened plates under combined loading depends on several variables, such as geometrical dimensions and loading conditions, which are difficult to capture in a simple formula. For this reason, a data-driven machine learning approach is used to solve this problem and reveal the underlying mapping relationships between the influencing factors and the target.

2.3. Machine Learning Algorithms

The machine learning architecture for predicting the elastic buckling coefficient is illustrated in Figure 3. Several machine learning algorithms were used, including decision tree (DT), random forest (RF), k-nearest neighbor (K-NN), boosting ensemble learning (AdaBoost, XGBoost, LightGBM, and CatBoost), and artificial neural network (ANN). The dataset was divided into a test set and a training set. The training set was used to train the ML models, while the test set was used to evaluate the performance. Furthermore, the SHAP method is utilized to quantify the contribution of the features and to reveal the pattern of their influence on the predictions.
(1) Decision tree
A decision tree (DT) is a typical supervised machine learning algorithm comprising a root node, decision nodes, and terminal nodes. The feature space is recursively partitioned based on a splitting attribute until all samples in a node belong to the same class or no features remain to split on. The depth of the tree has a significant impact on the DT model’s computation time and complexity; therefore, the maximum depth is the key hyperparameter to be tuned in the DT model. Other key hyperparameters include the maximum number of leaf nodes and the minimum number of samples required to split a node.
(2) K-nearest neighbor
K-nearest neighbor (K-NN) is a nonparametric machine learning algorithm [21]. It predicts an output value by averaging the output values of the k nearest neighbors in the training set, with nearness measured by the Minkowski metric, where p = 1 and p = 2 correspond to the Manhattan and Euclidean distances, respectively. Therefore, the key hyperparameter that needs to be fine-tuned in the K-NN model is the number of nearest neighbors k.
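The K-NN prediction rule described above can be sketched in a few lines of plain Python (the dataset is a toy example for illustration; a production model would use an optimized library implementation):

```python
def minkowski(a, b, p=2):
    """Minkowski distance; p=1 is Manhattan, p=2 is Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_predict(X_train, y_train, x, k=3, p=2):
    """Predict as the mean target of the k nearest training samples."""
    order = sorted(range(len(X_train)),
                   key=lambda i: minkowski(X_train[i], x, p))
    neighbors = order[:k]
    return sum(y_train[i] for i in neighbors) / k

# Toy data: the target is the sum of the two features
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 2.0)]
y = [0.0, 1.0, 1.0, 2.0, 4.0]
print(knn_predict(X, y, (0.9, 0.9), k=3))  # ≈ 1.33 (neighbors: (1,1), (1,0), (0,1))
```
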
(3) Artificial neural network
An artificial neural network (ANN), also called a multilayer perceptron (MLP), is a mathematical model that simulates the neural system of the human brain [30]. It consists of one input layer, one or more hidden layers, and one output layer, each containing multiple neurons. The behavior of each neuron is defined by the weights assigned to it. The weights are modified by iterating over the training data to minimize the error between the predicted and actual outputs, and an activation function determines whether each node passes its signal on to the next layer. The hyperparameters of the ANN include the number of hidden layers, the number of neurons in each hidden layer, the type of activation function (such as sigmoid, tanh, and relu), and the learning rate. The computational accuracy of the model can be enhanced by adding more hidden layers and neurons, but the computing cost also rises.
(4) Random forest
Random forest (RF) is a bagging method of ensemble learning. Ensemble learning generates multiple prediction models and then combines them into a strong learner according to certain rules, which has been proven to be significantly better than single learners such as K-NN and DT. RF consists of multiple DTs: features are randomly selected to construct independent trees, and the results of all trees in the forest are averaged [31]. RF can be trained more quickly than DT and also lowers the risk of overfitting in DT.
(5) Boosting algorithm
Boosting is another ensemble learning paradigm. Four main boosting algorithms are used here: Adaptive Boosting (AdaBoost) [32], Light Gradient Boosting Machine (LightGBM) [33], Extreme Gradient Boosting (XGBoost) [34], and Categorical Boosting (CatBoost) [35]. In the initial training step of AdaBoost, all instances are weighted equally. Then, the weights of samples mispredicted by the previous base learner are increased, while the weights of correctly predicted samples are decreased, and the reweighted samples are used to train the next base learner. These single models are weighted together to create the final ensemble model. Both LightGBM and XGBoost fit successive decision trees to the gradient of the loss function. LightGBM grows trees leaf-wise, which reduces error by detecting key split points and stopping unnecessary computation, improving accuracy and speed. The loss function of XGBoost adds a regularization term to the gradient boosting tree to control model complexity and avoid overfitting. The base model of CatBoost is an oblivious tree, which uses the same feature to split at each level and can handle gradient bias and prediction bias, thereby avoiding overfitting and achieving higher accuracy.
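The general boosting idea — each new weak learner fits the residuals left by the current ensemble — can be illustrated with a minimal gradient-boosting-on-stumps sketch in plain Python (a didactic toy, not the actual AdaBoost/XGBoost/LightGBM/CatBoost implementations):

```python
def fit_stump(x, residual):
    """Least-squares single-split regression stump on 1-D data."""
    best = None
    for thr in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= thr]
        right = [r for xi, r in zip(x, residual) if xi > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda xi: lm if xi <= thr else rm

def gradient_boost(x, y, n_rounds=50, lr=0.3):
    """Each stump is fitted to the residuals of the previous ensemble."""
    pred = [sum(y) / len(y)] * len(x)       # start from the mean prediction
    for _ in range(n_rounds):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residual)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return pred

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 4.0, 9.0, 16.0]              # y = x^2, nonlinear in x
pred = gradient_boost(x, y)
rmse = (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5
print(rmse < 0.5)  # True: the ensemble fits far better than any single stump
```
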

2.4. Evaluation

Machine learning is primarily used to solve classification and regression problems. Evaluation metrics for regression focus on the difference between predicted and true values. Common evaluation metrics for regression include root mean square error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), coefficient of determination (R2), and so on. These metrics are listed in Table 1.
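The regression metrics listed in Table 1 can be sketched directly in plain Python (the values below are toy numbers for illustration):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [10.0, 20.0, 30.0]
y_pred = [11.0, 19.0, 33.0]
print(round(rmse(y_true, y_pred), 3))  # 1.915
print(round(mae(y_true, y_pred), 3))   # 1.667
print(round(mape(y_true, y_pred), 3))  # 8.333
print(round(r2(y_true, y_pred), 3))    # 0.945
```
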

2.5. Explanation of ML Model by SHAP

Explainable ML models help us understand the mechanisms involved in ML models and improve the credibility of the developed models [23,36]. SHAP is a game-theoretic approach to explaining the output of any machine learning model [25]. SHAP explains a model by calculating how much each feature contributes to the corresponding predicted value. The SHAP value can be written as
$$g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i \tag{15}$$
where z′i is 0 or 1: 1 means the feature takes the same value as in the explained sample, and 0 means the feature is missing; ϕ0 is a constant equal to the average prediction; ϕi is the SHAP contribution of feature i; and M is the number of input features.
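The decomposition g(z′) above is purely additive, which can be illustrated with hypothetical SHAP values (ϕ0 and ϕi below are made-up numbers, not outputs of any model in this paper):

```python
def shap_explanation(phi_0, phi, z):
    """Additive explanation g(z') = phi_0 + sum(phi_i * z'_i)."""
    return phi_0 + sum(p * zi for p, zi in zip(phi, z))

# Hypothetical SHAP values for a 3-feature model with average prediction 25.0
phi_0 = 25.0
phi = [4.0, -1.5, 0.5]                          # per-feature contributions
print(shap_explanation(phi_0, phi, [1, 1, 1]))  # 28.0: all features present
print(shap_explanation(phi_0, phi, [0, 0, 0]))  # 25.0: falls back to the average
```
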

3. Influencing Features

3.1. Geometric Properties

The diagonally stiffened plate’s geometric parameters include the plate’s width L, height H, and thickness t, and the closed cross-section stiffener’s flange height b, web width bs, and thickness ts, as shown in Figure 1. If these six geometric parameters were taken as input features, the dimension of the ML input would be large, and high-dimensional datasets have a negative impact on machine learning. Moreover, during finite element analysis, several parameters are usually fixed as constants while others are varied, which makes it convenient to relate inputs to outputs. According to the shear buckling performance of diagonally stiffened plates, the main influencing parameters are the aspect ratio γ, the stiffener-to-plate flexural stiffness ratio η, and the stiffener’s torsional-to-flexural rigidity ratio K, calculated by Equations (4), (13), and (14), respectively. These three ratio features are functions of the six geometric parameters, so they retain the characteristic information of the geometry while reducing the input feature dimension.

3.2. Mechanical Characteristics

The material characteristics that affect the buckling load of the stiffened plate are the modulus of elasticity for the plate E and the stiffeners Es, and the Poisson’s ratio ν. This paper takes steel as an example, with E set to 206,000 MPa and Poisson’s ratio set to 0.3. The modulus of elasticity of the stiffeners Es will change, as reflected by the stiffener-to-plate flexural stiffness ratio η according to Equation (13).

3.3. Load Properties

The combined action of bending and axial compression can be regarded as non-uniform compression [10,28]. As illustrated in Figure 4, the stress distribution under the combined action of bending and axial compression is characterized by the non-uniform compression distribution factor ζ, expressed by Equation (7). The factor ζ = 0.0 indicates uniform compression and ζ = 2.0 indicates pure bending; 0 < ζ < 2.0 indicates the combined action of bending and axial compression in different proportions.
It should be noted that the value range of ζ is [0.0, 2.0], which can represent any combination of bending and axial compression. However, when the stiffened plate is subjected to pure shear, bending and axial compression are absent, so ζ is undefined. To ensure the integrity and effectiveness of the dataset in machine learning, ζ is represented by a placeholder value of −1.0 in this case.
The relationship between bending and axial compression is conveniently expressed by the factor ζ. Here, a factor ψ is introduced to express the relationship between shear and axial compression, as indicated in Equation (24). The compression-to-shear ratio ψ = −1.0 indicates pure compression and ψ = 1.0 indicates pure shear; −1.0 < ψ < 1.0 indicates the combined action of shear and compression in different proportions.
$$\psi = \frac{\tau - \sigma_{\max}}{\tau + \sigma_{\max}} \tag{24}$$
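A minimal sketch of Equation (24), confirming the stated limiting cases:

```python
def psi(tau, sigma_max):
    """Compression-to-shear ratio, Eq. (24)."""
    return (tau - sigma_max) / (tau + sigma_max)

print(psi(100.0, 0.0))    # 1.0  -> pure shear
print(psi(0.0, 100.0))    # -1.0 -> pure compression
print(psi(100.0, 100.0))  # 0.0  -> equal shear and compression
```
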

4. Database

4.1. FE Setting and Verification

Compared with the stiffeners arranged on the tension diagonal, stiffeners arranged on the compression diagonal have a higher buckling load and have a more significant effect on the elastic buckling coefficient [14,29]. Therefore, the subsequent study mainly focused on the stiffened plate over the compression diagonal. Two types of models were used to study the four-edge simply supported plate with compression diagonal stiffener. All of the models were built in the finite element software ABAQUS. The corners of the plate were fixed to prevent movement and the out-of-plane displacements of the four edges were constrained to simulate simple support. Shear and non-uniform compression loads were applied at the edges of the plate.
The first type of stiffened plate model (called Type-1), which is simply supported along the diagonal, is shown in Figure 5a. Yonezawa [12] assumed that the stiffeners were rigid and could completely restrain the out-of-plane displacement of the diagonal. When simply supported edges are used to simulate rigid stiffeners, the resulting coefficient is called the critical buckling coefficient kcr0. The plate was modeled with the S4R shell element, which is suitable for analyzing the in-plane forces and out-of-plane deformation of thin plates. Table 2 shows the comparison between the finite element results and Equations (1)–(9). For example, for a diagonally stiffened plate with dimensions of 3000 mm × 3000 mm × 10 mm, the shear buckling stress calculated by finite element analysis is 68 MPa and the shear buckling coefficient is 32.9, which is consistent with the value of 32.9 calculated by Yonezawa’s Equation (8). The agreement between the finite element and equation results demonstrates the validity of the FE model.
The second type of model (called Type-2) is shown in Figure 5b, and it has closed cross-section stiffeners that are located diagonally on the plate. The dimensions of C-shaped stiffeners are 100 mm × 75 mm × 6 mm. The stiffeners were simulated by the S4R element. It is more elaborate than the Type-1 model. In this way, the flexural rigidity and torsional rigidity of the stiffeners in relation to the buckling coefficient kcr of the stiffened plate can be considered as well.

4.2. Datasets

Based on the model of Section 4.1, the finite element parameter analysis was carried out by changing the geometric size and other variables of the plate. The first type of model included 2136 sets of samples. The statistical characteristics of the dataset are displayed in Table 3. The aspect ratio γ, the non-uniform compression distribution factor ζ, and the compression-to-shear ratio ψ were the three input features of each sample. The second type of model contained 4198 sets of samples, as shown in Table 4. In addition to the above three features of the Type-1 model, the Type-2 model also includes the stiffener-to-plate flexural stiffness ratio η and the stiffener’s torsional-to-flexural stiffness ratio K, which has a total of five input features. Figure 6 displays the statistical summary of the input features contained in the database of Type-1 and Type-2 models.
Figure 7 shows the correlation matrix between the features and the target. A correlation value closer to 1.0 indicates a linear positive correlation between two features, a value closer to −1.0 indicates a linear negative correlation, and a value closer to 0.0 indicates either no linear correlation or a non-linear correlation. For the Type-1 model in Figure 7a, the correlation among the three input features was weak, while the correlation between the aspect ratio γ and the target kcr0 was strongly negative. For the Type-2 model in Figure 7b, the correlation coefficient between ζ and ψ was −0.63, indicating a strong correlation due to the large number of pure shear or uniform compression samples in the dataset, where ζ or ψ takes the constant value −1.0 or 1.0. The correlation between the aspect ratio γ and kcr is close to −0.5, indicating that the aspect ratio is negatively correlated with the buckling coefficient.
The dataset was divided into 70% training sets and 30% test sets. All features were standardized using Equation (25) to bring them to a uniform scale. ML algorithms were then used to train the model, and ten-fold cross-validation (CV10) with random hyperparameter search was used to optimize the predictive model. In 10-fold cross-validation, the original training set is repeatedly split into new training and validation sets; the ML model is trained over multiple cycles with different splits, and the results are averaged over the ten folds. The optimal hyperparameters were selected to obtain better performance and generalization. There are two main methods for hyperparameter optimization: grid search and random search. Grid search exhaustively explores all possible parameter combinations, with search time increasing exponentially with the number of parameters. Random search samples randomly in the appropriate parameter space and gradually approaches the optimal parameters.
$$x' = \frac{x - \mu}{\sigma} \tag{25}$$
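Equation (25) is standard z-score standardization; a minimal sketch (with illustrative values):

```python
import math

def standardize(values):
    """Z-score standardization, Eq. (25): x' = (x - mu) / sigma."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return [(v - mu) / sigma for v in values]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
xs = standardize(x)
print(round(sum(xs) / len(xs), 10))  # 0.0: zero mean after scaling
```
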

5. Prediction Results from ML Models

All established ML models were implemented in Python. Various machine learning models were trained in the same training set. The developed ML models were obtained by using a 10-fold cross-validation method and random hyperparameter search. Table 5 contains the hyperparameters of each ML algorithm. Figure 8 and Figure 9 demonstrate the ML model’s predictions for Type-1 and Type-2, and compare the prediction values with the finite element results on the training and test datasets.
In Figure 8 and Figure 9, the closer the points relating predicted values to actual values lie to the line y = x, the smaller the error between the predicted results and the actual values. The majority of the developed ML models produce accurate predictions for all datasets. However, compared to the other ML models, K-NN cannot predict the results accurately: it performed well on the training set but poorly on the test set, suggesting that it is overfitted.
Figure 10, Figure 11, Figure 12 and Figure 13 show the error frequency diagrams and error histogram distributions for each machine learning model. Different machine learning algorithms have different distributions of target predictions. DT and K-NN suffered from overfitting on the Type-1 training set and had large errors in the prediction results on the test set. ANN had larger errors on the Type-2 dataset. In contrast, ensemble learning algorithms such as XGBoost had smaller and more uniform errors on both the training and test sets.
Figure 14 illustrates U95, an uncertainty-analysis parameter indicating the 95 percent confidence level [21], for the different models. In Figure 14a, the DT and K-NN models have U95-train = 0 but U95-test = 5.88 and 20.05, respectively. The U95 of the other models ranged from 0 to 10; AdaBoost had the lowest U95-train (0.46) and XGBoost the lowest U95-test (2.86). In Figure 14b, the K-NN model's U95-test = 108.98 is significantly larger than that of the others. The U95 of the remaining models was under 20; AdaBoost had the lowest U95-train (2.55) and XGBoost the lowest U95-test (11.18).
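U95 is typically computed from the RMSE and the standard deviation of the prediction errors. The sketch below uses one common definition from the ML-prediction literature, U95 = 1.96·√(SD² + RMSE²); this form is an assumption here, and the exact formula of [21] should be consulted:

```python
import numpy as np

def u95(y_true, y_pred):
    """Expanded uncertainty at the 95% confidence level.

    Assumed common form: U95 = 1.96 * sqrt(SD^2 + RMSE^2),
    where SD is the standard deviation of the prediction errors.
    """
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))
    sd = np.std(err, ddof=1)
    return 1.96 * np.sqrt(sd ** 2 + rmse ** 2)

# A model that reproduces its targets exactly has U95 = 0, as observed
# for DT and K-NN on the Type-1 training set.
print(u95([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

Under this definition, U95 = 0 on a training set is itself a symptom of memorization, which is consistent with the large U95-test values of DT and K-NN.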
Table 6 and Table 7 provide evaluation metrics for the different machine learning models on the training set and test set. For kcr0 in Table 6, most of the ML models showed very high accuracy, with a CV10 greater than 0.97. However, K-NN performed worse than the other models, with a CV10 of only 0.817. XGBoost and ANN (with hidden_layer_sizes 7,7,8,1) were the two models with the best performance metrics and generalization, with CV10 scores of 0.997 and 0.996 on the training set, respectively. In particular, for the XGBoost model's predictions, the RMSE was 1.323, the MAE was 0.343, the MAPE was 0.013%, and the adjusted R2 was 0.997.
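The metrics quoted here follow the formulas of Table 1 and can be computed directly; a self-contained sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def metrics(y_true, y_pred, n_features):
    """RMSE, MAE, MAPE (%), and adjusted R^2 per Equations (15)-(19)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    # Adjusted R^2 penalizes the number of input features p
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)
    return rmse, mae, mape, adj_r2

rmse, mae, mape, adj_r2 = metrics(
    [10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
    [11.0, 19.0, 30.0, 41.0, 49.0, 61.0],
    n_features=3,
)
print(rmse, mae, mape, adj_r2)
```

The adjusted R2 correction matters when comparing the Type-1 model (3 features) with the Type-2 model (5 features), since plain R2 never decreases as features are added.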
For kcr in Table 7, K-NN was still the worst-performing model, while the other ML models had a CV10 above 0.99. The XGBoost model again showed the best performance, with a score of 0.998 for CV10 and 0.997 for adjusted R2. The performance of the different ML algorithms varied across datasets. The K-NN algorithm depends more heavily on the data and is more sensitive to feature information than the other algorithms, because its prediction relies on a limited number of surrounding samples; sample imbalance can therefore significantly degrade its performance. K-NN regression predicts mainly with the mean or weighted mean of the neighbors' targets. For example, when the dataset was split into two subsets with aspect ratio greater than 1.0 and less than 1.0, K-NN's accuracy on the two subsets improved significantly, with a CV10 score of 0.91, though this was still lower than the other ML models.
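K-NN regression's dependence on the surrounding samples can be seen in a minimal scikit-learn sketch contrasting the plain mean and the distance-weighted mean mentioned above (toy one-dimensional data, for illustration only):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])

# Plain mean of the k nearest targets
knn_mean = KNeighborsRegressor(n_neighbors=2, weights="uniform").fit(X, y)
# Inverse-distance-weighted mean: closer neighbors count more
knn_wmean = KNeighborsRegressor(n_neighbors=2, weights="distance").fit(X, y)

query = np.array([[0.25]])
print(knn_mean.predict(query))   # mean of targets at x=0 and x=1 -> 0.5
print(knn_wmean.predict(query))  # pulled toward the closer neighbor at x=0 -> 0.25
```

Because both variants average only a handful of neighbors, a region of feature space that is sparsely or unevenly sampled (as after splitting on aspect ratio) directly changes the prediction, which single trees and ensembles are less sensitive to.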
The performance metrics of the ensemble learning models based on bagging (such as RF) and boosting (such as AdaBoost, LightGBM, XGBoost, and CatBoost) were better than those of single learners (such as DT and ANN). This is the advantage of ensemble learning: when base learners or weak learners are combined correctly, a more accurate and robust learner can be obtained. In terms of sample size, the total sample sizes of the Type-1 and Type-2 models were 2136 and 4198, respectively. The performance metrics on the dataset with the larger sample size were generally better, indicating that a large amount of appropriate sample information is conducive to higher prediction accuracy of the ML model. As can be seen from the rankings, XGBoost had the best combined ranking (lower is best) on the training and test sets of both Type-1 and Type-2. In particular, the XGBoost model showed the best generalization performance on the test set.
The Taylor diagrams are shown in Figure 15, where the scatter points represent the different machine learning models, the horizontal and vertical axes the standard deviations, the radial lines the correlation coefficients, and the green dashed lines the root mean square differences. For the training sets of Type-1 and Type-2, all ML models clustered closely and showed good accuracy, with correlation coefficients ranging from 0.99 to 1.0. Except for K-NN, the models also performed well on the test set. Although K-NN had very high prediction accuracy on the training set, it performed poorly on the test set, and its generalization ability differed markedly from that of the other models.
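The three quantities plotted in a Taylor diagram (standard deviation, correlation coefficient, and centered RMS difference) are linked by a law-of-cosines identity, E′² = σ_ref² + σ_model² − 2·σ_ref·σ_model·R, which is why a single point can encode all three. A short sketch verifying the identity on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.normal(size=500)                      # "observed" (FE) values
model = ref + rng.normal(scale=0.3, size=500)   # "predicted" values

sigma_ref, sigma_model = np.std(ref), np.std(model)
r = np.corrcoef(ref, model)[0, 1]               # correlation coefficient R

# Centered RMS difference E': remove the means before differencing
e_prime = np.sqrt(np.mean(((model - model.mean()) - (ref - ref.mean())) ** 2))

lhs = e_prime ** 2
rhs = sigma_ref ** 2 + sigma_model ** 2 - 2 * sigma_ref * sigma_model * r
print(abs(lhs - rhs) < 1e-9)  # True: the Taylor-diagram identity holds
```

A point near the reference arc with R close to 1 and small E′ therefore corresponds to a model whose predictions track the finite element results closely, which is how the clustering in Figure 15 should be read.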

6. ML Model Explanations

Machine learning achieves high accuracy and precision in regression and classification tasks. However, the learning process is often difficult to explain and provides no mechanism or reason for a prediction, which is why it is referred to as a "black box". This makes it difficult for researchers to analyze how the features of a given sample affect the final output. Establishing a relationship between the features and the predicted values would improve the ML model's credibility.
The SHAP method derives from coalitional game theory, where the Shapley value allocates the total payout among players according to each player's contribution. SHAP is an additive interpretation model inspired by the Shapley value. SHAP values have the advantage of not only reflecting the influence of the features in each sample but also showing whether that influence is positive or negative. In machine learning, the SHAP value represents each feature's contribution to the predicted value.
According to the performance evaluation in Section 5, XGBoost achieved the best R2 and CV10 among the eight machine learning algorithms and showed the best learning and prediction ability on the datasets of both model types. Based on the XGBoost model's results, this section uses the SHAP method to explain the correlation between the input features and the output values.

6.1. Global Explanations

Figure 16 illustrates the SHAP values of global feature importance, where the importance of each feature is taken as the mean absolute SHAP value of that feature over all samples. Among the three features in Figure 16a, the aspect ratio γ had the greatest effect on the critical buckling coefficient of the diagonally stiffened plate, as expected; its SHAP value was significantly higher than those of the non-uniform compression distribution factor ζ and the compression-to-shear ratio ψ. The Type-2 model has two more features than the Type-1 model: the flexural rigidity and torsional rigidity of the stiffeners. Among the five features of the Type-2 model, the aspect ratio γ remains the most important feature affecting the critical buckling coefficient, with a SHAP value significantly higher than those of the other four features. The stiffener's torsional-to-flexural stiffness ratio K had the lowest SHAP importance among the five features.
Each point on the SHAP summary plots in Figure 17 represents the SHAP value of one sample. The color of each dot, from blue (low) to red (high), represents the corresponding feature value. The SHAP value represents the positive or negative correlation between the feature and the output. For example, a low γ (blue) has a high positive SHAP value in Figure 17a, whereas a high γ (red) has a negative SHAP value. The trend of ζ is opposite to that of γ, indicating that axial compression reduces the buckling coefficient while bending increases it. In Figure 17b, the higher the η and K values, the greater the positive SHAP values: increasing the stiffness of the stiffener indeed increases the buckling load of the diagonally stiffened plate.

6.2. Feature Dependence

Figure 18 and Figure 19 are the SHAP dependence plots, which make it easy to see the interaction of two features on the SHAP values. The primary feature's values are given on the X-axis and the corresponding SHAP values on the left Y-axis. The interaction with a secondary feature is displayed in color, keyed to the right Y-axis: blue dots denote low secondary-feature values and red dots high values. Notably, samples with the same primary-feature value on the X-axis can have different SHAP values, because the feature interacts with other features. Moreover, a feature dependence plot explains how the ML model works, not necessarily how reality works.
In Figure 18a, there is a negative correlation between γ and the SHAP value. The SHAP value was positive when γ was less than 1.0, and the smaller γ, the higher the SHAP value, i.e., the greater the positive contribution. Conversely, the SHAP value was negative when γ exceeded 1.0 and decreased with increasing γ, indicating a larger negative contribution. Increasing the aspect ratio therefore reduces the buckling coefficient of the diagonally stiffened plate.
As stated in Section 3.3, ζ was assigned the value −1.0 when it does not apply. Excluding ζ = −1.0, the relationship between ζ and the SHAP value in Figure 18b is therefore a positive correlation: the SHAP value increases with ζ. The SHAP value was below 0 when ζ was less than 1.0, indicating a negative contribution. In other words, when the uniform compressive stress was greater than the bending stress, the buckling coefficient of the stiffened plate decreased. Axial compression is thus disadvantageous to the buckling coefficient, while bending is beneficial.
In Figure 18c, the relationship between ψ and the SHAP value is U-shaped: the SHAP value first decreased and then increased with increasing ψ, turning from positive to negative when ψ exceeded −0.5 and reaching its largest negative contribution at ψ = 0.0. Moreover, the coloring shows that the SHAP value corresponding to ψ is small when the aspect ratio γ is high (red) and takes large positive or negative values when γ is low (blue).
The dependence graphs of the first three features (γ, ζ, and ψ) in Figure 19 are the same as those in Figure 18. In addition, the trend of the feature dependence graph of η matches that of K: their SHAP values increased with increasing feature values. When η was greater than 20, its SHAP value was positive, as illustrated in Figure 19d; similarly, the SHAP value of K was positive when K exceeded 0.4, as shown in Figure 19e. These two features quantify the contribution of the flexural and torsional rigidity of the stiffeners to the buckling coefficient. To improve the buckling coefficient of a diagonally stiffened plate under combined compression–bending–shear, it is recommended that K be no lower than 0.4 and η no lower than 20.

6.3. Individual Explanation

Figure 20 displays the explanation of individual sample predictions, using two samples as examples: one from Type-1 and one from Type-2. Figure 20a shows a sample with input features γ = 1.0, ζ = −1.0, and ψ = 1.0; Figure 20b shows a sample with input features γ = 0.6, ζ = 0.0, ψ = 0.818, η = 40.0, and K = 0.24. Features with red arrows increase the prediction, whereas those with blue arrows decrease it. The base value is the mean prediction of the ML model on the training set; if no information about the input features is available, the base value is the output. For the first sample, in Figure 20a, the base value was 18.17 and the predicted buckling coefficient was 30.65. The feature ψ = 1.0 had the largest effect on increasing the buckling coefficient, with a SHAP value of 10.40, followed by γ = 1.0 with a SHAP value of 2.96, whereas ζ = −1.0 made a negative contribution with a SHAP value of −0.88. For this sample, the final prediction kcr0 is the base value plus the sum of the SHAP values of the three input features. For the second sample, in Figure 20b, the base value was 58.40; ψ = 0.818 and η = 40.0 increased the prediction by 13.73 and 6.00, respectively, while γ = 0.6, ζ = 0.0, and K = 0.24 decreased it by 8.56, 7.59, and 4.51, respectively. The predicted buckling coefficient kcr of this sample is 57.47.
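The additivity property underlying Figure 20 — prediction = base value + sum of the per-feature SHAP values — can be checked directly against the numbers reported above:

```python
# SHAP additivity: prediction = base value + sum of feature contributions.
# The values below are those reported for the two samples in Figure 20.

# Type-1 sample: gamma = 1.0, zeta = -1.0, psi = 1.0
base_1 = 18.17
contributions_1 = {"psi": 10.40, "gamma": 2.96, "zeta": -0.88}
k_cr0 = base_1 + sum(contributions_1.values())
print(round(k_cr0, 2))  # 30.65, the predicted buckling coefficient

# Type-2 sample: gamma = 0.6, zeta = 0.0, psi = 0.818, eta = 40.0, K = 0.24
base_2 = 58.40
contributions_2 = {"psi": 13.73, "eta": 6.00, "gamma": -8.56, "zeta": -7.59, "K": -4.51}
k_cr = base_2 + sum(contributions_2.values())
print(round(k_cr, 2))  # 57.47
```

Both reported predictions are reproduced exactly, which is the defining property of an additive explanation model.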

7. Conclusions

Under the combined compression, bending, and shear loads, the buckling mechanism of diagonally stiffened plates is complex. To solve the problem of calculating the elastic buckling coefficient of the diagonally stiffened plate, data-driven machine-learning models were proposed. With and without considering the flexure and torsional rigidity of stiffeners, prediction models for predicting the diagonally stiffened plates’ elastic buckling coefficient under combined compression–bending–shear action were developed by using the ML algorithms in Python, including the decision tree (DT), random forest (RF), k-nearest neighbor (K-NN), artificial neural network (ANN), adaptive boosting (AdaBoost), light gradient boosting machine (LightGBM), extreme gradient boosting (XGBoost), and categorical boosting (CatBoost). The main conclusions are as follows:
(1) The ML models were developed using 10-fold cross-validation and random hyperparameter search. These models demonstrated excellent accuracy on both the training and test sets. In particular, the ensemble learning algorithms (bagging and boosting) performed significantly better than a single learner (DT). ANN also performed well after an appropriate number of hidden layers and neurons was selected. However, due to its algorithmic characteristics, the K-NN algorithm performed worse on the test set than the other machine learning algorithms, indicating overfitting. Among all models, XGBoost was the best, with CV10 as high as 0.996 and 0.997 on the two datasets;
(2) The XGBoost model was further analyzed using the SHAP method to explain the correlation between the features and the prediction target. The effects of the main features on the buckling coefficient of diagonally stiffened plates under combined action were ranked as follows: the plate's aspect ratio γ, the compression-to-shear ratio ψ, the non-uniform compression distribution factor ζ, the stiffener-to-plate flexural stiffness ratio η, and the stiffener's torsional-to-flexural stiffness ratio K. The correlations between these features and the prediction results differed in direction. γ was negatively correlated with the buckling coefficient: as γ increases, the buckling coefficient decreases. In contrast, ζ, η, and K were positively correlated with the buckling coefficient, meaning that increasing these values increases it. The correlation between ψ and the buckling coefficient was U-shaped;
(3) Furthermore, when the plate's aspect ratio γ is small, the other features reach their maximum and minimum SHAP values under the interaction with γ, meaning that their positive or negative contribution to the buckling coefficient of the diagonally stiffened plate is largest. When γ is larger, the SHAP values of the other features change little, meaning that their contribution to the buckling coefficient is limited;
(4) The flexural and torsional rigidity of stiffeners have a certain influence on the buckling coefficient of diagonally stiffened plates. Increasing these values can improve the buckling coefficient. Based on machine learning and SHAP interpretation analysis, suggested values for the flexural and torsional rigidity of stiffeners can be provided. It is recommended that the stiffener-to-plate flexural stiffness ratio η be no less than 20 and that the stiffener’s torsional-to-flexural stiffness ratio K be no less than 0.4. This will be beneficial for improving the buckling capacity of diagonally stiffened plates.

Author Contributions

Conceptualization, Y.Y.; data curation, Y.Y.; formal analysis, Y.Y.; funding acquisition, Y.Y. and X.G.; methodology, Y.Y. and X.G.; project administration, Y.Y. and Z.M.; software, Y.Y.; supervision, Z.M.; writing—original draft, Y.Y.; writing—review and editing, Z.M. and X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by FUNDAMENTAL RESEARCH FUNDS FOR THE CENTRAL UNIVERSITIES, grant number FRF-TP-22-117A1; NATURAL SCIENCE FOUNDATION OF HEBEI PROVINCE, grant number E2021202111.

Data Availability Statement

All data supporting the results are provided within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lima, J.P.S.; Cunha, M.L.; dos Santos, E.D.; Rocha, L.A.O.; Real, M.D.V.; Isoldi, L.A. Constructal Design for the ultimate buckling stress improvement of stiffened plates submitted to uniaxial compressive load. Eng. Struct. 2020, 203, 109883. [Google Scholar] [CrossRef]
  2. Zhou, W.; Li, Y.; Shi, Z.; Lin, J. An analytical solution for elastic buckling analysis of stiffened panel subjected to pure bending. Int. J. Mech. Sci. 2019, 161–162, 105024. [Google Scholar] [CrossRef]
  3. Deng, J.; Wang, X.; Yuan, Z.; Zhou, G. Novel quadrature element formulation for simultaneous local and global buckling analysis of eccentrically stiffened plates. Aerosp. Sci. Technol. 2019, 87, 154–166. [Google Scholar] [CrossRef]
  4. Kimura, Y.; Fujak, S.M.; Suzuki, A. Elastic local buckling strength of I-beam cantilevers subjected to bending moment and shear force based on flange–web interaction. Thin-Walled Struct. 2021, 162, 107633. [Google Scholar] [CrossRef]
  5. Biscaya, A.; Pedro, J.J.O.; Kuhlmann, U. Experimental behaviour of longitudinally stiffened steel plate girders under combined bending, shear and compression. Eng. Struct. 2021, 238, 112139. [Google Scholar] [CrossRef]
  6. Chen, X.; Real, E.; Yuan, H.; Du, X. Design of welded stainless steel I-shaped members subjected to shear. Thin-Walled Struct. 2020, 146, 106465. [Google Scholar] [CrossRef]
  7. Jáger, B.; Kövesdi, B.; Dunai, L. Bending and shear buckling interaction behaviour of I-girders with longitudinally stiffened webs. J. Constr. Steel Res. 2018, 145, 504–517. [Google Scholar] [CrossRef]
  8. Prato, A.; Al-Saymaree, M.; Featherston, C.; Kennedy, D. Buckling and post-buckling of thin-walled stiffened panels: Modelling imperfections and joints. Thin-Walled Struct. 2022, 172, 108938. [Google Scholar] [CrossRef]
  9. Wu, Y.; Fan, S.; Zhou, H.; Guo, Y.; Wu, Q. Cyclic behaviour of diagonally stiffened stainless steel plate shear walls with two-side connections: Experiment, simulation and design. Eng. Struct. 2022, 268, 114756. [Google Scholar] [CrossRef]
  10. Timoshenko, S.P.; Gere, J.M. Theory of Elastic Stability, 2nd ed.; McGraw-Hill Book Company, Inc.: New York, NY, USA, 1961. [Google Scholar]
  11. Mikami, I.; Matsushita, S.; Nakahara, H.; Yonezawa, H. Buckling of plate girder webs with diagonal stiffener. Proc. Jpn. Soc. Civ. Eng. 1971, 1971, 45–54. [Google Scholar] [CrossRef]
  12. Yonezawa, H.; Mikami, I.; Dogaki, M.; Uno, H. Shear strength of plate girders with diagonally stiffened webs. Proc. Jpn. Soc. Civ. Eng. 1978, 269, 17–27. [Google Scholar] [CrossRef]
  13. Yuan, H.; Chen, X.; Theofanous, M.; Wu, Y.; Cao, T.; Du, X. Shear behaviour and design of diagonally stiffened stainless steel plate girders. J. Constr. Steel Res. 2019, 153, 588–602. [Google Scholar] [CrossRef]
  14. Martins, J.; Cardoso, H. Elastic shear buckling coefficients for diagonally stiffened webs. Thin-Walled Struct. 2022, 171, 108657. [Google Scholar] [CrossRef]
  15. Zhang, H.; Cheng, X.; Li, Y.; Du, X. Prediction of failure modes, strength, and deformation capacity of RC shear walls through machine learning. J. Build. Eng. 2022, 50, 104145. [Google Scholar] [CrossRef]
  16. Bin Kabir, A.; Hasan, A.S.; Billah, A.M. Failure mode identification of column base plate connection using data-driven machine learning techniques. Eng. Struct. 2021, 240, 112389. [Google Scholar] [CrossRef]
  17. Thai, H.-T. Machine learning for structural engineering: A state-of-the-art review. Structures 2022, 38, 448–491. [Google Scholar] [CrossRef]
  18. Truong, G.T.; Hwang, H.-J.; Kim, C.-S. Assessment of punching shear strength of FRP-RC slab-column connections using machine learning algorithms. Eng. Struct. 2022, 255, 113898. [Google Scholar] [CrossRef]
  19. Vu, Q.-V.; Truong, V.-H.; Thai, H.-T. Machine learning-based prediction of CFST columns using gradient tree boosting algorithm. Compos. Struct. 2020, 259, 113505. [Google Scholar] [CrossRef]
  20. Degtyarev, V.V.; Tsavdaridis, K.D. Buckling and ultimate load prediction models for perforated steel beams using machine learning algorithms. J. Build. Eng. 2022, 51, 104316. [Google Scholar] [CrossRef]
  21. Shi, X.; Yu, X.; Esmaeili-Falak, M. Improved arithmetic optimization algorithm and its application to carbon fiber reinforced polymer-steel bond strength estimation. Compos. Struct. 2023, 306, 116599. [Google Scholar] [CrossRef]
  22. Zhu, Y.R.; Huang, L.H.; Zhang, Z.J.; Behzad, B. Estimation of splitting tensile strength of modified recycled aggregate concrete using hybrid algorithms. Steel Compos. Struct. 2022, 44, 389–406. [Google Scholar]
  23. Esmaeili-Falak, M.; Benemaran, R.S. Ensemble deep learning-based models to predict the resilient modulus of modified base materials subjected to wet-dry cycles. Geomech. Eng. 2023, 32, 583–600. [Google Scholar]
  24. Wang, C.; Song, L.-H.; Fan, J.-S. End-to-End Structural analysis in civil engineering based on deep learning. Autom. Constr. 2022, 138, 104255. [Google Scholar] [CrossRef]
  25. Lundberg, S.; Lee, S. A unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4765–4774. [Google Scholar]
  26. Mangalathu, S.; Hwang, S.-H.; Jeon, J.-S. Failure mode and effects analysis of RC members based on machine-learning-based SHapley Additive exPlanations (SHAP) approach. Eng. Struct. 2020, 219, 110927. [Google Scholar] [CrossRef]
  27. Bleich, F. Buckling Strength of Metal Structures; Engineering Societies Monographs; MacGraw-Hill: New York, NY, USA, 1952. [Google Scholar]
  28. Tong, G. Out-of-Plane Stability of Steel Structures; Architecture & Building Press: Beijing, China, 2007. (In Chinese) [Google Scholar]
  29. Yang, Y.; Mu, Z.; Zhu, B. Numerical Study on Elastic Buckling Behavior of Diagonally Stiffened Steel Plate Walls under Combined Shear and Non-Uniform Compression. Metals 2022, 12, 600. [Google Scholar] [CrossRef]
  30. Adeli, H. Neural Networks in Civil Engineering: 1989–2000. Comput.-Aided Civ. Infrastruct. Eng. 2001, 16, 126–142. [Google Scholar] [CrossRef]
  31. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  32. Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
  33. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 3149–3157. [Google Scholar]
  34. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  35. Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; pp. 6639–6649. [Google Scholar]
  36. Feng, D.-C.; Wang, W.-J.; Mangalathu, S.; Taciroglu, E. Interpretable XGBoost-SHAP Machine-Learning Model for Shear Strength Prediction of Squat RC Walls. J. Struct. Eng. 2021, 147, 4021173. [Google Scholar] [CrossRef]
Figure 1. Diagonally stiffened plate with closed cross-section stiffeners.
Figure 2. Variation of shear buckling coefficient and buckling mode with η for a diagonally stiffened plate [29]. (a) curve of the compressive type; (b) curve of the tensile type; (c) buckling mode of the compressive type; (d) buckling mode of the tensile type.
Figure 3. Machine learning architecture.
Figure 4. Shear and non-uniform compression.
Figure 5. FE model. (a) Type-1 model; (b) Type-2 model.
Figure 6. Distributions of variables of the dataset. (a) Distributions of variables of the Type-1 training set; (b) Distributions of variables of the Type-1 test set; (c) Distributions of variables of the Type-2 training set; (d) Distributions of variables of the Type-2 test set.
Figure 7. Correlation matrix heat map of features for the buckling coefficient dataset. (a) Correlation matrix of the Type-1 dataset; (b) Correlation matrix of the Type-2 dataset.
Figure 8. Performance of ML models for predicting kcr0.
Figure 9. Performance of ML models for predicting kcr.
Figure 10. Error histogram distribution for each ML model for kcr0.
Figure 11. Frequency diagram for each ML model for kcr0.
Figure 12. Error histogram distribution for each ML model for kcr.
Figure 13. Frequency diagram for each ML model for kcr.
Figure 14. The U95 of machine learning models for kcr0 and kcr. (a) U95 of models for the Type-1 dataset; (b) U95 of models for the Type-2 dataset.
Figure 15. Taylor diagram of training and testing data for kcr0 and kcr. (a) Taylor diagram of training data for kcr0; (b) Taylor diagram of testing data for kcr0; (c) Taylor diagram of training data for kcr; (d) Taylor diagram of testing data for kcr.
Figure 16. SHAP feature importance. (a) Type-1 model; (b) Type-2 model.
Figure 17. Distribution of SHAP values for XGBoost models. (a) Type-1 model; (b) Type-2 model.
Figure 18. Feature dependence for the Type-1 model.
Figure 19. Feature dependence for the Type-2 model.
Figure 20. Individual explanation by SHAP. (a) a sample of the Type-1 model; (b) a sample of the Type-2 model.
Table 1. Evaluation metrics for machine learning models.

| Metric | Formula | Eq. No. | Criterion |
|---|---|---|---|
| Root mean squared error (RMSE) | $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}$ | (15) | Lower is best |
| Mean absolute error (MAE) | $MAE = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i-\hat{y}_i\rvert$ | (16) | Lower is best |
| Mean absolute percentage error (MAPE) | $MAPE = \frac{100\%}{n}\sum_{i=1}^{n}\left\lvert\frac{y_i-\hat{y}_i}{y_i}\right\rvert$ | (17) | Lower is best |
| Coefficient of determination (R2) | $R^2 = 1-\frac{\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2}$ | (18) | Higher is best |
| Adjusted R2 | $Adj.\,R^2 = 1-(1-R^2)\frac{n-1}{n-p-1}$ | (19) | Higher is best |
| Performance index (PI) | $PI = \frac{1}{\bar{y}}\cdot\frac{RMSE}{R^2+1}$ | (20) | Lower is best |
| Variance account for (VAF) | $VAF = \left(1-\frac{\operatorname{var}(y_i-\hat{y}_i)}{\operatorname{var}(y_i)}\right)\times 100$ | (21) | Higher is best |
| Scatter index (SI) | $SI = \frac{RMSE}{\bar{y}}$ | (22) | Lower is best |

where n is the number of samples, p is the number of input features, y_i is the target value, ȳ is the mean target value, and ŷ_i is the predicted value.
Table 2. Verification of finite element results.

| Force Condition | Type | ζ | ψ | σFEA/MPa | σTH/MPa | σTH/σFEA |
|---|---|---|---|---|---|---|
| Shear | Unstiffened | −1.0 | 1.0 | 19.30 | 19.32 | 1.001 |
| Axial compression | Unstiffened | 0.0 | −1.0 | 8.12 | 8.27 | 1.018 |
| Non-uniform compression | Unstiffened | 1.0 | −1.0 | 16.05 | 16.12 | 1.004 |
| Bending | Unstiffened | 2.0 | −1.0 | 52.83 | 52.90 | 1.001 |
| Combined shear and non-uniform compression | Unstiffened | 1.0 | −1.0 | 15.99 | 16.16 | 1.011 |
| Combined shear and non-uniform compression | Unstiffened | 1.0 | −0.33 | 14.37 | 14.43 | 1.004 |
| Shear (compression) | Diagonally stiffened | −1.0 | 1.0 | 63.6 | 68.0 | 1.069 |
| Shear (tension) | Diagonally stiffened | −1.0 | 1.0 | 23.5 | 23.8 | 1.013 |
| Bending | Diagonally stiffened | 2.0 | −1.0 | 64.5 | 63.1 | 0.978 |
Table 3. Statistical characteristics of features for the Type-1 dataset.

| Input Feature | Min | Max | Mean | Standard Deviation |
|---|---|---|---|---|
| Aspect ratio γ | 0.300 | 4.000 | 2.160 | 1.088 |
| Non-uniform compression distribution factor ζ | −1.000 | 2.000 | 0.960 | 0.749 |
| Compression-to-shear ratio ψ | −1.000 | 1.000 | −0.076 | 0.552 |
| Elastic buckling coefficient kcr0 | 3.422 | 285.257 | 17.766 | 24.839 |
Table 4. Statistical characteristics of features for the Type-2 dataset.

| Input Feature | Min | Max | Mean | Standard Deviation |
|---|---|---|---|---|
| Aspect ratio γ | 0.200 | 5.000 | 0.995 | 0.730 |
| Non-uniform compression distribution factor ζ | −1.000 | 2.000 | 0.416 | 1.078 |
| Compression-to-shear ratio ψ | −1.000 | 1.000 | 0.047 | 0.793 |
| Stiffener-to-plate flexural stiffness ratio η | 1.000 | 70.000 | 27.588 | 20.542 |
| Stiffener's torsional-to-flexural stiffness ratio K | 0.192 | 0.879 | 0.451 | 0.247 |
| Elastic buckling coefficient kcr | 2.013 | 676.313 | 58.551 | 80.428 |
Table 5. Hyperparameter optimization of machine learning models.

| ML Algorithm | Hyperparameter | Type-1 | Type-2 |
|---|---|---|---|
| DT | min_samples_split | 2 | 2 |
|  | max_depth | 18 | 18 |
| K-NN | n_neighbors | 4 | 16 |
|  | leaf_size | 37 | 31 |
|  | algorithm | brute | ball_tree |
|  | p | 2 | 1 |
| RF | n_estimators | 136 | 136 |
|  | max_depth | 100 | 100 |
| AdaBoost | n_estimators | 110 | 110 |
|  | learning_rate | 0.6 | 0.6 |
|  | loss | square | square |
| XGBoost | n_estimators | 131 | 131 |
|  | max_depth | 7 | 7 |
|  | learning_rate | 0.1 | 0.1 |
|  | min_child_weight | 2 | 2 |
|  | subsample | 0.6 | 0.6 |
| LightGBM | n_estimators | 167 | 110 |
|  | max_depth | 4 | 6 |
|  | learning_rate | 0.15 | 0.19 |
|  | subsample | 0.352 | 0.226 |
| CatBoost | n_estimators | 139 | 139 |
|  | max_depth | 5 | 5 |
|  | learning_rate | 0.16 | 0.16 |
|  | subsample | 0.226 | 0.226 |
| ANN | hidden_layer_sizes | 6,7,1 / 7,7,8,1 | 6,7,1 / 7,8,5,1 |
|  | solver | lbfgs | lbfgs |
|  | activation | relu | relu |
Table 6. Performance metrics of ML models for kcr0.

Training data

| ML Algorithm | RMSE (Rank) | MAE (Rank) | MAPE/% (Rank) | R² (Rank) | Adj. R² (Rank) | PI (Rank) | VAF (Rank) | SI (Rank) | CV10 (Rank) | Total Rank |
|---|---|---|---|---|---|---|---|---|---|---|
| DT | 0.000 (1) | 0.000 (1) | 0.000 (1) | 1.000 (1) | 1.000 (1) | 0.000 (1) | 100.000 (1) | 0.000 (1) | 0.976 (7) | 15 |
| K-NN | 0.000 (1) | 0.000 (1) | 0.000 (1) | 1.000 (1) | 1.000 (1) | 0.000 (1) | 100.000 (1) | 0.000 (1) | 0.817 (9) | 17 |
| RF | 1.104 (7) | 0.315 (6) | 0.010 (4) | 0.998 (7) | 0.998 (7) | 0.043 (7) | 99.816 (7) | 0.061 (7) | 0.989 (4) | 56 |
| AdaBoost | 0.196 (3) | 0.097 (3) | 0.012 (5) | 1.000 (1) | 1.000 (1) | 0.007 (3) | 99.995 (3) | 0.010 (3) | 0.982 (6) | 28 |
| XGBoost | 0.269 (4) | 0.122 (4) | 0.008 (3) | 1.000 (1) | 1.000 (1) | 0.011 (4) | 99.989 (4) | 0.015 (4) | 0.997 (1) | 26 |
| LightGBM | 3.183 (9) | 0.980 (8) | 0.044 (8) | 0.985 (9) | 0.985 (9) | 0.130 (9) | 98.368 (9) | 0.182 (9) | 0.971 (8) | 78 |
| CatBoost | 0.520 (5) | 0.302 (5) | 0.021 (6) | 1.000 (1) | 1.000 (1) | 0.020 (5) | 99.959 (5) | 0.029 (5) | 0.997 (1) | 34 |
| ANN (6,7,1) | 2.026 (8) | 1.051 (9) | 0.069 (9) | 0.994 (8) | 0.994 (8) | 0.080 (8) | 99.377 (8) | 0.112 (8) | 0.987 (5) | 71 |
| ANN (7,7,8,1) | 0.955 (6) | 0.602 (7) | 0.043 (7) | 0.999 (6) | 0.999 (6) | 0.038 (6) | 99.862 (6) | 0.053 (6) | 0.996 (3) | 53 |

Testing data

| ML Algorithm | RMSE (Rank) | MAE (Rank) | MAPE/% (Rank) | R² (Rank) | Adj. R² (Rank) | PI (Rank) | VAF (Rank) | SI (Rank) | Total Rank |
|---|---|---|---|---|---|---|---|---|---|
| DT | 2.733 (6) | 1.001 (6) | 0.040 (4) | 0.986 (5) | 0.985 (6) | 0.110 (6) | 98.631 (6) | 0.155 (6) | 45 |
| K-NN | 9.272 (9) | 2.584 (9) | 0.055 (8) | 0.834 (9) | 0.833 (9) | 0.414 (9) | 83.419 (9) | 0.540 (9) | 71 |
| RF | 2.757 (7) | 0.803 (4) | 0.026 (3) | 0.985 (7) | 0.985 (6) | 0.114 (7) | 98.529 (7) | 0.160 (7) | 48 |
| AdaBoost | 2.665 (5) | 0.951 (5) | 0.041 (5) | 0.986 (5) | 0.986 (5) | 0.109 (5) | 98.671 (5) | 0.153 (5) | 40 |
| XGBoost | 1.323 (1) | 0.343 (1) | 0.013 (1) | 0.997 (1) | 0.997 (1) | 0.055 (1) | 99.664 (1) | 0.077 (1) | 8 |
| LightGBM | 4.332 (8) | 1.197 (8) | 0.048 (7) | 0.964 (8) | 0.964 (8) | 0.149 (8) | 97.540 (8) | 0.208 (8) | 63 |
| CatBoost | 1.573 (3) | 0.468 (2) | 0.024 (2) | 0.995 (3) | 0.995 (3) | 0.065 (3) | 99.525 (3) | 0.092 (3) | 22 |
| ANN (6,7,1) | 2.603 (4) | 1.176 (7) | 0.067 (9) | 0.987 (4) | 0.987 (4) | 0.108 (4) | 98.689 (4) | 0.151 (4) | 40 |
| ANN (7,7,8,1) | 1.394 (2) | 0.725 (3) | 0.045 (6) | 0.996 (2) | 0.996 (2) | 0.057 (2) | 99.624 (2) | 0.081 (2) | 21 |
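Several of the ranking metrics in Tables 6 and 7 can be computed from predictions directly. The sketch below assumes the standard textbook definitions of RMSE, MAE, MAPE, R², VAF (variance accounted for), and SI (scatter index, RMSE normalized by the target mean); the paper's PI and CV10 columns depend on additional choices (performance-index formula, cross-validation folds) and are not reproduced here. Function and variable names are illustrative.

```python
import math

def regression_metrics(y_true, y_pred):
    """Assumed standard definitions of the evaluation metrics.

    Targets must be nonzero for MAPE to be defined.
    """
    n = len(y_true)
    mean_t = sum(y_true) / n
    resid = [t - p for t, p in zip(y_true, y_pred)]

    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mae = sum(abs(r) for r in resid) / n
    mape = 100.0 * sum(abs(r / t) for r, t in zip(resid, y_true)) / n

    ss_res = sum(r * r for r in resid)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot

    # VAF: fraction of target variance explained by the predictions, in %.
    mean_r = sum(resid) / n
    var_resid = sum((r - mean_r) ** 2 for r in resid) / n
    var_t = ss_tot / n
    vaf = 100.0 * (1.0 - var_resid / var_t)

    si = rmse / mean_t  # scatter index

    return {"RMSE": rmse, "MAE": mae, "MAPE": mape,
            "R2": r2, "VAF": vaf, "SI": si}
```

A perfect predictor returns RMSE = 0, R² = 1, and VAF = 100, matching the DT and K-NN training rows in Table 6; ranking the models then amounts to summing each model's per-metric rank into the Total Rank column.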
Table 7. Performance metrics of ML models for kcr.

Training data

| ML Algorithm | RMSE (Rank) | MAE (Rank) | MAPE/% (Rank) | R² (Rank) | Adj. R² (Rank) | PI (Rank) | VAF (Rank) | SI (Rank) | CV10 (Rank) | Total Rank |
|---|---|---|---|---|---|---|---|---|---|---|
| DT | 0.656 (2) | 0.141 (2) | 0.003 (2) | 1.000 (1) | 1.000 (1) | 0.008 (2) | 99.993 (2) | 0.011 (2) | 0.984 (8) | 22 |
| K-NN | 0.531 (1) | 0.089 (1) | 0.002 (1) | 1.000 (1) | 1.000 (1) | 0.006 (1) | 99.995 (1) | 0.009 (1) | 0.721 (9) | 17 |
| RF | 2.580 (5) | 1.103 (5) | 0.019 (5) | 0.999 (5) | 0.999 (5) | 0.031 (5) | 99.893 (5) | 0.044 (5) | 0.993 (5) | 45 |
| AdaBoost | 0.922 (3) | 0.369 (3) | 0.016 (3) | 1.000 (1) | 1.000 (1) | 0.011 (3) | 99.987 (3) | 0.015 (3) | 0.991 (6) | 26 |
| XGBoost | 1.204 (4) | 0.722 (4) | 0.018 (4) | 1.000 (1) | 1.000 (1) | 0.015 (4) | 99.977 (4) | 0.021 (4) | 0.998 (1) | 27 |
| LightGBM | 2.730 (6) | 1.524 (6) | 0.035 (6) | 0.999 (5) | 0.999 (5) | 0.039 (7) | 99.839 (7) | 0.055 (7) | 0.996 (3) | 52 |
| CatBoost | 2.797 (7) | 1.776 (7) | 0.044 (7) | 0.999 (5) | 0.999 (5) | 0.035 (6) | 99.869 (6) | 0.049 (6) | 0.997 (2) | 51 |
| ANN (6,7,1) | 7.636 (9) | 5.330 (9) | 0.204 (9) | 0.991 (9) | 0.991 (9) | 0.093 (9) | 99.061 (9) | 0.132 (9) | 0.990 (7) | 79 |
| ANN (7,8,5,1) | 4.753 (8) | 3.007 (8) | 0.087 (8) | 0.996 (8) | 0.996 (8) | 0.058 (8) | 99.636 (8) | 0.082 (8) | 0.994 (4) | 68 |

Testing data

| ML Algorithm | RMSE (Rank) | MAE (Rank) | MAPE/% (Rank) | R² (Rank) | Adj. R² (Rank) | PI (Rank) | VAF (Rank) | SI (Rank) | Total Rank |
|---|---|---|---|---|---|---|---|---|---|
| DT | 9.422 (8) | 3.668 (7) | 0.061 (6) | 0.987 (8) | 0.987 (8) | 0.111 (8) | 98.757 (8) | 0.157 (8) | 61 |
| K-NN | 46.564 (9) | 16.779 (9) | 0.224 (9) | 0.693 (9) | 0.692 (9) | 0.640 (9) | 69.783 (9) | 0.779 (9) | 72 |
| RF | 6.536 (5) | 2.714 (4) | 0.044 (2) | 0.994 (5) | 0.994 (5) | 0.077 (5) | 99.401 (5) | 0.109 (5) | 36 |
| AdaBoost | 7.164 (6) | 2.880 (5) | 0.055 (5) | 0.993 (6) | 0.993 (6) | 0.086 (6) | 99.263 (6) | 0.121 (6) | 46 |
| XGBoost | 4.780 (1) | 1.866 (1) | 0.031 (1) | 0.997 (1) | 0.997 (1) | 0.056 (1) | 99.679 (1) | 0.080 (1) | 8 |
| LightGBM | 5.508 (3) | 2.273 (2) | 0.044 (2) | 0.996 (2) | 0.996 (2) | 0.068 (3) | 99.530 (3) | 0.096 (3) | 20 |
| CatBoost | 4.992 (2) | 2.359 (3) | 0.050 (4) | 0.996 (2) | 0.996 (2) | 0.059 (2) | 99.646 (2) | 0.084 (2) | 19 |
| ANN (6,7,1) | 8.530 (7) | 5.791 (8) | 0.222 (8) | 0.990 (7) | 0.990 (7) | 0.101 (7) | 98.971 (7) | 0.143 (7) | 58 |
| ANN (7,8,5,1) | 5.997 (4) | 3.459 (6) | 0.097 (7) | 0.995 (4) | 0.995 (4) | 0.071 (4) | 99.491 (4) | 0.100 (4) | 37 |
Yang, Y.; Mu, Z.; Ge, X. Machine Learning-Based Prediction of Elastic Buckling Coefficients on Diagonally Stiffened Plate Subjected to Shear, Bending, and Compression. Sustainability 2023, 15, 7815. https://doi.org/10.3390/su15107815