# Response Spectrum Analysis of Multi-Story Shear Buildings Using Machine Learning Techniques


## Abstract


## 1. Introduction

## 2. Problem Formulation

#### 2.1. Response Analysis of MDOF Systems

#### 2.2. Response Spectrum

#### 2.3. Response Spectrum Method for MDOF Systems

## 3. Dataset Description and Exploratory Data Analysis

#### 3.1. Dataset Description

#### 3.2. Exploratory Data Analysis

## 4. Overview of ML Algorithms

#### 4.1. Ridge Regression (RR)

#### 4.2. Random Forest Regressor (RF)

#### 4.3. Gradient Boosting Regressor (GB)

**Step 1.** Create a base tree with a single root node that acts as the initial guess (a constant prediction) for all samples.
**Step 2.** Create a new tree from the residuals (loss errors) of the current model: the new tree in the sequence is fitted to the negative gradient of the loss function with respect to the current predictions.
**Step 3.** Determine the optimal weight of the new tree by minimizing the overall loss function; this weight sets the contribution of the new tree to the final model.
**Step 4.** Scale the new tree by the learning rate, which controls how much it contributes to the prediction.
**Step 5.** Add the scaled tree to the ensemble of previous trees and repeat from Step 2 until a convergence criterion is satisfied (the number of trees reaches the maximum limit, or new trees no longer improve the prediction).
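The steps above can be sketched in a few lines of Python. The following is a minimal illustration using scikit-learn's `DecisionTreeRegressor` with a squared-error loss; the function names are illustrative, not the authors' implementation. For the loss $L(y, F) = \tfrac{1}{2}(y - F)^2$, the negative gradient is simply the residual $y - F$, so Step 2 reduces to fitting each new tree to the residuals.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gb_fit(X, y, n_trees=100, learning_rate=0.1, max_depth=2):
    """Gradient boosting with squared-error loss (illustrative sketch)."""
    f0 = float(np.mean(y))                 # Step 1: constant initial guess
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_trees):
        residual = y - pred                # Step 2: negative gradient of the loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)              # Step 3: fit the new tree to residuals
        pred += learning_rate * tree.predict(X)  # Steps 4-5: scale and combine
        trees.append(tree)
    return f0, trees

def gb_predict(X, f0, trees, learning_rate=0.1):
    """Sum the initial guess and all scaled tree predictions."""
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```

Note that for squared-error loss the per-leaf optimal weight of Step 3 coincides with the leaf mean already fitted by the tree, so no explicit line-search appears in this sketch.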

#### 4.4. CatBoost Regressor (CB)

## 5. ML Pipelines and Performance Results

#### 5.1. Cross Validation and Hyperparameter Tuning

#### 5.2. Model Evaluation Metrics

## 6. ML Interpretability

#### 6.1. Feature Importance

#### 6.2. Summary Plots

## 7. Test Case Scenarios

## 8. Web Application

## 9. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References


**Figure 3.** Box-and-whisker plots for features (orange) and targets (moonstone blue). The red vertical line shows the median of each distribution; the blue dots represent outliers according to the IQR method.

**Figure 10.** Actual vs. predicted plots for the (**a**) train and (**b**) test datasets for the ${\tilde{V}}_{b}$ model.

**Figure 12.** Summary plots showing the impact of all features on the (**a**) ${T}_{1}$, (**b**) ${U}_{top}$, and (**c**) ${\tilde{V}}_{b}$ models.
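Feature-impact summaries like those in Figure 12 are commonly produced with SHAP-style tools. As a self-contained stand-in, permutation importance from scikit-learn yields a comparable per-feature impact ranking; the data below are synthetic and the feature names merely mirror the paper's inputs (an assumption, not the actual dataset):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in: only the first two features drive the target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.05, size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Higher mean importance -> larger impact on the model output.
for name, imp in zip(["stories", "k_tilde", "ground_type"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Permutation importance measures the drop in model score when one feature's values are shuffled, which ranks features by impact but, unlike SHAP, does not show the direction of each feature's effect.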

**Figure 13.** Web application GUI for rapid predictions of the dynamic response of multi-story shear buildings.

| | Stories | $\tilde{k}$ | Ground Type | ${T}_{1}$ | ${U}_{top}$ | ${\tilde{V}}_{b}$ |
|---|---|---|---|---|---|---|
| Unit | [-] | [s$^{-2}$] | [-] | [s] | [m] | [m·s$^{-2}$] |
| count | 1995 | 1995 | 1995 | 1995 | 1995 | 1995 |
| mean | 11.000 | 7000.000 | 2.000 | 0.604 | 0.296 | 0.213 |
| std | 5.479 | 3028.600 | 1.414 | 0.341 | 0.240 | 0.101 |
| skew | 0.000 | 0.000 | 0.000 | 0.782 | 1.265 | 0.538 |
| kurtosis | −1.207 | −1.205 | −1.300 | 0.435 | 2.162 | 0.072 |
| min | 2.000 | 2000.000 | 0.000 | 0.093 | 0.004 | 0.032 |
| 25% | 6.000 | 4500.000 | 1.000 | 0.330 | 0.104 | 0.142 |
| 50% | 11.000 | 7000.000 | 2.000 | 0.567 | 0.254 | 0.201 |
| 75% | 16.000 | 9500.000 | 3.000 | 0.801 | 0.416 | 0.282 |
| max | 20.000 | 12,000.000 | 4.000 | 1.834 | 1.571 | 0.553 |
| Type | [int] | [float] | [int] | [float] | [float] | [float] |
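Descriptive statistics of this kind can be reproduced directly with pandas. The snippet below does so on a synthetic stand-in for the 1995-sample dataset; the column names and value ranges merely follow the table above and are not the actual data:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in dataset (illustrative, not the paper's data).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "stories": rng.integers(2, 21, size=1995),     # uniform over 2..20
    "k_tilde": rng.uniform(2000.0, 12000.0, size=1995),
    "ground_type": rng.integers(0, 5, size=1995),  # encoded ground classes 0..4
})

stats = df.describe().T              # count, mean, std, min, quartiles, max
stats["skew"] = df.skew()            # distribution asymmetry
stats["kurtosis"] = df.kurtosis()    # excess kurtosis (0 for a normal)
print(stats[["count", "mean", "std", "skew", "kurtosis", "min", "max"]])
```

`DataFrame.kurtosis` reports excess kurtosis (a normal distribution gives 0), which is why the uniform-like feature columns in the table show negative values near −1.2.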

| Algorithm | Hyper-Parameter | Search Range | Optimal (${T}_{1}$) | Optimal (${U}_{top}$) | Optimal (${\tilde{V}}_{b}$) |
|---|---|---|---|---|---|
| Ridge | alpha | [0, 0.1, 0.5, 1, 5, 10] | 10 | 10 | 10 |
| | max_iter | [50, 100, 500, 1000] | 50 | 50 | 50 |
| | solver | [‘svd’, ‘cholesky’, ‘lsqr’] | lsqr | svd | svd |
| | tol | [0.0001, 0.001] | 0.001 | 0.001 | 0.001 |
| Random Forest | n_estimators | [10, 20, 50, 100, 500] | 500 | 500 | 500 |
| | max_depth | [2, 5, 10] | 10 | 10 | 10 |
| | criterion | [‘sqr’, ‘abs’, ‘fried’, ‘pois’] | fried | fried | fried |
| | min_samples_split | [1, 2, 5, 10, 20] | 5 | 5 | 5 |
| | min_samples_leaf | [1, 2, 5] | 1 | 1 | 2 |
| | min_impurity_decrease | [0.01, 0.02, 0.05, 0.1, 0.2] | 0.01 | 0.01 | 0.01 |
| Gradient Boosting | n_estimators | [10, 20, 100, 500] | 500 | 500 | 500 |
| | learning_rate | [0.01, 0.1] | 0.1 | 0.1 | 0.1 |
| | criterion | [‘sqr’, ‘fried’] | sqr | sqr | sqr |
| | min_samples_leaf | [1, 2, 5, 10] | 1 | 1 | 10 |
| | min_samples_split | [5, 10, 20, 100] | 10 | 5 | 5 |
| | max_depth | [1, 2, 5, 10] | 10 | 5 | 10 |
| CatBoost | n_estimators | [10, 20, 100, 500] | 500 | 500 | 500 |
| | learning_rate | [0.01, 0.1] | 0.1 | 0.1 | 0.1 |
| | l2_leaf_reg | [1, 2, 5, 10] | 1 | 1 | 2 |
| | bagging_temperature | [0.0, 0.1, 0.2, 0.5, 1.0] | 0 | 0 | 0 |
| | depth | [1, 2, 5, 10] | 5 | 5 | 10 |
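Search ranges like the ones above can be explored exhaustively with scikit-learn's `GridSearchCV`. A sketch for the Ridge rows follows; note that the Ridge grid of 6 × 4 × 3 × 2 = 144 candidates matches the candidate count reported for Ridge in the next table. The synthetic data and fold count here are assumptions for self-containment:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic regression data standing in for the building dataset.
X, y = make_regression(n_samples=300, n_features=3, noise=0.1, random_state=0)

# Search ranges copied from the Ridge rows of the table above.
param_grid = {
    "alpha": [0, 0.1, 0.5, 1, 5, 10],
    "max_iter": [50, 100, 500, 1000],
    "solver": ["svd", "cholesky", "lsqr"],
    "tol": [0.0001, 0.001],
}

# Exhaustive search over all 144 combinations with 5-fold cross validation.
search = GridSearchCV(Ridge(), param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_)
```

`search.cv_results_` then exposes per-candidate mean and standard deviation of fit time and test score, which is exactly the information summarized in the next table.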

| Algorithm | Candidates | Model | Fit Time Mean [s] | Fit Time Std Dev [s] | Test Score Mean | Test Score Std Dev |
|---|---|---|---|---|---|---|
| Ridge | 144 | ${T}_{1}$ | 0.003 | 0.000 | 0.915 | 0.006 |
| | | ${U}_{top}$ | 0.003 | 0.000 | 0.800 | 0.028 |
| | | ${\tilde{V}}_{b}$ | 0.003 | 0.000 | 0.684 | 0.034 |
| Random Forest | 2880 | ${T}_{1}$ | 1.184 | 0.073 | 0.999 | 0.001 |
| | | ${U}_{top}$ | 1.137 | 0.013 | 0.988 | 0.001 |
| | | ${\tilde{V}}_{b}$ | 1.072 | 0.001 | 0.990 | 0.002 |
| Gradient Boosting | 1024 | ${T}_{1}$ | 0.919 | 0.011 | 1.000 | 0.000 |
| | | ${U}_{top}$ | 0.583 | 0.001 | 0.999 | 0.001 |
| | | ${\tilde{V}}_{b}$ | 0.839 | 0.006 | 0.999 | 0.000 |
| CatBoost | 640 | ${T}_{1}$ | 0.627 | 0.087 | 1.000 | 0.000 |
| | | ${U}_{top}$ | 0.218 | 0.032 | 0.999 | 0.000 |
| | | ${\tilde{V}}_{b}$ | 0.508 | 0.063 | 0.999 | 0.000 |

**Table 4.** Performance metrics of each ML algorithm and model. The finally selected algorithm (CatBoost) is highlighted with brown color.

| ML Algorithm | RMSE Train | RMSE Test | MAE Train | MAE Test | MAPE Train | MAPE Test | R$^{2}$ Train | R$^{2}$ Test |
|---|---|---|---|---|---|---|---|---|
| **${T}_{1}$** | | | | | | | | |
| Ridge | 0.0098 | 0.0098 | 0.0743 | 0.0722 | 0.1916 | 0.1932 | 0.9163 | 0.9159 |
| Random Forest | 0.0000 | 0.0000 | 0.0006 | 0.0013 | 0.0010 | 0.0023 | 1.0000 | 1.0000 |
| Gradient Boosting | 0.0000 | 0.0000 | 0.0040 | 0.0041 | 0.0086 | 0.0097 | 0.9997 | 0.9997 |
| CatBoost | 0.0000 | 0.0000 | 0.0006 | 0.0008 | 0.0013 | 0.0020 | 1.0000 | 1.0000 |
| **${U}_{top}$** | | | | | | | | |
| Ridge | 0.0123 | 0.0090 | 0.0754 | 0.0699 | 1.0630 | 1.0817 | 0.7924 | 0.8244 |
| Random Forest | 0.0000 | 0.0002 | 0.0035 | 0.0084 | 0.0150 | 0.0400 | 0.9994 | 0.9962 |
| Gradient Boosting | 0.0005 | 0.0006 | 0.0146 | 0.0164 | 0.1156 | 0.1488 | 0.9920 | 0.9889 |
| CatBoost | 0.0000 | 0.0000 | 0.0026 | 0.0034 | 0.0182 | 0.0238 | 0.9998 | 0.9995 |
| **${\tilde{V}}_{b}$** | | | | | | | | |
| Ridge | 0.0031 | 0.0032 | 0.0440 | 0.0450 | 0.2833 | 0.3061 | 0.7010 | 0.6727 |
| Random Forest | 0.0000 | 0.0000 | 0.0004 | 0.0010 | 0.0024 | 0.0060 | 0.9999 | 0.9994 |
| Gradient Boosting | 0.0001 | 0.0001 | 0.0073 | 0.0080 | 0.0372 | 0.0441 | 0.9894 | 0.9866 |
| CatBoost | 0.0000 | 0.0000 | 0.0015 | 0.0020 | 0.0079 | 0.0110 | 0.9995 | 0.9990 |
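The four metrics in Table 4 can be computed from a set of predictions as follows; `regression_metrics` is an illustrative helper, not the authors' code. As a sanity check when reading such tables, note that RMSE is never smaller than MAE on the same data, so columns where it appears smaller are likely reporting MSE rather than RMSE.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (RMSE, MAE, MAPE, R^2) for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = float(np.sqrt(np.mean(err ** 2)))      # root mean squared error
    mae = float(np.mean(np.abs(err)))             # mean absolute error
    mape = float(np.mean(np.abs(err / y_true)))   # assumes nonzero targets
    ss_res = float(np.sum(err ** 2))              # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                    # coefficient of determination
    return rmse, mae, mape, r2
```

Computing the metrics separately on the train and test splits, as in Table 4, exposes overfitting: a large train/test gap in any metric indicates poor generalization.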

| Scenario | Stories [-] | Mass (m) [kg] | Stiffness (k) [N/m] | Ground Type [-] | ${a}_{g}$ [m/s$^{2}$] |
|---|---|---|---|---|---|
| 1 | 3 | $112 \times 10^{3}$ | $235 \times 10^{6}$ | B | 0.32 |
| 2 | 8 | $185 \times 10^{3}$ | $950 \times 10^{6}$ | A | 0.24 |
| 3 | 15 | $265 \times 10^{3}$ | $1900 \times 10^{6}$ | C | 0.16 |

**Table 6.** Target values (actual and predicted) for each test case scenario. The absolute error is also provided.

| Scenario | | ${T}_{1}$ [s] | ${U}_{top}$ [m] | ${V}_{b}$ [N] |
|---|---|---|---|---|
| 1 | Actual | 0.308 | 0.0275 | $2.899 \times 10^{6}$ |
| | Predicted | 0.314 | 0.0285 | $2.816 \times 10^{6}$ |
| | Absolute Error | 1.95% | 3.49% | 2.85% |
| 2 | Actual | 0.475 | 0.0358 | $6.333 \times 10^{6}$ |
| | Predicted | 0.482 | 0.0365 | $6.336 \times 10^{6}$ |
| | Absolute Error | 1.47% | 2.01% | 0.05% |
| 3 | Actual | 0.733 | 0.0638 | $1.241 \times 10^{6}$ |
| | Predicted | 0.740 | 0.0650 | $1.241 \times 10^{6}$ |
| | Absolute Error | 0.95% | 1.75% | 0.53% |
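The tabulated absolute errors appear to be relative errors, $|actual - predicted| / actual$, expressed as percentages (presumably computed from unrounded quantities, so recomputing from the displayed values may differ slightly). For instance, for Scenario 1's fundamental period:

```python
def relative_error_pct(actual: float, predicted: float) -> float:
    """Absolute relative error as a percentage of the actual value."""
    return abs(actual - predicted) / abs(actual) * 100.0

# Scenario 1, T1: |0.308 - 0.314| / 0.308, about 1.95%
print(round(relative_error_pct(0.308, 0.314), 2))
```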

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Georgioudakis, M.; Plevris, V.
Response Spectrum Analysis of Multi-Story Shear Buildings Using Machine Learning Techniques. *Computation* **2023**, *11*, 126.
https://doi.org/10.3390/computation11070126

**AMA Style**

Georgioudakis M, Plevris V.
Response Spectrum Analysis of Multi-Story Shear Buildings Using Machine Learning Techniques. *Computation*. 2023; 11(7):126.
https://doi.org/10.3390/computation11070126

**Chicago/Turabian Style**

Georgioudakis, Manolis, and Vagelis Plevris.
2023. "Response Spectrum Analysis of Multi-Story Shear Buildings Using Machine Learning Techniques" *Computation* 11, no. 7: 126.
https://doi.org/10.3390/computation11070126