Next Article in Journal
Biodiesel Purification via Ultrasonic-Assisted Solvent-Aided Crystallization
Next Article in Special Issue
Prediction on Permeability of Engineered Cementitious Composites
Previous Article in Journal
Bis (Diamines) Cu and Zn Complexes of Flurbiprofen as Potential Cholinesterase Inhibitors: In Vitro Studies and Docking Simulations
Previous Article in Special Issue
Experimental Study and Design of Experiment Using Statistical Analysis for the Development of Geopolymer Matrix for Oil-Well Cementing for Enhancing the Integrity
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:

Prediction of Neutralization Depth of R.C. Bridges Using Machine Learning Methods

School of Civil Engineering, Southeast University, Nanjing 211189, China
Jiangsu Dongnan Special Engineering & Technology Co. LTD, Nanjing 211189, China
School of Transportation, Southeast University, Nanjing 211189, China
Author to whom correspondence should be addressed.
Crystals 2021, 11(2), 210;
Submission received: 28 December 2020 / Revised: 8 February 2021 / Accepted: 17 February 2021 / Published: 20 February 2021


Machine learning techniques have become a popular solution to prediction problems. These approaches show excellent performance without being explicitly programmed. In this paper, 448 sets of data were collected to predict the neutralization depth of concrete bridges in China. Random forest was used for parameter selection. Besides this, four machine learning methods, such as support vector machine (SVM), k-nearest neighbor (KNN) and XGBoost, were adopted to develop models. The results show that machine learning models obtain a high accuracy (>80%) and an acceptable macro recall rate (>80%) even with only four parameters. For SVM models, the radial basis function has a better performance than other kernel functions. The radial basis kernel SVM method has the highest verification accuracy (91%) and the highest macro recall rate (86%). Besides this, the preference of different methods is revealed in this study.

1. Introduction

The neutralization of concrete is a major factor that influences the service life of R.C. bridges. The alkaline environment around steel bars will be impaired by carbon dioxide and other acid materials, such as acid rain [1,2]. Subsequently, steel bars are likely to be oxidized, especially with the effect of chloride ions and moisture in the concrete. Once the steel bars are corroded, the bearing capacity of bridges will be impaired [3,4,5].
Currently, the total number of bridges in China is nearly one million, and many bridges have been in service for more than ten years. It is necessary to provide a solution for the prediction of the neutralization depth of existing bridges. However, in real engineering, the influence factors are coupled, which makes it difficult to estimate the neutralization status of concrete. Besides this, for existing bridges, another difficulty is the loss of original bridge construction information. However, some information (e.g., water cement ratio, maximum nominal aggregate size, and cement content) is often considered necessary for the prediction of neutralization depth. Besides this, predicting the neutralization of the concrete in inland river bridges is interesting as regards the special service ambiance of these concrete components. The influences of the river, wind, traffic load and some unknown factors are significant. However, accurately quantifying the effects of these factors by formulas is difficult.
Machine learning (ML) is proving to be an efficient approach to solving the above problems. ML refers to the capability of computers to obtain knowledge from datasets without being explicitly programmed [6]. It includes many powerful methods, such as support vector machine (SVM), decision tree, k-means, AdaBoost and k-nearest neighbor (KNN). One ML method mainly consists of two parts: the decision function and objective function. For a new data point, the decision function is used to predict its category. The decision function contains some pending parameters that must be determined by optimizing the objective function. The objective function at least contains a loss function and a regularization item. The loss function depicts the gap between true values and prediction values; the regularization item is used to avoid model overfitting. ML methods have been widely used in civil engineering. The first application of ML was to promote structural safety [7]. Nowadays, ML is used in structural health monitoring [8,9,10,11], reliability analysis [12,13], and earthquake engineering [14,15,16].
In addition, machine learning techniques show great potential in the concrete industry. The complexity of concrete makes it difficult to developing prediction models. However, models developed by ML methods always achieve a high accuracy [17,18,19,20]. Topçu et al. [21] proposed an artificial neural networks (ANN) model to evaluate the effect of fly ash on the compressive strength of concrete. The results show that the root-mean-squared error (RMSE) of the ANN model is less than 3.0. Bilim et al. [22] constructed an ANN model to predict the compressive strength of ground granulated blast furnace slag (GGBFS) concrete. Sarıdemir et al. [23] used ANN and a fuzzy logic method to predict the long-term effects of GGBFS on the compressive strength of concrete. Their results show that the fuzzy logic model has a low RMSE (3.379); however, the ANN model’s RMSE (2.511) is lower. Golafshani et al. [24] used grey wolf optimizer to improve the performance of the ANN model and an adaptive neuro-fuzzy inference system model in predicting the compressive strength of concrete. Kandiri et al. [25] developed some ANN models with a slap swarm algorithm to estimate the compressive strength of concrete. The results show that this algorithm can reduce the RMSE of ANN models. Machine learning methods can be used for classification problems, regress problems, feature selection and data mining. Compared with conventional models, machine learning models are good at gaining information from data.
Machine learning can select a few effective parameters for developing models. Han et al. [26] measured the importance of parameters based on the random forest method, and then used this approach to establish prediction models. Their results show that the performance of models can be obviously improved by parameter selection. Random forest is an effective feature selection method [27]. It is widely used in bioscience [28,29], computer science [30], environmental sciences [31], and many other fields. Zhang et al. [32] used random forest to select important features from a building energy consumption dataset. Yuan et al. [33] employed random forest to rank the features of house coal consumption.
In this paper, random forest was adopted for parameter selection. SVM, KNN, AdaBoost and XGBoost were used to develop prediction models. These ML methods have been successfully used in many fields [34,35,36,37]. A comparison among these ML models was also conducted to reveal the preference for different methods in the prediction of neutralization depth.

2. Dataset Description and Analysis

The dataset, which focuses on the neutralization depth of R.C. bridges in China, includes 448 samples. Parameters such as service time, concrete strength, bridge load class and environmental conditions were considered in this study. Figure 1 shows the distribution of these bridges. The full information of the dataset is shown in the Appendix A. The dataset was collected from references [38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68], and the meteorological data of the city were collected from the environmental meteorological data center of China. These references [38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68] are included in two professional Chinese document databases: CNKI and WANFANG DATA. All the samples included in the Appendix A are the detection data of existing bridges.
The neutralization depth of concrete was tested with phenolphthalein, and the compressive strength was measured with a resiliometer and calculated according to the Technical Specification for Inspecting of Concrete Compressive Strength by Rebound Method (JGJ/T 23-2011). The vehicle load of bridges was divided into two levels, according to the General Specifications for Design of Highway Bridges and Culverts (JTG D60-2015). There are no missing data in the dataset; any samples containing missing values were abandoned during collection. Table 1 gives a detailed description of the dataset. The climatic division used in Table 1 is derived from the climatic division map of the geographic atlas of China (Peking University) [69].
The reliability of the ML models relies on the quality of the dataset. Generally, ML models have an excellent performance within the scope of the training dataset. However, predicting the neutralization depth of a new sample outside the range of the training dataset is difficult for ML models. Therefore, a dataset with a wide scope is necessary for the reliability of ML models. Table 1 shows the range of this dataset. The service time, compressive strength, and load level of samples in the dataset can cover the main status of existing bridges. The temperature, humidity, acid rain, and climate status can cover the majority of service environments of existing bridges in China. The distribution of samples shown in Figure 1 also shows that the dataset has a large scope. Besides this, the histograms in Table 1 show that the values of samples have good continuity. Therefore, it is believed that this dataset is effective for developing ML models.
The imbalanced distribution of temperature and RH is also revealed in the histograms. The RH of most of the samples is around 72.5–82.5%, and the temperature of most of the samples is around 15 °C. The parts that have few points will receive less attention from the ML models because of the imbalance in the dataset. However, this negative effect caused by imbalances can be alleviated through increasing the penalty applied to the misclassification of the samples in these parts.

3. Parameter Evaluation and Selection

This study aims to develop diagnosis models for existing R.C. bridges, so parameter selection is important as it will alleviate the difficulty of obtaining parameters in real engineering. Random forest is widely used in feature selection [27]. It is used for supervised learning, and it does not require the dataset to obey normal distribution [33]. Obviously, the dataset used in this study does not obey normal distribution.

3.1. Random Forest for Parameter Evaluation

Random forest is a combination of decision trees. First, n samples are selected from the dataset as a training set through put-back sampling, and a decision tree is generated from these n samples. Then d features are randomly selected and split at each node of the decision tree. The above process is repeated k times ( k is the number of decision trees in a random forest), and finally a random forest model is generated.
In the process of generating decision trees, the Gini coefficient is usually used to split nodes. Random forest evaluates the importance of parameters by calculating the average change in the Gini coefficient of feature f i   ( i = 1 ,   2 , ,   d ) during the splitting process of the nodes. Assuming there are, in total, d features in kth decision tree, the probability of a sample belonging to class m is p m , and there are M classes; then, the Gini coefficient is defined as:
G i n i ( p ) = m = 1 M p m ( 1 p m )
For dataset D , the Gini coefficient is:
G i n i ( D ) = m = 1 M | C m | | D | ( 1 | C m | | D | )
where C m is the subset of samples belonging to class m in the dataset D . On node n , feature f i divides dataset D into two parts, D 1 and D 2 , so the changes in the Gini coefficient are:
V I M i , n g i n i = G i n i ( D ) | D 1 | | D | G i n i ( D 1 ) | D 2 | | D | G i n i ( D 2 )
Therefore, the importance of parameter f i in the k th decision tree is:
V I M i , k g i n i = n = 1 N i V I M i , n g i n i
where N i is the number of nodes divided by feature f i on dataset D . Therefore, the importance of feature f i can be calculated by Equation (5):
V I M i g i n i = k = 1 K V I M i , k g i n i j = 1 d k = 1 K V I M j , k g i n i
where d is the number of characteristics, and K is the number of decision trees in the random forest.

3.2. Results and Discussions

Figure 2 shows the results of parameter evaluation. It is noted that temperature, concrete strength f , RH and age are more important than climate, location of components, acid rain, and load level. The cumulative importance of the top four parameters reaches 0.73. Climate, which represents the rough ambient conditions of neutralization, is often considered an important feature. However, due to its high correlation with other environment parameters, the results show that it is not so important. The random forest approach tends to place some of the highly correlated features on the top, but place the others at the end.
Further, Climate and Loc are noun parameters. Generally, a noun parameter cannot be used directly to establish models. A common approach for preprocessing noun parameters is unique heat coding, and this will create a new virtual parameter for each unique value of the noun parameter. Therefore, the old parameter Climate will generate six new parameters, and Loc will generate four new parameters. Adding so many new parameters is unnecessary, because of their low importance. Therefore, Climate, Loc, pH and p were omitted in the next study.
In addition, it is important to discuss the limitations of the ML models used in this study. For ML models, their validity scope depends on the range of the dataset. These models are actually empirical models. In this study, the ML models were based on age, RH, f and t , so the validity scope of the models is a four-dimensional space determined by the dataset. The search algorithm can be used to determine the valid scope of the models. For example, when a new sample is obtained and one wants to know if the sample is in the valid scope, one can search the dataset and find the new sample’s neighboring points. Then, the neighboring points can be used for judging if the new sample is in the valid range.

4. Machine Learning Models

The current prediction models require a large number of input parameters, and output a mean value of neutral depth. However, the dispersion of concrete’s neutral depth is great. Figure 3 illustrates the histogram of the neutralization depth data of the Nanjing Yangtze river bridge’s concrete components. All components in Figure 3 have the same service time and concrete mix proportion. It is noted that the discreteness of those components is obvious. Thus, this paper decided to predict the level of neutral depth. Table 2 shows the classification of the neutral depth of concrete of bridges. 6 mm was chose as a boundary between slight level and medium level in this study. This is because the relationship between the neutralization depth and concrete’s compressive strength will become uncertain in the appraisal of old buildings when the neutralization depth of concrete is greater than 6 mm. According to Technical Specification for Inspecting of Concrete Compressive Strength by Rebound Method (JGJ/T 23-2011), when the neutralization depth is greater than 6 mm, the test results cannot reflect the actual strength of the concrete. In addition, 25mm was selected as the boundary between the medium level and the serious level, since the protective layer thickness of components of bridges in China is often between 20 and 30mm. When the neutral depth reaches 25mm, the neutral area is likely to reach the surface of the steel bars.

4.1. Support Vector Machine

SVM is a binary classification model; its purpose is to find a hyperplane in order to classify samples into two classes [70]. SVM finds the hyperplane by maximizing the margin between the two classes. The margin refers to the shortest distance between the closest data points to the hyperplane. Therefore, only a few points, which are called support vectors, can influence the hyperplane. Because the majority of the samples are insignificant, SVM offers one of the most robust and accurate algorithms among all well-known modeling methods when the dataset is not huge [37]. Considering the size of the dataset used in this study, SVM is obviously attractive. Figure 4 shows an illustration of SVM.
Developing an SVM model can help in solving the following problem [70]:
min α   1 2 i = 1 N j = 1 N α i α j y i y j K ( x i · x j ) i = 1 N α i s . t . C α i 0 , i = 1 , 2 , 3 , , N i = 1 N α i y i = 0 ,
K(x·z) is the kernel function. The data points in a low-dimensional space can be transformed into the data points in a high-dimensional space through the kernel function [70]. Therefore, a nonlinear problem can turn into a linear problem. Figure 5 depicts the effects of the kernel function. Common kernel functions include polynomial kernel, radial basis kernel and hyperbolic tangent kernel. ( x i ,   y i ) is the sample point, and N is the number of samples in the dataset. C indicates the penalty for misclassification. α * = α 1 * ,   α 2 * , ,   α N * T can be obtained by solving Equation (6). Then, the decision function can finally be obtained:
f ( x ) = sign ( i = 1 N α i * y i K ( x , x i ) + b * ) . s . t .   b * = y j i = 1 N α i * y i K ( x i , x j )

4.2. K-Nearest Neighbor

KNN is one of the most concise classification algorithms, and it is also recognized as one of the top ten data mining algorithms [37]. For a new sample, KNN will find k samples closest to this sample in the dataset. The classification of this new sample depends on the voting results of those k samples. Figure 6 shows an illustration of KNN. In Figure 6, we suppose k = 4 , and the four closest data points to the new sample are marked with numbers. Points 1, 2, and 3 belong to class C, and only point 4 belongs to class A. Therefore, this new sample should be classified into class C. Compared with other ML methods, KNN is simpler, but is also effective [71]. KNN is often used for comparison with other ML methods in some studies [71,72], as well as this study.
The decision function of KNN can be written as follows:
f ( x ) = arg max c j x i N k ( x ) I ( y i = c i )
c j represents the class j   ( j   =   1 , 2 , , k ) . N k ( x ) is the range that covers these k samples. I ( y i = c i ) is an indicator function; if y i = c i , then I ( y i = c i ) = 1 , otherwise, I ( y i = c i ) = 0 ( i = 1 , 2 , , N ) . The purpose of KNN is to find the optimal number k of nearest neighbors.

4.3. AdaBoost

AdaBoost is one of the most representative methods in machine learning [37]. AdaBoost is a famous ensemble learning algorithm. This method will develop a lot of weak classifiers, and finally combines these weak classifiers into a strong classifier. Therefore, the decision function can be written as follows [37]:
f ( x ) = m = 1 M α m G m ( x )
G m ( x ) is the decision function of the m th weak classifier ( m = 1 , 2 , , M ) . α m is the weight of G m ( x ) , and this coefficient is calculated by the accuracy of G m ( x ) . In this study, decision tree models were used as the weak classifiers. AdaBoost first generates a weak decision tree model, gets its decision function G 1 ( x ) , and updates the weight of the samples according to the performance of G 1 ( x ) . If one data point is misclassified by G 1 ( x ) , it will be assigned a greater weight in the next round. The weight of samples is updated in the ( m 1 ) th round, and G m ( x ) will be fitted based on these samples and their weights in the m th round. α m , the weight of G m ( x ) , will be calculated via the accuracy of G m ( x ) .

4.4. XGBoost

XGBoost (extreme gradient boosting) was proposed in 2016, and soon became a popular method for its excellent performance in Kaggle competitions [73]. XGBoost is one of the most popular emerging ML approaches. However, its application in civil engineering is not as common as the application of other conventional ML methods, such as SVM, KNN, and ANN. Most of the applications of XGBoost in civil engineering have been undertaken in the last two years. Hu et al. [74] used XGBoost to predict the wind pressure coefficients of buildings. Pei et al. [75] developed a pavement aggregate shape classifier based on XGBoost. In this study, XGBoost is selected on behalf of the other new ML methods for the comparison with other representative ML methods.
The XGBoost method first develops a weak classifier. Then, the next weak classifier is designed to reduce the gap between the true value and the prediction value of the first weak classifier. For m th training, the decision function can be written as follows:
f m ( x ) = f m 1 ( x ) + α m F m ( x )
α m represents the weight of weak classifier F m ( x ) . When the mean-squared error is chosen as the loss function of the models, the objective function requiring optimization in generating a new weak classifier can be written as follows:
O b j m = i = 1 N [ y i ( f m 1 ( x i ) + α m F m ( x i ) ) ] 2 + j = 1 m Ω ( f j ( x ) )
Ω ( f j ( x ) ) is a regularization item, and N is the number of samples. In this study, the tree model was selected as the weak classifier. The tree model is the commonest weak classifier in the application of XGBoost.

4.5. Multi-Class Problem

Some machine learning methods (e.g., SVM) are designed for binary classification problems, but a multi-class problem was studied in this study. Therefore, a one-to-one strategy (OVO) is considered. OVO is a common approach for multi-class problems [76,77,78]. OVO methods generate a hyperplane between any two categories, and will generate N ( N 1 ) / 2 hyperplanes for an N classification problem. For a new sample, all models are utilized, and the final results depend on the vote among ML models.

4.6. Results and Discussions

Z-score normalization is used for the normalization of the dataset. Normalization can alleviate the influence of the parameters’ different scales. Besides this, in order to improve the reliability of ML models, it is necessary to divide all samples into two parts: T1 and T2. T1, the training dataset, is used for training ML models, and T2, the testing dataset, is used for testing the performance of ML models. This study made 70% of the original dataset into a training dataset.
Training accuracy, verification accuracy and macro recall rate were used as the indicators for the optimization of the parameters of the models. The mesh search tuning approach was used to find the optimal values of the parameters of the models. Under-fitting and over-fitting can be avoided by comparing the model’s training accuracy and verification accuracy. Besides this, in order to improve the reliability of results, the dataset was randomly divided 10 times, and each child dataset has the same distribution. The final results were based on the performances of these 10 models. Table 3 shows the results of mesh search tuning.
Table 3 shows that the training accuracy of ML models is very close to their verification accuracy, which illustrates that overfitting is avoided. Both accuracy and macro recall rate were adopted for estimating ML models. In fact, macro recall rate is more important than accuracy when the imbalance of datasets is considered. Macro recall rate is an index for depicting the ratio of samples correctly classified by the classifier to samples that should be correctly classified. Obviously, the verification accuracy (91%) and the macro recall rate (86%) of the radial basis kernel SVM model are higher than those of other models. Besides this, the gap between the verification accuracy and the training accuracy of the radial basis kernel SVM model is 2%, so there is no obvious evidence of overfitting. Besides this, the performance of radial basis kernel and polynomial kernel are better than that of hyperbolic tangent kernel. Radial basis kernel is the best kernel function in this study. Compared with other methods, KNN also seemed to be attractive. Besides this, the maximum gap between the models in terms of verification accuracy is 22%. However, for macro recall rate, the maximum gap can reach 40%. This may be due to the influence of the uneven distribution of the dataset. The macro recall rate is sensitive to imbalance data. As an evaluation index, the macro recall rate is more representative.
Table 4 shows the obfuscation matrixes of models. All models were established by scikit-learn. For Li and Lj (i, j = 1, 2, 3) in Table 4, the value in row Li and column Lj represents the number of samples that are actually class Li but are predicted to be class Lj by the models. The green area in Table 4 shows the number of test samples that are rightly classified, and the yellow area shows the number of test samples that are wrongly classified. Based on Table 4, the accuracy of the models in terms of the neutralization level of concrete can be obtained (Figure 7).
Even though the radial basis kernel SVM model has the highest verification accuracy and the highest macro recall rate, Table 4 and Figure 7 show that the KNN model is better at classifying the samples with a slight level than other methods (accuracy > 97%). However, compared with other models, the KNN model only achieves a moderate performance in the prediction of medium-level samples (accuracy = 81%). The AdaBoost model is the best classifier in predicting the neutralization depth of medium-level samples (accuracy > 93%).
Besides this, Figure 8 and Figure 9 show that most ML models reach a high accuracy in predicting the neutralization depth of concrete components with a longer service life (20–39 years) and a higher compressive strength (40–59 Mpa). Figure 10 and Figure 11 show that most ML models achieve a high accuracy in predicting the neutralization depth of concrete component in a lower temperature (13–16 °C) and a lower humidity (71–75%) environment. It is believed that when the neutralization depth of concrete reaches the boundary of two levels, the accuracy will decline. Therefore, a rough warning range for the neutralization depth of concrete in terms of parameters can be obtained. For instance, for service time, the range is 10–19. The warning range implies that the neutralization depth of concrete is likely to reach the next level, and more attention should be paid to these bridges.

5. Conclusions

In this paper, four-parameter ML models for predicting the neutralization depth levels of the concrete components of existing bridges were established. Four representative ML methods were used in this study. The following conclusions can be drawn:
  • This study used SVM, KNN, AdaBoost and XGBoost to predict the neutralization depth level of the concrete of existing bridges, and the results show that the radial basis kernel SVM model has the highest validation accuracy (91%) and the highest macro recall rate (86%), with only four parameters. The radial basis kernel function is the best kernel functions in this study. Compared with other models, the radial basis kernel SVM model and KNN model achieve a better performance;
  • The results reveal the preference of ML methods. KNN is good at classifying slight-level samples (accuracy > 97%), and AdaBoost is the best method for the prediction of medium-level samples (accuracy > 93%). Machine learning shows great potential in predicting the neutralization depth of concrete with very few parameters, and evaluating the durability level of existing bridges;
  • Random forest was used for parameter selection. The results show that temperature, concrete strength, RH and service time are more important than climate, acid rain, location of components, and load level. The cumulative importance of these top four parameters reaches 73%. The performance of the models shows that random forest is an effective approach for parameter selection.

Author Contributions

Conceptualization, S.C. and K.D.; methodology, K.D.; data curation, K.D.; writing, K.D.; supervision, S.C., J.L. and C.X.; project administration, S.C., J.L. and C.X. All authors have read and agreed to the published version of the manuscript.


This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository. The data presented in this study are openly available in the Appendix A.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Detailed information of 448 sets of data.
Table A1. Detailed information of 448 sets of data.
AgeftRHpHpClimateLoc 1dReferenceAgeftRHpHpClimateLoc 1dReference
4925167552north subtropicsAr19.72self-test74521.88361edge tropicsBpier2.8[56]
4935167552north subtropicsAr11.4874521.88361edge tropicsBpier1.5
4925167552north subtropicsAr17.02233520.57742south subtropicsAr7.483[57]
4920167552north subtropicsAr20.23234520.57742south subtropicsAr5.937
4915167552north subtropicsAr16.47234020.57742south subtropicsAr11.453
4920167552north subtropicsAr20.23234520.57742south subtropicsAr4.917
4925167552north subtropicsAr11.11235020.57742south subtropicsAr4.35
4920167552north subtropicsAr12.15233520.57742south subtropicsAr4.563
4925167552north subtropicsAr16.38236020.57742south subtropicsAr4
4915167552north subtropicsAr23.45236020.57742south subtropicsAr3.577
4920167552north subtropicsAr20.62235520.57742south subtropicsAr3.673
4915167552north subtropicsAr21.57235520.57742south subtropicsAr4.12
4930167552north subtropicsBpla8.4235020.57742south subtropicsAr3.63
4930167552north subtropicsBpla8.4235520.57742south subtropicsAr3.56
4930167552north subtropicsBpla5.43235020.57742south subtropicsAr4.343
4930167552north subtropicsBpla5.43235020.57742south subtropicsAr3.853
4915167552north subtropicsBpier18.55236020.57742south subtropicsAr4.497
4915167552north subtropicsBpier18.55235020.57742south subtropicsAr5.067
1955167551north subtropicsBm1236020.57742south subtropicsAr3.3
1945167551north subtropicsBm1.5235520.57742south subtropicsAr3.267
1950167551north subtropicsBm2235520.57742south subtropicsAr4.453
1955167551north subtropicsBm1.5234520.57742south subtropicsAr4.097
1950167551north subtropicsBm2235520.57742south subtropicsAr4.47
1950167551north subtropicsBm1.5235020.57742south subtropicsAr4.31
1935167551north subtropicsBm1234520.57742south subtropicsAr5.18
1950167551north subtropicsBm1234520.57742south subtropicsAr4.7
1945167551north subtropicsBm1232020.57742south subtropicsAr14.773
1945167551north subtropicsBm1234520.57742south subtropicsAr3.85
1940167551north subtropicsBm1234020.57742south subtropicsAr4.513
1945167551north subtropicsBm0.5234520.57742south subtropicsAr4.467
1940167551north subtropicsBm0.5235020.57742south subtropicsAr9.923
1945167551north subtropicsBpier1235020.57742south subtropicsAr6.83
1945167551north subtropicsBpier1.5233020.57742south subtropicsBpier16.51
1945167551north subtropicsBpier1.5233020.57742south subtropicsBpier6.037
1950167551north subtropicsBpier1232520.57742south subtropicsBpier5.857
1940167551north subtropicsBpier2232020.57742south subtropicsBpier6.753
1950167551north subtropicsBpier1232520.57742south subtropicsBpier16.21
1945167551north subtropicsBpier0.5233520.57742south subtropicsBpier11.597
204016.28241mid-subtropicsAr4[38]3255.86562mid temperate zoneBm6.2[58]
204516.28241mid-subtropicsBm3.53255.86562mid temperate zoneBm6
204516.28241mid-subtropicsBm3.53255.86562mid temperate zoneBm6
204016.28241mid-subtropicsBm3.53255.86562mid temperate zoneBm6.5
204016.28241mid-subtropicsBm43255.86562mid temperate zoneBm5.5
204016.28241mid-subtropicsBpier3.53255.86562mid temperate zoneBm5.5
354017.48041mid-subtropicsAr3[39]3255.86562mid temperate zoneBm5.8
352517.48041mid-subtropicsAr2.53255.86562mid temperate zoneBm6.2
354017.48041mid-subtropicsBpla3202514.47261warm temperateBm9.7[59]
354517.48041mid-subtropicsBm1.5203514.47261warm temperateBpier11.2
324516.27842north subtropicsAr8.53[40]123022.77652south subtropicsBm1.68[60]
324516.27842north subtropicsAr8.63123522.77652south subtropicsBm1.46
324516.27842north subtropicsAr8.4123022.77652south subtropicsBm1.63
324016.27842north subtropicsAr8.73123022.77652south subtropicsBm1.53
324016.27842north subtropicsAr8.9326254.26261mid temperate zoneBm7.6 (accessed on 20 February 2021)
324016.27842north subtropicsAr9.0726254.26261mid temperate zoneBm9.1
323516.27842north subtropicsAr8.726304.26261mid temperate zoneBm9
323516.27842north subtropicsAr926254.26261mid temperate zoneBm9.5
323516.27842north subtropicsAr8.626254.26261mid temperate zoneBm10.2
323516.27842north subtropicsAr8.8326204.26261mid temperate zoneBpla13.5
323516.27842north subtropicsAr9.226254.26261mid temperate zoneBpier9.72
324016.27842north subtropicsAr9.226204.26261mid temperate zoneBpier8.8
324016.27842north subtropicsAr8.63125017.27741north subtropicsBm1.9https://wenku.baiducom/view/73a61a52f5335a8102d220b8.html?sxts=1575779109843 (accessed on 20 February 2021)
324016.27842north subtropicsAr8.83125017.27741north subtropicsBm2.5
323516.27842north subtropicsAr9.37125017.27741north subtropicsBm3
324516.27842north subtropicsAr8.47125517.27741north subtropicsBm2.7
324016.27842north subtropicsAr8.9125017.27741north subtropicsBm1.5
324016.27842north subtropicsAr7.97125017.27741north subtropicsBm3.1
324516.27842north subtropicsAr8.83125017.27741north subtropicsBm3.5
324016.27842north subtropicsAr8.2125017.27741north subtropicsBm3
324016.27842north subtropicsAr7.93125517.27741north subtropicsBm2.5
324016.27842north subtropicsAr9.23125517.27741north subtropicsBm3.5
324016.27842north subtropicsAr8.83125017.27741north subtropicsBm2.9
324016.27842north subtropicsAr8.6125017.27741north subtropicsBm2.5
414014.57052warm temperateBm6[41]125017.27741north subtropicsBm2.5
414014.57052warm temperateBm9125517.27741north subtropicsBm1.6
414014.57052warm temperateBm7125017.27741north subtropicsBm3.8
414014.57052warm temperateBm8125017.27741north subtropicsBm3.5
413514.57052warm temperateBm6123517.27741north subtropicsBm4
413514.57052warm temperateBm5123017.27741north subtropicsBpier3.5
413514.57052warm temperateBm9123517.27741north subtropicsBpier2.5
413014.57052warm temperateBm5123017.27741north subtropicsBm2.6
413014.57052warm temperateBm5123017.27741north subtropicsBpier3.5
414514.57052warm temperateBm7123517.27741north subtropicsBpier4
413514.57052warm temperateBm4123017.27741north subtropicsBm4
413514.57052warm temperateBm6123017.27741north subtropicsBpier3
413514.57052warm temperateBm10123017.27741north subtropicsBpier3.5
162019.97842mid-subtropicsBpier4[42]123517.27741north subtropicsBm3.9
161519.97842mid-subtropicsBpier4.5123017.27741north subtropicsBpier3.8
163519.97842mid-subtropicsBpier4123517.27741north subtropicsBpier2.5
162019.97842mid-subtropicsBpier520604.26261mid temperate zoneBpier0[61]
162019.97842mid-subtropicsBpier420604.26261mid temperate zoneBpier0
162019.97842mid-subtropicsBpier4.520554.26261mid temperate zoneBpier0
163019.97842mid-subtropicsBpier5.520604.26261mid temperate zoneBpier0
204016.67742mid-subtropicsAr1[43]20504.26261mid temperate zoneBpier0
203516.67742mid-subtropicsAr120504.26261mid temperate zoneBpier0
203516.67742mid-subtropicsAr120554.26261mid temperate zoneBpier0
203016.67742mid-subtropicsAr120554.26261mid temperate zoneBpier0
204016.67742mid-subtropicsAr120504.26261mid temperate zoneBpier0
203016.67742mid-subtropicsAr120454.26261mid temperate zoneBpier0.8
203516.67742mid-subtropicsAr120604.26261mid temperate zoneBpier0
203516.67742mid-subtropicsAr120404.26261mid temperate zoneBpier0
204016.67742mid-subtropicsAr120354.26261mid temperate zoneBpier0.8
203516.67742mid-subtropicsAr320454.26261mid temperate zoneBpier0
203516.67742mid-subtropicsAr420354.26261mid temperate zoneBpier1.4
203516.67742mid-subtropicsAr220304.26261mid temperate zoneBpier2.6
203516.67742mid-subtropicsAr220354.26261mid temperate zoneBpier1.8
204516.67742mid-subtropicsAr120404.26261mid temperate zoneBpier0.2
204016.67742mid-subtropicsAr120554.26261mid temperate zoneBpier0
203516.67742mid-subtropicsAr220404.26261mid temperate zoneBpier1.8
203516.67742mid-subtropicsAr220504.26261mid temperate zoneBpier0.4
204516.67742mid-subtropicsAr320504.26261mid temperate zoneBpier0
203516.67742mid-subtropicsAr1213512.85561warm temperateBm12[62]
204016.67742mid-subtropicsAr1213512.85561warm temperateBm10
204516.67742mid-subtropicsAr2213512.85561warm temperateBm9
203516.67742mid-subtropicsAr2212512.85561warm temperateBm10
203516.67742mid-subtropicsAr2213012.85561warm temperateBpier11
95516.57941mid-subtropicsBm0.5[44]213512.85561warm temperateBpier12
95516.57941mid-subtropicsBm0.5213012.85561warm temperateBpier14
95516.57941mid-subtropicsBm1213512.85561warm temperateBpier10
95516.57941mid-subtropicsBm0.5214512.85562warm temperateBm13
95516.57941mid-subtropicsBm0.5213512.85562warm temperateBm12
95516.57941mid-subtropicsBm1214012.85562warm temperateBm11
95516.57941mid-subtropicsBm0.5213512.85562warm temperateBm8
95516.57941mid-subtropicsBm1213012.85562warm temperateBm9
95516.57941mid-subtropicsBm0.5212512.85562warm temperateBm12
95516.57941mid-subtropicsBm1213012.85562warm temperateBm13
95516.57941mid-subtropicsBm1213512.85562warm temperateBm10
95516.57941mid-subtropicsBm0.5213012.85562warm temperateBpier10
155012.76351warm temperateBm3.5[45]213512.85562warm temperateBpier9
154512.76351warm temperateBm5.5213012.85562warm temperateBpier9
155512.76351warm temperateBm4213012.85562warm temperateBpier11
155012.76351warm temperateBm5.5214512.45761warm temperateBpier7
155012.76351warm temperateBm7213012.45761warm temperateBpier11
154512.76351warm temperateBm6.5214012.45761warm temperateBpier13
155012.76351warm temperateBm5.5213512.45761warm temperateBpier9
154512.76351warm temperateBm8164512.45761warm temperateBpier5
154512.76351warm temperateBm8.5164512.45761warm temperateBpier7
154512.76351warm temperateBm10.5164512.45761warm temperateBpier4
155012.76351warm temperateBm12164512.45761warm temperateBpier13
154512.76351warm temperateBm11.5164512.45761warm temperateBpier9
154512.76351warm temperateBm8.5164512.45761warm temperateBpier10
155012.76351warm temperateBm6.5145012.45761warm temperateBm6
155012.76351warm temperateBm3.5145512.45761warm temperateBm7
154512.76351warm temperateBm5.5145512.45761warm temperateBm8
155012.76351warm temperateBm4144512.45761warm temperateBm6
154512.76351warm temperateBm5.5144512.45761warm temperateBm7
154512.76351warm temperateBm3.5144512.45761warm temperateBm10
155012.76351warm temperateBm6.5145512.45761warm temperateBpier11
153512.76351warm temperateBpier5145512.45761warm temperateBpier5
153512.76351warm temperateBpier3145512.45761warm temperateBpier9
154012.76351warm temperateBpier5.5125512.45761warm temperateBpier6
154012.76351warm temperateBpier6125512.45761warm temperateBpier7
153512.76351warm temperateBpier3.5125512.45761warm temperateBpier8
154012.76351warm temperateBpier3125512.45761warm temperateBpier6
2025155761warm temperateAr6[46]125512.45761warm temperateBpier9
2030155761warm temperateAr6125012.45761warm temperateBpier7
2025155761warm temperateAr6125512.45761warm temperateBpier8
314517.58042mid-subtropicsBm11.3[47]125512.45761warm temperateBpier6
313517.58042mid-subtropicsBm10.895512.75461warm temperateBpier2
314017.58042mid-subtropicsBm10.795512.75461warm temperateBpier1
314017.58042mid-subtropicsBm8.795512.75461warm temperateBpier2
315517.58042mid-subtropicsBpier12.395512.75461warm temperateBpier1
315017.58042mid-subtropicsBpier10.395512.75461warm temperateBpier1
312517.58042mid-subtropicsBm13.795512.75461warm temperateBpier1
314517.58042mid-subtropicsBm11.575512.75661warm temperateBpier2
315017.58042mid-subtropicsBm7.775012.75661warm temperateBpier3
314517.58042mid-subtropicsBm11.375512.75661warm temperateBpier2
314017.58042mid-subtropicsAr1175012.75662warm temperateBpier3
315017.58042mid-subtropicsAr11.375512.75662warm temperateBpier3
192515.37742north subtropicsBm2[48]75012.75662warm temperateBpier4[62]
192015.37742north subtropicsBm286012.45761warm temperateBpier2
192515.37742north subtropicsBm286012.45761warm temperateBpier3
192515.37742north subtropicsBm286012.45761warm temperateBpier2
192015.37742north subtropicsBm286012.45761warm temperateBpier3
192515.37742north subtropicsBm265512.75661warm temperateBpier5
192515.37742north subtropicsBm265512.75661warm temperateBpier4
192015.37742north subtropicsBm265512.75661warm temperateBpier5
192515.37742north subtropicsBm265512.75661warm temperateBpier5
192515.37742north subtropicsBm2182517.27851mid-subtropicsBm13[63]
192515.37742north subtropicsBm2183017.27851mid-subtropicsBm12
192015.37742north subtropicsBm2183017.27851mid-subtropicsBm15
383015.37742north subtropicsBm5181517.27851mid-subtropicsBm16
312515.37742north subtropicsAr5.5182517.27851mid-subtropicsBm16
213015.37742north subtropicsAr5182517.27851mid-subtropicsBm18
103022.17741south subtropicsBm10.5[49]182017.27851mid-subtropicsBm17
105022.17741south subtropicsBm7.5182017.27851mid-subtropicsBm20
283018.17942mid-subtropicsAr7 [50]182017.27851mid-subtropicsBm18
282518.17942mid-subtropicsAr251550137151warm temperateBm1.96[64]
282518.17942mid-subtropicsAr26.51550137151warm temperateBm1.71
374511.36552warm temperateAr3.6[51]1545137151warm temperateBm1.51
374511.36552warm temperateAr4.51545137151warm temperateBm1.5
374011.36552warm temperateAr3.71550137151warm temperateBm1.39
374011.36552warm temperateAr3.21540137151warm temperateBm1.88
373511.36552warm temperateAr3.335259.55162mid temperate zoneBm49.87[65]
374511.36552warm temperateAr4.435309.55162mid temperate zoneBm45.46
103015.37741north subtropicsBm6.5[52]35259.55162mid temperate zoneBm45.99
102515.37741north subtropicsBm535259.55162mid temperate zoneBm49.79
103015.37741north subtropicsBm7.51325247851edge tropicsBpier33.11[66]
103015.37741north subtropicsBm11340238051edge tropicsBm24.75
101515.37741north subtropicsBm31250238051edge tropicsBm17.02
101515.37741north subtropicsBpier61150248051edge tropicsBm15.2
103515.37741north subtropicsBpier3.51125238051edge tropicsBpla21.14
102515.37741north subtropicsBpier5.51340268051edge tropicsBm25.69
104015.37741north subtropicsBpier21350268051edge tropicsBm21.36
103015.37741north subtropicsBpier51040228651edge tropicsBm7.1
593512.76262warm temperateBm47[53]1540268051edge tropicsBm21
593012.76262warm temperateBm42930268051edge tropicsBpier31
593012.76262warm temperateBm351430268251edge tropicsBpier15.7
151525.97851edge tropicsBm7[54]1125268251edge tropicsBpier17
152025.97851edge tropicsBm7.51330268251edge tropicsBpier11.2
152525.97851edge tropicsBm6130228251edge tropicsBpier3
1125136562warm temperateBm9.7[55]1330267851edge tropicsBpier35.6
75521.88361edge tropicsBm1.32[56]3020137151warm temperateBm10[67]
75021.88361edge tropicsBm3.183015137151warm temperateBm34
75521.88361edge tropicsBm1.393015137151warm temperateBm22
75021.88361edge tropicsBm0.62285516.17151north subtropicsBm7[68]
74021.88361edge tropicsBm2.19286016.17151north subtropicsBm7.5
74021.88361edge tropicsBm1.01283516.17151north subtropicsBpier23
1 “Ar” represents arch ring; “Bm” represents beam; “Bpier” represents bridge pier; “Bpla” represents bridge platform.


  1. Ahmad, S. Reinforcement corrosion in concrete structures, its monitoring and service life prediction—A review. Cem. Concr. Compos. 2003, 25, 459–471. [Google Scholar] [CrossRef]
  2. Liu, B.; Qin, J.; Shi, J.; Jiang, J.; Wu, X.; He, Z. New perspectives on utilization of CO2 sequestration technologies in cement-based materials. Constr. Build. Mater. 2020, 121660. [Google Scholar] [CrossRef]
  3. Papadakis, V.G.; Vayenas, C.G. Experimental investigation and mathematical modeling of the concrete carbonation problem. Chem. Eng. Sci. 1991, 46, 1333–1338. [Google Scholar] [CrossRef]
  4. Saetta, A.V. The carbonation of concrete and the mechanisms of moisture, heat and carbon dioxide flow through porous materials. Cem. Concr. Res. 1993, 23, 761–772. [Google Scholar] [CrossRef]
  5. Ta, V.L.; Bonnet, S.; Kiesse, T.S.; Ventura, A. A new meta-model to calculate carbonation front depth within concrete structures. Constr. Build. Mater. 2016, 129, 172–181. [Google Scholar] [CrossRef] [Green Version]
  6. Salehi, H.; Burgueno, R. Emerging artificial intelligence methods in structural engineering. Eng. Struct. 2018, 171, 170–189. [Google Scholar] [CrossRef]
  7. Stone, J.; Blockley, D.; Pilsworth, B. Towards machine learning from case histories. Civ. Eng. Syst. 1989, 6, 129–135. [Google Scholar] [CrossRef]
  8. Liu, C.; Liu, J.; Liu, L. Study on the damage identification of long-span cable-stayed bridge based on support vector machine. In Proceedings of the International Conference on Information Engineering and Computer Science, Wuhan, China, 19–20 December 2009. [Google Scholar]
  9. Figueiredo, E.; Park, G.; Farrar, C.R.; Worden, K.; Figueiras, J. Machine learning algorithms for damage detection under operational and environmental variability. Struct. Health Monit. 2011, 10, 559–572. [Google Scholar] [CrossRef]
  10. Bartram, G.; Mahadevan, S. System Modeling for SHM Using Dynamic Bayesian Networks; Infotech Aerospace: Garden Grove, CA, USA, 2012. [Google Scholar]
  11. Son, H.; Kim, C. Automated color model–based concrete detection in construction-site images by using machine learning algorithms. J. Comput. Civ. Eng. 2011, 26, 421–433. [Google Scholar] [CrossRef]
  12. Dai, H.; Zhao, W.; Wang, W.; Cao, Z. An improved radial basis function network for structural reliability analysis. J. Mech. Sci. Technol. 2011, 25, 2151–2159. [Google Scholar] [CrossRef]
  13. Lu, N.; Noori, M.; Liu, Y. Fatigue reliability assessment of welded steel bridge decks under stochastic truck loads via machine learning. J. Bridge Eng. 2016, 22, 1–12. [Google Scholar] [CrossRef]
  14. Oh, C.K. Bayesian Learning for Earthquake Engineering Applications and Structural Health Monitoring. Doctor’s Thesis, California Institute of Technology, Pasadena, CA, USA, 2008. [Google Scholar]
  15. Alimoradi, A.; Beck, J.L. Machine-learning methods for earthquake ground motion analysis and simulation. J. Eng. Mech. 2015, 141, 04014147. [Google Scholar] [CrossRef]
  16. Rafiei, M.H.; Adeli, H. NEEWS: A novel earthquake early warning model using neural dynamic classification and neural dynamic optimization. Soil Dyn. Earthq. Eng. 2017, 100, 417–427. [Google Scholar] [CrossRef]
  17. Xu, C.; Yun, S.; Shu, Y. Concrete strength inspection conversion model based on SVM. J. Luoyang Inst. Sci. Technol. Nat. Sci. Ed. 2008, 2, 84–86. (In Chinese) [Google Scholar]
  18. Chen, B.; Guo, X.; Liu, G. Prediction of concrete properties based on rough sets and support vector machine method. J. Hydroelectr. Eng. 2011, 6, 251–257. [Google Scholar]
  19. Chen, B.T.; Chang, T.P.; Shih, J.Y.; Wang, J.J. Estimation of exposed temperature for fire-damaged concrete using support vector machine. Comput. Mater. Sci. 2009, 44, 913–920. [Google Scholar] [CrossRef]
  20. Aiyer, B.G.; Kim, D.; Karingattikkal, N.; Samui, P.; Rao, P.R. Prediction of compressive strength of self-compacting concrete using least square support vector machine and relevance vector machine. KSCE J. Civ. Eng. 2014, 18, 1753–1758. [Google Scholar] [CrossRef]
  21. Topçu, İ.B.; Sarıdemir, M. Prediction of compressive strength of concrete containing fly ash using artificial neural networks and fuzzy logic. Comput. Mater. Sci. 2008, 41, 305–311. [Google Scholar] [CrossRef]
  22. Bilim, C.; Atiş, C.D.; Tanyildizi, H.; Karahan, O. Predicting the compressive strength of ground granulated blast furnace slag concrete using artificial neural network. Adv. Eng. Softw. 2009, 40, 334–340. [Google Scholar] [CrossRef]
  23. Sarıdemir, M.; Topçu, İ.B.; Özcan, F.; Severcan, M.H. Prediction of long-term effects of GGBFS on compressive strength of concrete by artificial neural networks and fuzzy logic. Constr. Build. Mater. 2009, 23, 1279–1286. [Google Scholar] [CrossRef]
  24. Golafshani, E.M.; Behnood, A.; Arashpour, M. Predicting the compressive strength of normal and High-Performance Concretes using ANN and ANFIS hybridized with Grey Wolf Optimizer. Constr. Build. Mater. 2020, 232, 117266. [Google Scholar] [CrossRef]
  25. Kandiri, A.; Golafshani, E.M.; Behnood, A. Estimation of the compressive strength of concretes containing ground granulated blast furnace slag using hybridized multi-objective ANN and salp swarm algorithm. Constr. Build. Mater. 2020, 248, 118676. [Google Scholar] [CrossRef]
  26. Han, Q.; Gui, C.; Xu, J.; Lacidogna, G. A generalized method to predict the compressive strength of high-performance concrete by improved random forest algorithm. Constr. Build. Mater. 2019, 226, 734–742. [Google Scholar] [CrossRef]
  27. Čehovin, L.; Bosnić, Z. Empirical evaluation of feature selection methods in classification. Intell. Data Anal. 2010, 14, 265–281. [Google Scholar] [CrossRef] [Green Version]
  28. Lunetta, K.L.; Hayward, L.B.; Segal, J.; Van Eerdewegh, P. Screening large-scale association study data: Exploiting interactions using random forests. BMC Genet. 2004, 5, 1–13. [Google Scholar] [CrossRef] [Green Version]
  29. Ma, J.; Li, S.; Qin, H.; Hao, A. Adaptive appearance modeling via hierarchical entropy analysis over multi-type features. Pattern Recognit. 2019, 98, 107059. [Google Scholar] [CrossRef]
  30. Domínguez-Jiménez, J.A.; Campo-Landines, K.C.; Martínez-Santos, J.C.; Delahoz, E.J.; Contreras-Ortiz, S.H. A machine learning model for emotion recognition from physiological signals. Biomed. Signal Process. Control 2020, 65, 101646. [Google Scholar] [CrossRef]
  31. Choubin, B.; Abdolshahnejad, M.; Moradi, E.; Querol, X.; Mosavi, A.; Shamshirband, S.; Ghamisi, P. Spatial hazard assessment of the PM10 using machine learning models in Barcelona. Sci. Total Environ. 2020, 701, 134474. [Google Scholar] [CrossRef]
  32. Zhang, C.; Cao, L.; Romagnoli, A. On the feature engineering of building energy data mining. Sustain. Cities Soc. 2018, 39, 508–518. [Google Scholar] [CrossRef]
  33. Yuan, P.; Lin, D.; Wang, Z. Coal consumption prediction model of space heating with feature selection for rural residences in severe cold area in China. Sustain. Cities Soc. 2019, 50, 101643. [Google Scholar] [CrossRef]
  34. Zheng, L.; Cheng, H.; Huo, L.; Song, G. Monitor concrete moisture level using percussion and machine learning. Constr. Build. Mater. 2019, 229, 117077. [Google Scholar] [CrossRef]
  35. Azimi-Pour, M.; Eskandari-Naddaf, H.; Pakzad, A. Linear and non-linear SVM prediction for fresh properties and compressive strength of high volume fly ash self-compacting concrete. Constr. Build. Mater. 2020, 230, 117021. [Google Scholar] [CrossRef]
  36. Feng, D.C.; Liu, Z.T.; Wang, X.D.; Chen, Y.; Chang, J.Q.; Wei, D.F.; Jiang, Z.M. Machine learning-based compressive strength prediction for concrete: An adaptive boosting approach. Constr. Build. Mater. 2019, 230, 117000. [Google Scholar] [CrossRef]
  37. Wu, X.; Kumar, V.; Quinlan, J.R.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Philip, S.Y.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. 2008, 14, 1–37. [Google Scholar] [CrossRef] [Green Version]
  38. Hu, T. Inspection and Evaluation of Reinforced Concrete Arch Bridges. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2012. (In Chinese). [Google Scholar]
  39. Li, C. Research on Inspection, Evaluation and Reinforcement Technology of Existing Reinforced Concrete Arch Bridges. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2011. (In Chinese). [Google Scholar]
  40. Zhang, J. Study on Durability Evaluation Technology of In-Service Hyperbolic Arch Bridge. Master’s Thesis, Shandong University of Science and Technology, Qingdao, China, 2007. (In Chinese). [Google Scholar]
  41. Hu, J.; Wu, J.; Tang, J. Inspection and strengthening of a reinforced concrete truss arch bridge. J. Suzhou Univ. Sci. Technol. 2017, 30, 66–68. (In Chinese) [Google Scholar]
  42. Zhang, J. Research on Health Inspection and Reinforcement Technology of Existing Concrete Bridges. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2013. (In Chinese). [Google Scholar]
  43. Yu, H. Inspection and Evaluation of Existing Bridges. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2010. (In Chinese). [Google Scholar]
  44. Lu, X. Detection and Evaluation of Bearing Capacity of Existing Concrete Bridges. Master’s Thesis, Southwest Jiaotong University, Chengdu, China, 2016. (In Chinese). [Google Scholar]
  45. Ding, Y. Durability Evaluation of Qilin High Speed Concrete Bridges in Service and Analysis of Seismic Capability after Deterioration. Master’s Thesis, Xian University of Architecture and Technology, Xian, China, 2017. [Google Scholar]
  46. Liu, D. Analysis of Bearing Capacity of Existing Reinforced Concrete Arch Bridges. Master’s Thesis, Harbin Institute of Technology, Harbin, China, 2016. (In Chinese). [Google Scholar]
  47. Hou, H. Performance Assessment of Existing Concrete Bridge. Master’s Thesis, Chongqing Jiaotong University, Chongqing, China, 2013. (In Chinese). [Google Scholar]
  48. Sun, T. Study on Detection and Reinforcement of Old Bridges in Guiyang city. Master’s Thesis, Tianjin University, Tianjin, China, 2004. (In Chinese). [Google Scholar]
  49. Liu, J. Durability Assessment and Prediction of Concrete Bridges. Master’s Thesis, Hunan University, Changsha, China, 2014. (In Chinese). [Google Scholar]
  50. Zhong, H. Research on Durability Test and Reliability Evaluation of Existing Reinforced Concrete Bridge Members. Master’s Thesis, Changsha University of Science and Technology, Changsha, China, 2004. (In Chinese). [Google Scholar]
  51. Han, L. Study on Durability Evaluation and Maintenance Scheme of Reinforced Concrete Bridge. Master’s Thesis, Dalian University of Technology, Dalian, China, 2007. (In Chinese). [Google Scholar]
  52. An, Z. Research and Application of Bridge Inspection and Evaluation. Master’s Thesis, Guizhou University, Guiyang, China, 2009. (In Chinese). [Google Scholar]
  53. Zhang, J. Reliability Analysis and Residual Life of RC Bridges in Service. Master’s Thesis, Hebei University of Technology, Tianjin, China, 2014. (In Chinese). [Google Scholar]
  54. Xie, L. Durability Evaluation and Residual Life Prediction of In-Service Reinforced Concrete Bridges. Master’s Thesis, Wuhan University of Technology, Wuhan, China, 2009. (In Chinese). [Google Scholar]
  55. Ren, F. Study on Durability Evaluation of Reinforced Concrete Bridges. Master’s Thesis, Shandong Polytechnic University, Jinan, China, 2000. (In Chinese). [Google Scholar]
  56. Gao, X. Evaluation and Improvement of Carbonation Durability for Concrete Bridges. Master’s Thesis, Dalian University of Technology, Dalian, China, 2018. (In Chinese). [Google Scholar]
  57. Jiang, Y. Study on Structural Performance Evaluation and Reinforcement Technology of a Highway Hyperbolic Arch Bridge. Master’s Thesis, South China University of Technology, Guangzhou, China, 2009. (In Chinese). [Google Scholar]
  58. Zhang, B. Research on the Durability Evaluation Method of Reinforced Concrete Girder Structure Based on Fuzzy Theory. Master’s Thesis, Jilin University, Changchun, China, 2015. (In Chinese). [Google Scholar]
  59. Cai, E. Study on a New Comprehensive Evaluation Method for Durability of Existing Reinforced Concrete Bridges. Master’s Thesis, Tianjin University, Tianjin, China, 2007. (In Chinese). [Google Scholar]
  60. Dong, C. Fuzzy Comprehensive Evaluation of Concrete Bridge Durability. Master’s Thesis, Wuhan University of Technology, Wuhan, China, 2004. (In Chinese). [Google Scholar]
  61. Zhang, B. Technical Research on Inspection and Evaluation of Qinghong Bridge. Master’s Thesis, Liaoning Technical University, Fuxin, China, 2005. (In Chinese). [Google Scholar]
  62. Bao, Q. Research on the Durability of Reinforced Concrete Bridges in Beijing. Master’s Thesis, Beijing University of Technology, Beijing, China, 2003. (In Chinese). [Google Scholar]
  63. Guo, D. Prediction of Performance Degradation and Residual Service Life of In-Service R,C, Bridges in Coastal Areas. Master’s Thesis, Zhejiang University, Zhejiang, China, 2014. (In Chinese). [Google Scholar]
  64. Li, F. Analysis on the Present Situation of Prestressed Concrete Bridge Structure in Service and Evaluation of Residual Bearing Capacity. Master’s Thesis, Qingdao University of Technological, Qingdao, China, 2010. (In Chinese). [Google Scholar]
  65. Zhang, C. Study on Durability Test of Hongtu Bridge. Master’s Thesis, Northeastern University, Shenyang, China, 2005. (In Chinese). [Google Scholar]
  66. Dai, Y. Damage Analysis and Reinforcement Technology of Small and Medium-Sized Bridges in Hainan Province. Master’s Thesis, Southeast University, Nanjing, China, 2012. (In Chinese). [Google Scholar]
  67. Zhao, L. Detection and Evaluation of Durability of R,C, Bridge under Marine Environment. Master’s Thesis, Qingdao Technological University, Qingdao, China, 2010. (In Chinese). [Google Scholar]
  68. Su, Y. Reinforcement Based on Diseases Detection with Analysis in Bridge of Huaihe River Bridge. Master’s Thesis, AnHui University of Science and Technology, Huainan, China, 2013. (In Chinese). [Google Scholar]
  69. College of Urban and Environmental Science, Peking University. Climatic zoning map of China; Geographic Data Sharing Infrastructure. Available online: (accessed on 20 February 2021).
  70. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  71. Poul, A.K.; Shourian, M.; Ebrahimi, H. A Comparative Study of MLR, KNN, ANN and ANFIS Models with Wavelet Transform in Monthly Stream Flow Prediction. Water Resour. Manag. 2019, 33, 2907–2923. [Google Scholar] [CrossRef]
  72. Rafiei, M.H.; Adeli, H. A novel machine learning-based algorithm to detect damage in high-rise building structures. Struct. Des. Tall Spec. Build. 2017, 26, e1400. [Google Scholar] [CrossRef]
  73. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  74. Hu, G.; Liu, L.; Tao, D.; Song, J.; Tse, K.T.; Kwok, K.C.S. Deep learning-based investigation of wind pressures on tall building under interference effects. J. Wind Eng. Ind. Aerodyn. 2020, 201, 104138. [Google Scholar] [CrossRef]
  75. Pei, L.; Sun, Z.; Yu, T.; Li, W.; Hao, X.; Hu, Y.; Yang, C. Pavement aggregate shape classification based on extreme gradient boosting. Constr. Build. Mater. 2020, 256, 119356. [Google Scholar] [CrossRef]
  76. Dong, E.; Li, C.; Li, L.; Du, S.; Belkacem, A.N.; Chen, C. Classification of multi-class motor imagery with a novel hierarchical SVM algorithm for brain-computer interfaces. Med. Biol. Eng. Comput. 2017, 55, 1809–1818. [Google Scholar] [CrossRef] [PubMed]
  77. López, J.; Maldonado, S. Multi-class second-order cone programming support vector machines. Inf. Sci. 2016, 330, 328–341. [Google Scholar] [CrossRef]
  78. Zhang, R.; Huang, G.B.; Sundararajan, N.; Saratchandran, P. Multicategory Classification Using an Extreme Learning Machine for Microarray Gene Expression Cancer Diagnosis. IEEE/ACM Trans. Comput. Biol. Bioinform. 2007, 4, 485–495. [Google Scholar] [CrossRef] [PubMed]
  79. Burges, C.J.C. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
Figure 1. The distribution of the bridges involved in this study.
Figure 1. The distribution of the bridges involved in this study.
Crystals 11 00210 g001
Figure 2. Importance of each parameter.
Figure 2. Importance of each parameter.
Crystals 11 00210 g002
Figure 3. Histogram of neutral depth of concrete components of the Nanjing Yangtze river’s bridge.
Figure 3. Histogram of neutral depth of concrete components of the Nanjing Yangtze river’s bridge.
Crystals 11 00210 g003
Figure 4. An illustration of support vector machine (SVM).
Figure 4. An illustration of support vector machine (SVM).
Crystals 11 00210 g004
Figure 5. An illustration of the effect of kernel function.
Figure 5. An illustration of the effect of kernel function.
Crystals 11 00210 g005
Figure 6. An illustration of K-nearest neighbor (KNN).
Figure 6. An illustration of K-nearest neighbor (KNN).
Crystals 11 00210 g006
Figure 7. Accuracy of models on different neutralization levels of concrete.
Figure 7. Accuracy of models on different neutralization levels of concrete.
Crystals 11 00210 g007
Figure 8. The accuracy of models in terms of the service time of concrete.
Figure 8. The accuracy of models in terms of the service time of concrete.
Crystals 11 00210 g008
Figure 9. The accuracy of models in terms of the compressive strength of concrete.
Figure 9. The accuracy of models in terms of the compressive strength of concrete.
Crystals 11 00210 g009
Figure 10. The accuracy of models in terms of the temperature of exposure situations.
Figure 10. The accuracy of models in terms of the temperature of exposure situations.
Crystals 11 00210 g010
Figure 11. The accuracy of models in terms of the relative humidity of exposure situations.
Figure 11. The accuracy of models in terms of the relative humidity of exposure situations.
Crystals 11 00210 g011
Table 1. Detailed information of the dataset.
Table 1. Detailed information of the dataset.
AgeAgeyears15920.79 Crystals 11 00210 i001
Compressive strengthfMpa156038.97 Crystals 11 00210 i002
Temperaturet4.22615.26 Crystals 11 00210 i003
HumidityRH%518671.22 Crystals 11 00210 i004
Acid rainpH-pH < 4.6, pH < 5.6, pH > 5.6. Crystals 11 00210 i005
Load levelp-Level 1, level 2. Crystals 11 00210 i006
Location of bridge componentsLoc-Arch ring, beam, bridge pier, bridge platform. Crystals 11 00210 i007
ClimateClimate-North subtropics, south subtropics, edge tropics, warm temperate, mid temperate zone, mid-subtropics. Crystals 11 00210 i008
Table 2. The classification of neutralization depth.
Table 2. The classification of neutralization depth.
LevelNeutralization Depth
Slight level<6 mm
Medium level6mm ≤ d < 25 mm
Serious level≥25 mm
Table 3. Results of mesh search tuning for SVM models.
Table 3. Results of mesh search tuning for SVM models.
Training AccuracyVerification AccuracyMacro Recall RateParameters 4
SVM(rbf 1)939186 C = 8 ,   γ = 1.5
SVM(tanh 2)696946 C = 0.01 ,   γ = 10 ,   c = 0.1  
SVM(poly 3)948884 C = 10 ,   γ = 1 ,   c = 1 ,   d e g = 4
KNN929084 k = 1
AdaBoost948784 M = 5 ,   l e a r n i n g   r a t e = 0.5
XGBoost948776 M = 60 ,   l e a r n i n g   r a t e = 0.5
1 “rbf” represents radial basis kernel [70].   K ( u , v ) = e γ | u v | 2 . 2 “tanh” represents hyperbolic tangent kernel [79].   K ( u , v ) = tanh ( γ · u · v + c ) . 3 “poly” represents polynomial kernel [79].   K ( u , v ) = ( γ · u · v + c ) d e g . 4 Parameters C ,   k and M in this column are explained in Section 4.1, Section 4.2, Section 4.3 and Section 4.4.
Table 4. The obfuscation matrixes of models.
Table 4. The obfuscation matrixes of models.
L1 1676060130658071206112063100
L2 245312533075101047145405530
L3 3013130013013013022
1 L1 represents the slight level. 2 L2 represents the medium level. 3 L3 represents the serious level.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Duan, K.; Cao, S.; Li, J.; Xu, C. Prediction of Neutralization Depth of R.C. Bridges Using Machine Learning Methods. Crystals 2021, 11, 210.

AMA Style

Duan K, Cao S, Li J, Xu C. Prediction of Neutralization Depth of R.C. Bridges Using Machine Learning Methods. Crystals. 2021; 11(2):210.

Chicago/Turabian Style

Duan, Kangkang, Shuangyin Cao, Jinbao Li, and Chongfa Xu. 2021. "Prediction of Neutralization Depth of R.C. Bridges Using Machine Learning Methods" Crystals 11, no. 2: 210.

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop