Article

Decision-Support System for Estimating Resource Consumption in Bridge Construction Based on Machine Learning

by Miljan Kovačević 1,*, Nenad Ivanišević 2, Dragan Stević 1, Ljiljana Milić Marković 3, Borko Bulajić 4, Ljubo Marković 1 and Nikola Gvozdović 5
1 Faculty of Technical Sciences, University of Pristina, Knjaza Milosa 7, 38220 Kosovska Mitrovica, Serbia
2 Faculty of Civil Engineering, University of Belgrade, Bulevar kralja Aleksandra 73, 11000 Belgrade, Serbia
3 Department of Transportation Engineering and Geotechnics, Faculty of Architecture, Civil Engineering and Geodesy, University of Banja Luka, 78000 Banja Luka, The Republic of Srpska, Bosnia and Herzegovina
4 Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21000 Novi Sad, Serbia
5 Master of Science, Univers d.o.o., 38220 Kosovska Mitrovica, Serbia
* Author to whom correspondence should be addressed.
Axioms 2023, 12(1), 19; https://doi.org/10.3390/axioms12010019
Submission received: 6 December 2022 / Revised: 16 December 2022 / Accepted: 19 December 2022 / Published: 24 December 2022
(This article belongs to the Special Issue Multiple-Criteria Decision Making II)

Abstract

The paper presents and analyzes state-of-the-art machine learning techniques that can be applied as a decision-support system for estimating resource consumption in the construction of reinforced concrete and prestressed concrete road bridges. A database of concrete consumption in completed bridges, together with their design characteristics, formed the basis of the estimation model. The models were built using information from 181 reinforced concrete bridges on the eastern and southern branches of Corridor X in Serbia, with a total value of more than 100 million euros. Artificial neural network (ANN) models, regression tree (RT) models, support vector machine (SVM) models, and Gaussian process regression (GPR) were analyzed. The accuracy of each model is determined by multi-criteria evaluation against four accuracy criteria: root mean square error (RMSE), mean absolute error (MAE), Pearson's linear correlation coefficient (R), and mean absolute percentage error (MAPE). According to all established criteria, the GPR model demonstrated the greatest accuracy in estimating the concrete consumption of bridges. The study shows that using automatic relevance determination (ARD) covariance functions yields the most accurate models and also makes it possible to assess how important each input variable is to the model's accuracy.

1. Introduction

On the routes of transportation systems, bridges serve as crucial elements that connect people and facilitate a variety of economic activities. More than two million bridges are in use globally today [1]. Over time, both nationally and internationally, the construction of bridges has steadily increased. On the eastern and southern branches of highway Corridor X in Serbia, contracts for the construction of about 200 reinforced concrete bridges totaling more than 100 million euros were signed from 2009 to 2015 [2,3].
Current practice in organizing bridge construction assumes that activities are planned and resources are assessed so that the bridge can be built at the lowest possible cost and with the best possible quality. Ensuring an accurate resource assessment in the early stages of project development is crucial because it has a significant impact on all subsequent stages. A model for such an assessment serves as a foundation for further examination and evaluation of the project's technological solution from a techno-economic perspective.
From the perspective of project execution, it is essential to create a model that enables a quick and accurate assessment of costs and materials during the early stages of project preparation while taking into account the fundamental properties of future bridges.
Key components for the construction of bridges over the Nile River in Egypt were analyzed by Marcous et al. in 2001 [4], with 22 prestressed concrete bridges represented in the data collection. Material consumption was modeled using a neural network. The maximum span, type of contract, bearing structure of the bridge (box girder, I girder), static system, and method of bridge construction were all taken into consideration as input variables for the model. The weight of the prestressing steel and the volume of concrete were chosen as output variables. Neural network models with various numbers of neurons in the middle layer were examined. The model’s accuracy was demonstrated by an error size of 7.5% for the amount of concrete and 11.5% for the forecast of the amount of prestressing steel.
In 2001, Flyvbjerg and Skamris [5] conducted research on the use of several techniques for estimating the cost of traffic infrastructure facilities. The railway infrastructure facilities, tunnels, bridges, and roadways were examined. The total value of the analyzed projects was over USD 90 billion. The outcome showed a sizable percentage of error in calculating the construction costs of all the facilities. While the overall average error for all evaluated projects was 27.6%, the average percentage error in the estimation for bridges and tunnels was 33.8%.
A technique for calculating the expenses of constructing bridges and culverts using multiple regression analysis was developed by Mostafa in 2003 [6]. There were data for 54 bridges and 72 culverts in the database. The models that were built were given as prediction equations.
Fragkakis and others worked on models for estimating the cost of bridge superstructures in 2010 [7]. Regression analysis and the bootstrap approach were both discussed. The research was based on data pertaining to the construction of 68 bridges on Greece's Egnatia highway. The analysis examined how the fundamental variables identified during the preliminary phase affect costs. Ten-fold cross-validation was used for model evaluation. The best accuracy measured by the MAPE value using the mentioned models was 20%.
In another study in 2010 [8], Fragkakis and others proposed the use of computer-aided models for calculating the construction costs of prestressed concrete road bridges. Greece's Egnatia highway bridges were analyzed once again. The volumes of concrete and ribbed steel, two crucial components in the construction of bridge piers, were modeled using regression analysis. When estimating the volume of concrete, a coefficient of determination R2 of 72% was achieved; for estimating steel consumption, the R2 was 85%.
In 2010 [9], Kim and others discussed the application of genetic algorithms and the case-based reasoning (CBR) method in the preliminary cost estimation of bridge construction. Costs were broken down and expressed per square foot. The model was developed using information from 216 prestressed bridges constructed in South Korea between 2000 and 2005. To quantify the accuracy attained when utilizing CBR, the mean absolute error rate (MAER) of 5.99% was obtained.
A technique for calculating construction costs early in a project was developed in 2011 by Arafa and Alqedra [10]. As a test model, an ANN was examined. The database contained information on 71 projects. The ground floor area, typical floor area, number of floors, number of pillars, method of foundation, number of elevators, and number of rooms were all taken into consideration as input variables. The model was constructed using a multi-layer perceptron model. The model’s attained accuracy in this study, as measured by the R2 coefficient of determination, was 97%.
Fragkakis and others modeled the consumption of steel and concrete for bridge foundations in 2011 to estimate expenses at the conceptual stage of the project [11]. The coefficient of determination R2, whose value surpassed 77% for the examined models, was used in the study to evaluate models using regression analysis.
In 2013 [12], Mohamid dealt with cost estimation during the conceptual stage of road construction projects, with 52 completed Saudi Arabian projects included in the statistics. The application of several regression models was examined. The volume of soil excavated, the final layer expressed per surface unit, the length of the road, and the width of the road were all examined as input factors. The results of the MAPE and R2 criteria, which were used to assess the model, fell between the ranges of 17.2% to 32.1% and 0.88 to 0.97, respectively.
A design cost-estimation research for bridge projects in North Carolina, USA, was carried out in 2013 by Hollar et al. [13]. Analyses of 461 projects’ costs from 2001 to 2009 were performed, and the study used 16 categorical variables to identify the major cost factors. For modeling, multiple linear regression was employed. The MAPE value of 42.7% was obtained as a measure of the model’s accuracy.
In order to estimate construction costs, Chou et al. used multiple regression, neural networks, and the CBR technique in 2015 [14]. The database used to examine the contract price for bridge construction contained information on 275 projects that were contracted in Taiwan during 2008 and 2009. Numerous input variables were examined. A ten-fold cross-validation procedure was employed to rate the model’s forecasting capability. Using MAPE criteria, models’ accuracy was assessed. The study found that, with a MAPE value of 13.09%, the neural network model performed best.
In 2015 [15], Fragkakis and others addressed the issue of culvert construction costs. The database used to create the model contained information on 104 culverts on the Greek Egnatia motorway. The research led to the definition of regression equations that calculate the amounts of concrete and steel used per linear meter of culvert. Ten-fold cross-validation was used to test the models, and they were assessed using the MAPE criterion. The results demonstrated estimation accuracies of 13.78% and 19.79%, respectively, for the concrete and steel used.
A neural network model was used by Marinelli et al. in 2015 [16] to develop a model for evaluating the material consumption of bridge superstructures, and 68 bridges completed in Greece provided the data used to train the model. A multi-layer perceptron neural network model was used. The models were evaluated using Pearson's linear correlation coefficient R. The training, validation, and test data sets produced similar results, with values of 0.99937, 0.99288, and 0.9973, respectively.
Antoniou et al. conducted an analysis of the actual construction costs and concrete and steel consumption for underpasses in 2015 [17], with 28 underpasses in Greece represented in the database. The study provided average material consumption and actual construction costs for such buildings, both given in terms of square meters.
In their investigation of the accuracy with which artificial intelligence methods can estimate cost and time in building projects, Peško et al. analyzed SVM and ANN models [18]. When estimating costs, SVM demonstrated greater precision, with a MAPE of 7.06%, as opposed to the most accurate ANNs, which attained a MAPE of 25.38%. Estimating the duration of the works proved to be more challenging: for SVM and ANNs, the best MAPEs were 22.77% and 26.26%, respectively.
A methodology for calculating the consumption of essential resources during the construction of road bridges was developed by Dimitriou and others in 2018 [19]. The use of neural network models and linear regression models was covered in the study. Concrete and steel consumption were modeled. The study made use of data from 68 Greek bridges. The accuracy of the model was evaluated using the MAPE and R2 values. Depending on the applied span structure, the MAPE value ranged from 11.48% to 16.12% when predicting the consumption of concrete alone for the superstructure of the bridge, whereas the values for R2 ranged from 0.979 to 0.995. The obtained MAPE and R2 values were 37% and 0.962, respectively, for concrete and 31% and 0.962 for steel in terms of consumption per column.
In 2021 [20], Kovačević et al. developed various models for estimating the cost of reinforced concrete (RC) and prestressed concrete (PC) bridges. The database for the completed 181 bridges on Serbia’s Corridor X included information about the project’s characteristics and the tender papers. A model based on Gaussian random processes was chosen as the research’s best option, and its accuracy, as measured by the MAPE criterion, was 10.86%.
Hanak and Vitkova, in their research in 2022, aimed to identify dispute factors and the related consequences of a case study involving the construction of a road to commercial premises [21]. The research explored a specific case study: a private road construction project implemented in the Czech Republic. The analysis consisted of identifying potential problems and discussing their implications. The problem was investigated on three levels: economic, technical, and legal.
In 2022, Tijanić et al. [22] investigated road construction projects on the territory of the Republic of Croatia that went over budget. Artificial neural networks were used to estimate costs at various stages of the project life cycle using data on construction costs gathered from previous projects. The database of roads built in Croatia was used during the modeling process. The general regression neural network (GRNN) achieved the best accuracy with a MAPE of 13% and a coefficient of determination of 0.9595.
Kovačević and Bulajić in 2022 [23] conducted research on the best model that can be used to forecast the consumption of prestressed steel in the construction of PC bridges using machine learning (ML) techniques. The database contained information on 75 completed pre-stressed bridges in Serbia’s Corridor X. According to the established criterion MAPE, the model’s achieved accuracy for estimating prestressed steel consumption per square meter of the bridge superstructure was 6.55%.
The basic hypothesis of this investigation is that, in the early stages of project development, artificial intelligence methods can quickly and sufficiently accurately estimate the consumption of key materials in RC and PC road bridges from the basic characteristics of the future bridges, using a database of historical data on contracted bridges on which the model is trained.
In current practice, resource estimation relies mainly on expert judgment and the experts' previous experience; it is often subjective and prone to significant errors. The model for assessing concrete consumption in bridge construction formed in this research is analogous to an expert assessment, with the important difference that previous experience is incorporated into the model itself, and the process is fast, automated, objective, significantly more accurate, and cheaper.
As part of the research, a large number of models for estimating concrete consumption in bridge construction were developed, analyzed, and verified. The established database enables future resource estimates to be made systematically and on a scientific basis, and any expansion of the historical database could easily be incorporated into the model and increase its accuracy.

2. Methods

The most modern ML methodologies are presented and examined in this research in order to estimate the resource consumption of reinforced concrete and prestressed concrete road bridges. The use of artificial neural networks, ensembles of regression trees, models based on the method of support vectors, and Gaussian random processes were all examined. All models were analyzed in the Matlab programming environment.

2.1. MLP Neural Network

A multi-layer perceptron (MLP) is a feed-forward neural network made up of at least three layers of neurons: an input layer, a hidden layer, and an output layer. As illustrated in Figure 1, a three-layer MLP network with n inputs and one output is an example of an MLP.
The characteristics of the neurons and the type of activation function determine the properties of the network. Consider using the network as a universal approximator. Then, in order to approximate nonlinear relationships between input and output variables, it must apply nonlinear activation functions in the hidden layer. The use of the sigmoid activation function (1) in the hidden layer has produced excellent results in a sizable number of studies:
$$\varphi(x) = \frac{1}{1 + e^{-x}}$$
The dimensions of the input vector determine the number of input neurons, and the dimensions of the output vector determine the number of output neurons. The number of inputs and outputs, the number of training pairs, the amount of noise in the training pairs, the complexity of the error function, the network architecture, and the training algorithm are just a few of the factors that affect the optimal number of neurons in the hidden layer.
The optimal number of neurons for the hidden layer is still primarily determined by experimentation. In this situation, it is recommended [20,23,24] to start with a simple structure, such as a network with just one neuron in the hidden layer, before progressively increasing the number of neurons and evaluating the outcomes.
It is advised [20,23,24] to use the following equations to calculate the maximum number of hidden layer neurons [25]:
$$N_H \le 2 \times N_i + 1$$
$$N_H \le \frac{N_s}{N_i + 1}$$
where $N_i$ denotes the number of inputs to the neural network and $N_s$ is the number of instances used for training. The lower of the two values obtained is adopted as the upper bound on the number of hidden neurons.
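As a quick numerical illustration (a Python sketch, not part of the original study), the two bounds can be evaluated directly and the smaller one adopted:

```python
import math

def max_hidden_neurons(n_inputs: int, n_samples: int) -> int:
    """Upper bound on the hidden-layer size from Equations (2) and (3):
    N_H <= 2*N_i + 1 and N_H <= N_s / (N_i + 1); the smaller value is adopted."""
    bound_1 = 2 * n_inputs + 1            # Equation (2)
    bound_2 = n_samples / (n_inputs + 1)  # Equation (3)
    return math.floor(min(bound_1, bound_2))

# Example with counts comparable to this study (six inputs, 181 bridges);
# the exact counts used by the authors may differ.
print(max_hidden_neurons(6, 181))
```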
The main drawback of this approach is the sheer volume of networks that need to be tested and trained.
The optimal number of neurons in the hidden layer is important for the successful operation of the model because it directly influences the generalization properties [3]. Too many neurons will form a network with too many parameters, which will be prone to overfitting. On the other hand, a network with insufficient neurons cannot accurately approximate the given nonlinear relationships, and a problem with underfitting occurs.

2.2. Regression Trees Models

A regression tree (RT) is an example of supervised machine learning in which the data are recursively divided based on particular parameter values. A tree consists of nodes, branches, and leaves. Each node represents one variable of the problem being modeled, and the appropriate branch is taken depending on the value in the node. The process is repeated iteratively until a terminal leaf is reached.
Trees divide the domain of the input variables into several regions in order to apply a local linear model to each of these regions instead of applying a single linear model to the entire domain.
When the tree is grown, the variable and the threshold value used to split each node must be chosen. The so-called greedy technique is applied in this study [26]: at each step, only the split that minimizes the squared error with respect to the target values in the current iteration is considered, without checking whether a different split would be more favorable in later iterations, which is why the approach is called greedy. In this context, Equation (4) is used [26]:
$$\min_{j,\,s}\left[\min_{c_1}\sum_{x_i \in R_1(j,s)}\left(y_i - c_1\right)^2 + \min_{c_2}\sum_{x_i \in R_2(j,s)}\left(y_i - c_2\right)^2\right]$$
where $\hat{c}_1 = \operatorname{average}\left(y_i \mid x_i \in R_1(j, s)\right)$ and $\hat{c}_2 = \operatorname{average}\left(y_i \mid x_i \in R_2(j, s)\right)$ (see Appendix A).
In this case, the domain is divided into M regions $R_1, R_2, \dots, R_M$, and the RT model provides a prediction by assigning a point to a region depending on the values of the input variables and the split values $s$ in the nodes (Figure 2). Within the assigned region, the considered point is given an output value equal to the average output value of the training samples in that region.
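The greedy split of Equation (4) can be illustrated with a short Python sketch (an illustrative implementation, not the one used in the study): for every candidate variable j and threshold s, the two region means are computed and the split with the smallest total squared error is selected.

```python
import numpy as np

def best_greedy_split(X: np.ndarray, y: np.ndarray):
    """Return (j, s) minimizing Equation (4): the total squared error of the
    regions R1(j, s) = {x: x_j <= s} and R2(j, s) = {x: x_j > s}, with each
    region predicted by its mean target value (c1_hat, c2_hat)."""
    best = (None, None, np.inf)
    n, p = X.shape
    for j in range(p):
        for s in np.unique(X[:, j])[:-1]:      # candidate thresholds (keep right side non-empty)
            left, right = X[:, j] <= s, X[:, j] > s
            c1, c2 = y[left].mean(), y[right].mean()
            sse = ((y[left] - c1) ** 2).sum() + ((y[right] - c2) ** 2).sum()
            if sse < best[2]:
                best = (j, s, sse)
    return best[:2]

# Tiny synthetic example: the chosen split separates the two clusters of y values.
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 4.9])
print(best_greedy_split(X, y))   # e.g., (0, 3.0)
```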
Individual RTs often serve as base models that form ensembles of RTs (Figure 3). Bootstrap sampling is used to create a dataset for each tree in the ensemble. The procedure essentially involves sampling with replacement from the original dataset, so that datasets of the same size as the original are formed for training the individual ensemble models. When ensembles constructed in this way are used for regression, the prediction for a test sample x is the mean of the predictions of the individual trees that make up the ensemble (Figure 3).
When creating individual trees, a subset of m randomly selected input variables from p possible variables of the analyzed problem can be considered in the case of the random forest (RF) algorithm. Alternatively, an entire set of input variables can be considered in the case of the TreeBagger (TB) algorithm [26,27,28,29].
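In scikit-learn terms (used here purely as an illustrative stand-in for the Matlab implementation, with placeholder hyperparameter values), the difference corresponds roughly to the max_features setting: a random subset of variables per split for RF versus the full set of variables for a TreeBagger-style bagged ensemble.

```python
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

n_trees = 500  # ensemble size used for the RF/TB models in this study

# RF-style: a random subset of the input variables is considered at each split.
rf = RandomForestRegressor(n_estimators=n_trees, max_features=2, min_samples_leaf=5)

# TreeBagger-style: bootstrap sampling, but all variables are split candidates.
# (older scikit-learn versions name this argument base_estimator)
tb = BaggingRegressor(estimator=DecisionTreeRegressor(min_samples_leaf=5),
                      n_estimators=n_trees, bootstrap=True)
# rf.fit(X_train, y_train); tb.fit(X_train, y_train)
```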
When using the boosted trees (BT) model [30,31,32], the initial RT is grown subject to the established limit on the maximum number of splits (Figure 4). Once the difference between the target value and the output of that tree has been computed, a subsequent tree is used to model these residuals and is then combined with the preceding tree, thereby improving the previous model. Typically, the residuals are not modeled in full; their contribution is multiplied by a quantity smaller than one, known as the learning rate, and this gradual modeling of the residuals prevents overtraining of the model. Repeating the process yields an ensemble of boosted trees. BT models are built sequentially, as opposed to RF and TB models, whose individual trees can be created in parallel.
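The residual-fitting idea can be sketched as follows (a simplified Python illustration under squared-error loss; the function and parameter names are assumptions, not the Matlab model used in the study):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_trees(X, y, n_trees=100, max_splits=8, learning_rate=0.5):
    """Minimal boosting sketch: each shallow tree models the residuals of the
    current ensemble, and only a fraction of its prediction (the learning rate)
    is added back."""
    y = np.asarray(y, dtype=float)
    baseline = y.mean()                        # initial constant model
    prediction = np.full(len(y), baseline)
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction
        tree = DecisionTreeRegressor(max_leaf_nodes=max_splits + 1)  # caps the number of splits
        tree.fit(X, residuals)
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return baseline, trees

def predict_boosted_trees(baseline, trees, X, learning_rate=0.5):
    pred = np.full(np.asarray(X).shape[0], baseline)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```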

2.3. Support Vector Machines for Regression (SVR)

The basic idea of the SVM method is the transformation of the input space and the use of the linear model in the transformed space.
Let us assume that we have a set of training data from which we may infer the input-output dependence, i.e., the mapping f. The training data set $D = \{(x_i, y_i) \in \mathbb{R}^n \times \mathbb{R},\ i = 1, \dots, l\}$ contains $l$ pairs $(x_1, y_1), (x_2, y_2), \dots, (x_l, y_l)$, where the inputs $x$ are $n$-dimensional vectors $x \in \mathbb{R}^n$, and the outputs $y$ are continuous values.
The approximation function [33] is analyzed by the SVM method and has the following form [33,34]:
$$f(x, w) = \sum_{i=1}^{N} w_i\,\varphi_i(x)$$
where the functions $\varphi_i(x)$ are referred to as attributes. Although it is not explicitly displayed, the bias $b$ is factored into the value of the weight vector $w$. The best way to understand the SVR approach is to first examine linear regression [33]:
$$f(x, w) = w^T x + b$$
In order to build the model, it is necessary to define the so-called Vapnik's error function [34], which is equal to zero if the difference between the predicted value $f(x, w)$ and the observed value is less than some value $\varepsilon$ (Figure 5).
The error function defined in this way forms a so-called $\varepsilon$ cylinder or tube. If a prediction lies inside the tube, the error is zero; for predictions outside the tube, the error equals the amount by which the deviation between the predicted and observed value exceeds the tube radius $\varepsilon$ [33]:
$$\left|y - f(x, w)\right|_{\varepsilon} = \begin{cases} 0, & \text{if } \left|y - f(x, w)\right| \le \varepsilon \\ \left|y - f(x, w)\right| - \varepsilon, & \text{otherwise} \end{cases}$$
The empirical risk function is established to apply the SVR regression [33,34]:
$$R_{emp}^{\varepsilon}(w, b) = \frac{1}{l}\sum_{i=1}^{l}\left|y_i - w^T x_i - b\right|_{\varepsilon}$$
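Equations (7) and (8) translate directly into a few lines of Python (an illustrative sketch only), which makes the ε-insensitive behaviour explicit: errors smaller than ε contribute nothing to the empirical risk.

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps):
    """Vapnik's epsilon-insensitive loss, Equation (7): zero inside the tube,
    otherwise the excess |y - f(x, w)| - eps."""
    return np.maximum(np.abs(np.asarray(y_true) - np.asarray(y_pred)) - eps, 0.0)

def empirical_risk(y_true, y_pred, eps):
    """Empirical risk of Equation (8): the mean epsilon-insensitive loss."""
    return float(eps_insensitive_loss(y_true, y_pred, eps).mean())
```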
The objective of the SVM algorithm for a regression problem (SVR) is to concurrently minimize the empirical risk $R_{emp}^{\varepsilon}$ and the value $\|w\|^2$ [33]. In order to find the hyperplane $f(x, w) = w^T x + b$ in linear regression, the following expression must be minimized:
$$R = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}\left|y_i - f(x_i, w)\right|_{\varepsilon}$$
The constant $C$ is a penalty parameter that balances the approximation error and the norm of the weight vector $\|w\|$. If the slack variables $\xi_i$ and $\xi_i^*$ (Figure 5) are introduced, the previous equation reduces to the following expression, which needs to be optimized [33]:
$$R(w, \xi, \xi^*) = \frac{1}{2}\|w\|^2 + C\left(\sum_{i=1}^{l}\xi_i + \sum_{i=1}^{l}\xi_i^*\right)$$
As a result of the optimization, Lagrange multipliers are obtained in the form of pairs $(\alpha_i, \alpha_i^*)$, and utilizing them, the optimal weight vector values are found by applying the equation [33]:
$$w = \sum_{i=1}^{l}\left(\alpha_i^* - \alpha_i\right)x_i$$
as well as the optimal value for bias b 0 :
$$b_0 = \frac{1}{l}\sum_{i=1}^{l}\left(y_i - x_i^T w_0\right)$$
The optimal hyperplane is defined by the equation [33]:
$$z = f(x, w) = w^T x + b = \sum_{i=1}^{l}\left(\alpha_i^* - \alpha_i\right)x_i^T x + b$$
Equations for linear regression can be employed in the case of non-linear regression (Figure 5) by replacing the scalar product $x_i^T x_j$ with the kernel $K(x_i, x_j) = \Phi(x_i)^T \Phi(x_j)$.
For the case of non-linear regression, the same equations as for linear regression are obtained [33]:
$$w = \sum_{i=1}^{l}\left(\alpha_i^* - \alpha_i\right)\Phi(x_i)$$
$$f(x) = \sum_{i=1}^{l}\left(\alpha_i^* - \alpha_i\right)K(x_i, x) + b$$
The mapping function Φ need not be chosen explicitly when using the kernel K . A subset of training data vectors is linearly combined to obtain the optimal solution. We may train nonlinear models using essentially the same method as for linear SVM models by employing the so-called kernel trick.
In this study, the sequential minimal optimization (SMO) algorithm was used with the LIBSVM software for optimization [35,36,37]. LIBSVM software was used within the MATLAB 2020a program [38].
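Because scikit-learn's SVR also wraps the LIBSVM solver, an equivalent setup can be sketched in Python (illustrative only; the hyperparameter values shown are placeholders that would be tuned, e.g., by the grid search described in Section 4, and the kernels match the linear, RBF, and sigmoid functions examined there):

```python
from sklearn.svm import SVR

# epsilon-SVR variants; C, epsilon, and gamma are placeholder values to be tuned.
svr_linear  = SVR(kernel="linear",  C=1.0, epsilon=0.1)
svr_rbf     = SVR(kernel="rbf",     C=1.0, epsilon=0.1, gamma="scale")
svr_sigmoid = SVR(kernel="sigmoid", C=1.0, epsilon=0.1, gamma="scale")
# svr_rbf.fit(X_train, y_train); y_hat = svr_rbf.predict(X_test)
```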

2.4. Gaussian Process for Regression (GPR)

When an unknown function is modeled directly from experimental data, the Gaussian process assumption is that every finite subset of its function values follows a Gaussian normal distribution. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution [39].
A GP for the regression problem (GPR) solves the task of finding a function that returns a real value, $f: \mathbb{R}^m \to \mathbb{R}$, for a given data set of pairs $(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)$, where $x_i$ denotes the input vector and $y_i$ the scalar output value. An input $x$ can be thought of as a location at which the unknown function $f$ (treated as a random variable) following a normal distribution is approximated.
Let $X$ represent the input vector values for the training set and $f(X)$ the corresponding function values. The input vector values for the test data set will be denoted by $X^*$ and their corresponding function values by $f(X^*)$. The lengths of the vectors $X$ and $X^*$ are $n$ and $n^*$, respectively.
If the function values $f(X)$ and $f(X^*)$ are observed as realized values that follow the multidimensional Gaussian distribution, the following equation can be written:
$$\begin{bmatrix} f(X) \\ f(X^*) \end{bmatrix} \sim N\left(\begin{bmatrix} m(X) \\ m(X^*) \end{bmatrix},\ \begin{bmatrix} k(X, X) & k(X, X^*) \\ k(X, X^*)^T & k(X^*, X^*) \end{bmatrix}\right)$$
where $m(X)$ stands for the mean vector of the random variable $f(X)$ when $x = X$, $m(X^*)$ is the mean vector of the random variable $f(X^*)$ when $x = X^*$, and $k$ is the covariance function.
The covariance function, whose determination is a crucial step in the GPR application process, assesses how similar two different function values are at two different locations. The random variables f(xᵢ) and f(xⱼ) are correlated at separate locations, xᵢ and xⱼ, and the degree of correlation between f(xᵢ) and f(xⱼ) depends on the distance between xᵢ and xⱼ. The assumption is that the values of the functions corresponding to the points will be closer together the closer the points are. This correlation is modeled differently by using various covariance functions k , which has a substantial impact on the generalization of the model.
The squared exponential covariance function, for example, is defined by the following equation [39]:
$$k(x, x') = \sigma^2 \exp\left(-\frac{(x - x')^2}{2l^2}\right)$$
In this equation, $\sigma^2$ stands for the signal variance and $l$ for the so-called length scale parameter; $\sigma$ and $l$ are the model parameters. The term "prior" refers to the joint probability density over the random variables $f(X)$ and $f(X^*)$.
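For concreteness, Equation (17) can be evaluated with a few lines of Python (an illustrative sketch; the function and variable names are arbitrary):

```python
import numpy as np

def sq_exp_kernel(x1, x2, sigma=1.0, length_scale=1.0):
    """Squared exponential covariance, Equation (17):
    k(x, x') = sigma^2 * exp(-||x - x'||^2 / (2 * l^2))."""
    d2 = np.sum((np.asarray(x1, float) - np.asarray(x2, float)) ** 2)
    return sigma ** 2 * np.exp(-d2 / (2.0 * length_scale ** 2))

# Nearby inputs yield strongly correlated function values; distant inputs do not.
print(sq_exp_kernel([0.0, 0.0], [0.1, 0.1]), sq_exp_kernel([0.0, 0.0], [3.0, 3.0]))
```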
In practice, the outputs of the unknown function are observed with noise, and the measured values can be expressed as [39]:
$$y(X) = I_n f(X) + \epsilon^*$$
where $\epsilon^* \sim N\left(0, \eta^2 I_n\right)$, $\eta^2$ is the noise variance, and $I_n$ is the identity matrix of size $n \times n$.
Given that the experimental measurements are observed with noise, the marginal probability can be expressed by the following equation [39], obtained by applying the Gaussian marginalization rule and the Gaussian transformation rule:
$$p\left(y(X)\right) = \frac{1}{(2\pi)^{n/2}\,\det\left(K + \eta^2 I_n\right)^{1/2}}\exp\left(-\frac{1}{2}\left(y(X) - m(X)\right)^T\left(K + \eta^2 I_n\right)^{-1}\left(y(X) - m(X)\right)\right)$$
The model parameters $l$, $\sigma^2$, and $\eta^2$ are derived by maximizing the expression for $p\left(y(X)\right)$.
The equation for the joint distribution of $f(X^*)$ and $y(X)$, which is a multidimensional Gaussian distribution, is used to obtain the posterior distribution $P\left(f(X^*) \mid y(X)\right)$. The posterior $f(X^*) \mid y(X) \sim N\left(\mu^*, \sigma_*^2\right)$ is obtained by applying the conditioning rule to a multivariate Gaussian distribution, and its mean value and variance are given by the following equations [39]:
$$\mu^* = m(X^*) + k(X^*, X)\left(k(X, X) + \eta^2 I_n\right)^{-1}\left(y(X) - m(X)\right)$$
$$\sigma_*^2 = k(X^*, X^*) - k(X^*, X)\left(k(X, X) + \eta^2 I_n\right)^{-1} k(X^*, X)^T$$
Given that $y(X^*) \mid y(X) = I_{n^*}\left(f(X^*) \mid y(X)\right) + \epsilon^*$, the distribution of $y(X^*) \mid y(X)$ may be obtained by applying the Gaussian linear transformation rule.
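The posterior mean and variance of Equation (20) can be written out directly in numpy (a minimal sketch assuming a zero prior mean, m(X) = 0; the study's actual models were built in Matlab with the covariance functions listed in Section 4):

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, kernel, noise_var):
    """Posterior mean and variance of the GP at the test inputs (Equation (20)),
    assuming a zero prior mean m(X) = 0."""
    K    = np.array([[kernel(a, b) for b in X_train] for a in X_train])
    K_s  = np.array([[kernel(a, b) for b in X_train] for a in X_test])
    K_ss = np.array([[kernel(a, b) for b in X_test]  for a in X_test])
    A = np.linalg.inv(K + noise_var * np.eye(len(X_train)))   # (k(X, X) + eta^2 I_n)^-1
    mean = K_s @ A @ y_train                                   # posterior mean
    var  = np.diag(K_ss - K_s @ A @ K_s.T)                     # posterior variance
    return mean, var

# Toy usage with a squared exponential kernel (arbitrary parameter values).
se = lambda a, b: np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / 2.0)
m, v = gp_posterior([[0.0], [1.0], [2.0]], np.array([1.0, 2.0, 1.5]),
                    [[1.5]], se, noise_var=0.1)
```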

3. Dataset

The basis for forming a model for estimating the total amount of concrete (concrete of various classes) used in bridge construction is a suitable number of bridges with project documentation containing their technical specifications. Data on 181 highway bridges built on the eastern and southern branches of Corridor X in Serbia (Figure 6) are included in the database for this study, comprising 104 bridges with RC superstructures cast in situ and 77 bridges with PC superstructures (prefabricated or cast in situ). The accuracy of the created model depends critically on the choice of suitable predictors during modeling. The following variables were examined as independent variables in the model for estimating the total volume of concrete:
$x_1$: Average bridge span
$x_2$: Total bridge span length
$x_3$: Bridge width
$x_4$: Type of construction
$x_5$: Average pier height
$x_6$: Foundation type
The dependent variable in the study is $y$, the total volume of concrete for the construction of the bridge (defined on the basis of the project documentation).
To account for the bridges with several spans of varying lengths, it was required to introduce the variable average bridge span. The variable is particularly informative when, for instance, different bridges of identical overall length varied in the number of shorter and longer spans, with some having a bigger proportion of shorter spans than others.
The variable type of construction, which is assigned the value 1 for prefabricated bridges and the value 0 for monolithic bridges that are cast in formwork, captures the potential effects of the construction method. By using this variable, we may distinguish between situations where prefabricated girders can be put together using cranes and situations where the span structure must be cast in formwork. The variable foundation type, which is assigned the value 1 for all bridges with dominantly deep foundations and the value 0 for all bridges where the foundations are mostly shallow, captures the influence of the foundation type.
Knowing the statistical indicators of the data used (Table 1) as well as their mutual relations (Figure 7) is crucial when using various machine learning techniques to define prediction models. This is because the created models can only generalize reliably within the range of the data on which they were trained.

Criteria for Assessing Model Accuracy

Ten-fold cross-validation was used to assess the model accuracy. In the scenario depicted in Figure 8, the data set is randomly divided into ten disjoint subsets of the same size (10 folds). The model is trained on nine folds and then evaluated on the tenth. This procedure is repeated ten times, and the mean values are taken as the definitive score according to the appropriate accuracy criterion. The model's accuracy was assessed in relation to four established accuracy criteria: the RMSE and MAE, which serve as the two absolute criteria expressed in the same units as the output variable, and the MAPE and Pearson's linear correlation coefficient (R) as the two relative criteria.
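The evaluation protocol can be summarized with a short Python sketch (illustrative only; the model, data, and metric names are placeholders, and X and y are assumed to be numpy arrays):

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(model, X, y, metric, n_folds=10):
    """Ten-fold cross-validation: train on nine folds, evaluate on the held-out
    fold, and average the ten scores for the given accuracy criterion."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_folds, shuffle=True).split(X):
        model.fit(X[train_idx], y[train_idx])
        scores.append(metric(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(scores))
```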
While the other criteria were employed for model evaluation, the mean squared error (MSE) represented by Equation (21) is the criterion used in this research for model calibration.
$$\mathrm{MSE} = \frac{1}{N}\sum_{k=1}^{N}\left(d_k - o_k\right)^2$$
where $d_k$ denotes the target values, $o_k$ represents the values predicted by the model, and $N$ is the number of instances.
A reasonable criterion for avoiding substantial differences between the target and modeled values is the RMSE, which is defined by Formula (22) as an absolute measure of the model's accuracy:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(d_k - o_k\right)^2}$$
A measure of the model’s absolute accuracy, the MAE, is defined by the equation (23) and is used to depict the model’s MAE. While RMSE offers greater relevance to smaller mistakes, MAE gives equal weight to errors of all target values:
$$\mathrm{MAE} = \frac{1}{N}\sum_{k=1}^{N}\left|d_k - o_k\right|$$
Pearson's linear correlation coefficient (R), defined by Equation (24), is a relative criterion for evaluating the model's accuracy:
$$R = \frac{\sum_{k=1}^{N}\left(d_k - \bar{d}\right)\left(o_k - \bar{o}\right)}{\sqrt{\sum_{k=1}^{N}\left(d_k - \bar{d}\right)^2 \sum_{k=1}^{N}\left(o_k - \bar{o}\right)^2}}$$
where $\bar{o}$ denotes the mean value of the model predictions, and $\bar{d}$ represents the mean target value.
MAPE, defined by Equation (25), is a relative measure of prediction accuracy and cannot be used if any target value is equal to zero:
$$\mathrm{MAPE} = \frac{100}{N}\sum_{k=1}^{N}\left|\frac{d_k - o_k}{d_k}\right|$$
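The four evaluation criteria, Equations (22)-(25), translate directly into Python (an illustrative sketch; the array names are arbitrary):

```python
import numpy as np

def evaluate(d, o):
    """RMSE, MAE, Pearson's R, and MAPE; d holds the target values,
    o the model predictions."""
    d, o = np.asarray(d, float), np.asarray(o, float)
    rmse = np.sqrt(np.mean((d - o) ** 2))
    mae = np.mean(np.abs(d - o))
    r = np.corrcoef(d, o)[0, 1]
    mape = 100.0 * np.mean(np.abs((d - o) / d))  # undefined if any target value is zero
    return rmse, mae, r, mape
```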

4. Results

The MLP neural network type was used to form a neural network model for forecasting the concrete consumption of bridges. A three-layer architecture with an input layer, a hidden layer, and an output layer of neurons was analyzed. Neurons with a nonlinear activation function were used in the hidden layer and a neuron with a linear activation function in the output layer; the application of the tansigmoid activation function in the hidden layer was analyzed. An architecture with six neurons in the input layer and one neuron in the output layer was considered because the prediction of concrete consumption is a regression problem and therefore requires a single output neuron. All model variables were transformed before training. Variable scaling is applied to ensure the equal treatment of all variables, because the absolute size of a variable does not have to correspond to its actual influence.
In the paper, the variables were transformed into the interval [-1, 1], where the corresponding minimum value was mapped to -1 and the corresponding maximum value to 1, with linear scaling for the values in between. The recommendations mentioned in Section 2.1 were used for the number of neurons in the hidden layer $N_H$: architectures with up to 12 neurons in the hidden layer were examined, this upper limit being obtained as the smaller of the values given by Equations (2) and (3).
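The [-1, 1] transformation described above corresponds to the usual min-max scaling (a minimal sketch; the training-set minimum and maximum are reused when new samples are transformed, and the inverse mapping is applied to the network output):

```python
import numpy as np

def scale_to_range(x, x_min=None, x_max=None):
    """Linear scaling to [-1, 1]: the minimum maps to -1, the maximum to 1,
    and intermediate values are scaled linearly."""
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0) if x_min is None else x_min
    x_max = x.max(axis=0) if x_max is None else x_max
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0, x_min, x_max
```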
The Levenberg-Marquardt (LM) algorithm [20] was used to train the model, and three stopping criteria were used: the maximum number of epochs, limited to 1000; the minimum gradient magnitude, limited to $10^{-5}$; and the network performance goal, evaluated using the MSE value and set to 0.
The training is interrupted at the moment when any of the aforementioned criteria is met. All architectures were evaluated by using four defined model-accuracy criteria, namely RMSE, MAE, R, and MAPE.
According to the three criteria MAE, R, and MAPE, the architecture of the MLP-ANN model with three neurons in the hidden layer has optimal values (Figure 9). On the other hand, the model with nine neurons is the optimal model based on the value of the RMSE criterion.
The optimization of parameter values was carried out for the following adjustable parameters when using the random forest and TreeBagger models:
  • Number of variables that are taken into account in the node when branching trees (Num of variables);
  • The minimum number of samples assigned to a terminal leaf when creating the tree model (Min leaf size).
Regarding the number of variables, models with the set of variables for splitting containing one variable or the full set of six variables were taken into consideration. There are recommendations to use one-third of all model variables for the random forest model, which is also the default value in the Matlab program. In this instance, several RF model variations were taken into account. These variations consisted of 500 base models, and during the analysis, subsets of variables with a maximum of five variables were taken into consideration. TreeBagger models were composed of 500 base models, and all model variables were included in the set of variables that might be used for splitting. It is not necessary to scale the model variables when using these models. Figure 10 illustrates how the model's considered hyperparameters affect the model's accuracy in relation to the four specified criteria.
The following variables were taken into account when using the boosted trees model:
  • The number of trees that make up the ensemble;
  • Maximum number of splits;
  • Learning parameter.
The ensemble for BT can only have a maximum of 100 base models due to the sequential nature of model training. The learning parameters under test have values of 0, 0.01, 0.10, 0.25, 0.75, and 1.00. The model was trained using the MSE criterion. The optimal model was determined to have the following hyperparameters (Figure 11): the number of trees, 7; the maximum number of splits, 8; and the learning parameter, 0.50. The optimal model was then assessed using the RMSE, MAE, R, and MAPE criteria.
Regarding the application of the SVM method, there is no universal recommendation as to which kernel function gives optimal results. This study investigated the application of the following kernel functions: linear, RBF, and sigmoid. When using the SVM approach, scaling the model variables is advised; in this case, the model variables are converted into the range (0, 1). After scaling, the model was trained and tested. Using the grid-search procedure with the MSE as the optimality criterion, the optimal kernel function parameters were obtained (C = 1.3765 and ε = 0.0285 for the linear kernel; C = 76.1093, ε = 0.00058067, and γ = 8.7242 for the RBF kernel; C = 649.6989, ε = 0.0299, and γ = 0.0020 for the sigmoid kernel). Following the selection of the optimal model, the accuracy was evaluated using the RMSE, MAE, R, and MAPE criteria.
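The grid-search procedure can be sketched with scikit-learn (illustrative only; the parameter grids below are placeholders rather than the grids actually searched, and SVR here stands in for the LIBSVM implementation used in the study):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {
    "C":       np.logspace(-2, 3, 12),   # placeholder grids
    "epsilon": np.logspace(-4, 0, 9),
    "gamma":   np.logspace(-3, 2, 11),
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_squared_error", cv=10)
# search.fit(X_scaled, y); print(search.best_params_)
```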
The GPR model was applied, and various covariance functions were taken into account. The covariance function translates the similarity between input vectors into the similarity between the corresponding function (output) values, so it is important to investigate a wider variety of covariance functions. The use of ten distinct covariance functions was investigated in this study.
Exponential, squared exponential, Matern 3/2, Matern 5/2, and rational quadratic covariance functions were analyzed (Table 2), as well as their ARD variants with distinct parameters for each coordinate axis (Table 3). The analyses were performed on models with constant basis functions. The so-called z-score was used to scale the variable values prior to modeling. The parameter values were derived by maximizing the equation for the log marginal probability.
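An ARD-type GPR model with a separate length scale per input variable can be sketched in Python with scikit-learn (an approximate analogue of the Matlab models used here: the anisotropic Matern kernel with nu = 0.5 reproduces the ARD exponential form, and the hyperparameters are again obtained by maximizing the log marginal likelihood):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

n_features = 6  # x1..x6
kernel = (ConstantKernel(1.0)
          * Matern(length_scale=np.ones(n_features), nu=0.5)  # nu = 0.5: exponential form, one length scale per variable (ARD)
          + WhiteKernel(noise_level=1.0))                      # noise variance eta^2
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
# gpr.fit(X_scaled, y)
# print(gpr.kernel_)  # fitted per-variable length scales indicate relative importance
```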
The significance of each variable in the model can be established (Figure 12) using the model with the highest accuracy (Table 4) and the values of the length scale parameters. In this particular instance, the model with the ARD exponential covariance function will be the subject of this analysis.
By comparing the values of the length scale parameters to the parameters that describe the other variables, it can be seen that the variable x 4 , which represents the construction method in the model with the ARD exponential function, has the highest value of the length scale parameter (the importance of the variable is inversely proportional to the value of the length scale parameter) (Figure 12). The length of the bridge—variable x 2 —is the most important variable for the model; however, the other variables are also important for model accuracy.
In Table 5, a binary value (0 or 1) designates whether a certain variable is incorporated into the model or not.
By examining the preceding table (Table 5), it is clear that model 1 produced better results for the RMSE accuracy criteria, but these differences are insignificant for the MAE, R, and MAPE criteria.
Figure 13 compares the modeled values of the total amount of concrete used, expressed in m³ per bridge, to the target values, which are stated in the same unit.
The so-called regression diagram between the target and modeled values can be seen in Figure 14. Regression plots show how accurate a model is; in an ideal regression plot, the points lie on the 45-degree line through the origin, i.e., the predicted values coincide with the target values. Thus, it can be seen from Figure 14 that the model has a good level of accuracy.

5. Conclusions

This research confirmed the primary hypothesis that, in the early stages of project development, it is possible to quickly and sufficiently accurately estimate the concrete consumption of RC and PC road bridges using methods based on artificial intelligence.
The developed models could be applied by investors, designers, consultants, contractors, and other interested parties. In order to make a decision about starting the construction of a reinforced concrete or prestressed concrete road bridge, it is necessary, in the preparatory phase of the investment project, to carry out an early techno-economic analysis, which necessarily includes the planned resource consumption.
Therefore, as part of the research, a large number of predictive models were developed, analyzed, and verified for the early assessment of concrete consumption. At the same time, the estimated amount of concrete can be used to estimate the cost of building the bridge.
A database of the concrete consumption and design characteristics of 181 reinforced concrete road bridges on the eastern and southern branches of Corridor X in Serbia, with a value of more than 100 million euros, was used to create the assessment model. Machine learning models based on artificial neural networks, regression trees, the SVM method, and Gaussian processes were considered for the study.
All models were trained and tested under equal conditions using cross-validation, and measures of model accuracy included RMSE, MAE, R, and MAPE.
The results revealed that the GPR model's accuracy is the highest when the ARD exponential covariance function is used. In terms of the accuracy criteria, the following results were obtained for the optimal model: RMSE = 325.7768 m³; MAE = 159.2796 m³; R = 0.9899; MAPE = 11.6461%.
A suitable solution was found, as evidenced by the values of the attained correlation coefficient, which was 98.99%, and the mean absolute percentage error, which was 11.64%.
Because ARD covariance functions provide models with the highest accuracy, it is advised to build a model for estimating concrete consumption using the ARD exponential function of covariance. The suggested model makes it possible to rank the impact of different variables on the model’s accuracy. It is demonstrated that a model with approximately the same accuracy may be obtained by removing variables of minimal importance and utilizing a smaller collection of variables. Additionally, the model’s complexity is decreased, and the model-training process is expedited.
Based on everything presented, the proposed method for early cost estimation can be successfully used in practice as a support in the management of investment projects for the construction of reinforced and prestressed concrete road bridges.

6. Future Developments

The directions of further research must be aimed at expanding the existing base with subsequently obtained data related to the construction of other reinforced concrete and prestressed bridges.
The problem in this research was set up as the prediction of a continuous variable, that is, as a regression problem. In the future, it could also be set up as a classification problem (instead of a regression problem) in such a way that the concrete consumption of bridges would be assigned to a certain number of classes, and ML algorithms for classification problems would then be applied. After that, the accuracy of those models could be compared with the regression models presented in the paper.
The model implementation could be investigated for other markets for which there is a possibility of forming a suitable database. Furthermore, the applied methodology in the research, with certain adjustments, could also be applied to the cost estimation of other phases in the life cycle of the project and also to other types of construction facilities.
With a more significant expansion of the database for model building, the application of unsupervised learning methods can be investigated, where the data would be grouped into a certain number of clusters, and then, individual regression models instead of one integral model would be formed for each of the clusters.
The application of hybrid models of ensembles that would be simultaneously composed of models of ANNs, RTs, and models based on SVM could also be explored in future research.

Author Contributions

Conceptualization, M.K. and N.I.; methodology, M.K.; software, M.K.; validation, M.K., N.I., D.S., L.M.M., B.B., L.M., and N.G.; formal analysis, M.K.; investigation, M.K., N.I., D.S., L.M.M., B.B., L.M., and N.G.; resources, M.K., N.I., D.S., L.M.M., B.B., L.M., and N.G.; data curation, M.K., N.I., D.S., L.M.M., B.B., L.M., and N.G.; writing—original draft preparation, M.K.; writing—review and editing, M.K.; funding acquisition, M.K., N.I., D.S., L.M.M., B.B., L.M., and N.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ANN: Artificial neural network
RT: Regression tree
SVM: Support vector machine
GPR: Gaussian process regression
RMSE: Root mean square error
MAE: Mean absolute error
R: Pearson's linear correlation coefficient
MAPE: Mean absolute percentage error
ARD: Automatic relevance determination
R2: Coefficient of determination
CBR: Case-based reasoning
MAER: Mean absolute error rate
ML: Machine learning
RC: Reinforced concrete
PC: Prestressed concrete
BT: Boosted trees
SVR: Support vector machines for regression
SMO: Sequential minimal optimization
GP: Gaussian process

Appendix A

Table A1. List of notations.
Notation | Definition
$\varphi(x)$ | Sigmoid activation function
$N_H$ | Maximum number of neurons in the hidden layer
$N_i$ | Number of inputs in the neural network
$N_s$ | Number of instances
$x_i$ | ith value of variable x
$y_i$ | ith value of variable y
$\hat{c}_i$ | Mean value of the output variable in region $R_i$
$w$ | Weight vector
$b$ | Bias
$f(x, w)$ | Approximation function
$\varphi_i(x)$ | Attribute
$\varepsilon$ | Radius of the ε cylinder (tube)
$R_{emp}^{\varepsilon}$ | Empirical risk
$C$ | Penalty parameter
$\xi_i$ and $\xi_i^*$ | Slack variables
$\Phi$ | Mapping function
$K$ | Kernel function
$m(X)$ | Mean vector for random variable $f(X)$
$\alpha_i$, $\alpha_i^*$ | Lagrange multipliers
$\epsilon^*$ | Noise term
$\eta^2$ | Noise variance
$k(x, x')$ | Covariance function
$p(y(X))$ | Marginal probability
$\sigma^2$ | Signal variance
$l$ | Length scale parameter
$I_n$ | Identity matrix of size $n \times n$
$\mu^*$ | Posterior mean value
$\sigma_*^2$ | Posterior variance

References

  1. Pržulj, M. Mostovi; Udruženje “Izgradnja”: Beograd, Srbija, 2014. [Google Scholar]
  2. Kovačević, M.; Ivanišević, N.; Petronijević, P.; Despotović, V. Construction cost estimation of reinforced and prestressed concrete bridges using machine learning. Građevinar 2021, 73, 1–13. [Google Scholar] [CrossRef]
  3. Kovačević, M.M. Model za Prognozu I Procenu Troškova Izgradnje Armirano Betonskih Drumskih Mostova. Ph.D. Thesis, University of Belgrade, Faculty of Civil Engineering, Belgrade, Serbia, 2018. [Google Scholar] [CrossRef]
  4. Marcous, G.; Bakhoum, M.M.; Taha, M.A.; El-Said, M. Preliminary Quantity Estimate of Highway Bridges Using Neural Networks. In Proceedings of the Sixth International Conference on the Application of Artificial Intelligence to Civil and Structural Engineering, Stirling, Scotland, 19–21 September 2001. [Google Scholar]
  5. Flyvbjerg, B.; Skamris, H.; Buhl, S. Underestimating Costs in Public Works Projects: Error or Lie? J. Am. Plann. Assoc. 2002, 68, 279–295. [Google Scholar] [CrossRef] [Green Version]
  6. Mostafa, E.M. Cost Analysis for Bridge and Culvert. In Proceedings of the Seventh International Water Technology Conference IWTC7, Cairo, Egypt, 1–3 April 2003. [Google Scholar]
  7. Fragkakis, N.; Lambropoulos, S.; Pantouvakis, J.P. A cost estimate method for bridge superstructures using regression analysis and bootstrap. Organ. Technol. Manag. Constr. 2010, 2, 182–190. [Google Scholar]
  8. Fragkakis, N.; Lambropoulos, S.; Pantouvakis, J.P. A computer-aided conceptual cost estimating system for prestressed concrete road bridges. Int. J. Inf. Technol. Proj. Manag. 2014, 5, 1–13. [Google Scholar] [CrossRef] [Green Version]
  9. Kim, K.Y.; Kim, K. Preliminary Cost Estimation Model Using Case-Based Reasoning and Genetic Algorithms. J. Comput. Civ. Eng. 2010, 24, 499–505. [Google Scholar] [CrossRef]
  10. Arafa, M.; Alqedra, M. Early Stage Cost Estimation of Buildings Construction Projects using Artificial Neural Networks. J. Artif. Intell. 2011, 4, 63–75. [Google Scholar] [CrossRef] [Green Version]
  11. Fragkakis, N.; Lambropoulos, S.; Tsiambaos, G. Parametric Model for Conceptual Cost Estimation of Concrete Bridge Foundations. J. Infrastruct. Syst. 2011, 17, 66–74. [Google Scholar] [CrossRef]
  12. Mohamid, I. Conceptual Cost Estimate of Road Construction Projects in Saudi Arabia. Jordan J. Civ. Eng. 2013, 7, 285–294. [Google Scholar]
  13. Hollar, D.A.; Rasdorf, W.; Liu, M.; Hummer, J.E.; Arocho, I.; Hsiang, S.M. Preliminary engineering cost estimation model for bridge projects. J. Constr. Eng. Manag. 2013, 139, 1259–1267. [Google Scholar] [CrossRef]
  14. Chou, J.S.; Lin, C.W.; Pham, A.D.; Shao, J.D. Optimized artificial intelligence models for predicting project award price. Autom. Constr. 2015, 54, 106–115. [Google Scholar] [CrossRef]
  15. Fragkakis, N.; Marinelli, M.; Lambropoulos, S. Preliminary Cost Estimate Model for Culverts. Procedia Eng. 2015, 123, 153–161. [Google Scholar] [CrossRef]
  16. Marinelli, M.; Dimitriou, L.; Fragkakis, N.; Lambropoulos, S. Non-Parametric Bill of Quantities Estimation of Concrete Road Bridges Superstructure: An Artificial Neural Networks Approach. In Proceedings of the 31st Annual ARCOM Conference, Lincoln, UK, 7–9 September 2015. [Google Scholar]
  17. Antoniou, F.; Konstantinitis, D.; Aretoulis, G. Cost Analysis and Material Consumption of Highway Bridge Underpasses. In Proceedings of the Eighth International Conference on Construction in the 21st Century (CITC-8), Changing the Field: Recent Developments for the Future of Engineering and Construction, Thessaloniki, Greece, 27–30 May 2015. [Google Scholar]
  18. Peško, I.; Mučenski, V.; Šešlija, M.; Radović, N.; Vujkov, A.; Bibić, D.; Krklješ, M. Estimation of costs and durations of construction of urban roads using ANN and SVM. J. Complex. 2017, 2017, 2450370. [Google Scholar] [CrossRef] [Green Version]
  19. Dimitriou, L.; Marinelli, M. Early bill-of-quantities estimation of concrete road bridges—An artificial intelligence-based application. Public Work. Manag. Policy. 2018, 23, 127–149. [Google Scholar] [CrossRef] [Green Version]
  20. Kovačević, M.; Ivanišević, N.; Dašić, T.; Marković, L. Application of artificial neural networks for hydrological modelling in Karst. Gradjevinar 2018, 70, 194327. [Google Scholar] [CrossRef] [Green Version]
  21. Hanák, T.; Vítková, E. Causes and effects of contract management problems: Case study of road construction. Front. Built Environ. 2022, 8, 1009944. [Google Scholar] [CrossRef]
  22. Tijanić, K.; Car-Pušić, D.; Šperac, M. Cost estimation in road construction using artificial neural network. Neural. Comput. Appl. 2020, 32, 9343–9355. [Google Scholar] [CrossRef]
  23. Kovačević, M.; Lozančić, S.; Nyarko, E.K.; Hadzima-Nyarko, M. Modeling of Compressive Strength of Self-Compacting Rubberized Concrete Using Machine Learning. Materials 2021, 14, 4346. [Google Scholar] [CrossRef]
  24. Kovačević, M.; Lozančić, S.; Nyarko, E.K.; Hadzima-Nyarko, M. Application of Artificial Intelligence Methods for Predicting the Compressive Strength of Self-Compacting Concrete with Class F Fly Ash. Materials 2022, 15, 4191. [Google Scholar] [CrossRef]
  25. Kingston, G.B. Bayesian Artificial Neural Networks in Water Resources Engineering. Ph.D. Thesis, School of Civil and Environmental Engineering, Faculty of Engineering, Computer and Mathematical Science, University of Adelaide, Adelaide, Australia, 2006. [Google Scholar]
  26. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  27. Breiman, L.; Friedman, H.; Olsen, R.; Stone, C.J. Classification and Regression Trees; Chapman and Hall/CRC: Wadsworth, OH, USA, 1984. [Google Scholar]
  28. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  29. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  30. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
  31. Elith, J.; Leathwick, J.R.; Hastie, T. A working guide to boosted regression trees. J. Anim. Ecol. 2008, 77, 802–813. [Google Scholar] [CrossRef] [PubMed]
  32. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  33. Kecman, V. Machine Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  34. Vapnik, V. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995. [Google Scholar]
  35. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  36. Schölkop, B.; Smola, A.; Williamson, R.; Bartlett, P.L. New support vector algorithms. Neural Comput. 2000, 12, 1207–1245. [Google Scholar] [CrossRef]
  37. Schölkopf, B.; Platt, J.; Shawe-Taylor, J.; Smola, A.J.; Williamson, R.C. Estimating the support of a high-dimensional distribution. Neural Comput. 2001, 13, 1443–1471. [Google Scholar] [CrossRef]
  38. LIBSVM—A Library for Support Vector Machines. Available online: https://www.csie.ntu.edu.tw/~cjlin/libsvm/ (accessed on 10 February 2022).
  39. Rasmussen, C.E.; Williams, C.K. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
Figure 1. Schematic representation of a multi-layer perceptron neural network [20].
Figure 1. Schematic representation of a multi-layer perceptron neural network [20].
Axioms 12 00019 g001
Figure 2. Schematic representation of the forecast model based on a regression tree [23].
Figure 3. Creation of an ensemble composed of individual regression trees [2].
Figure 4. Creation of boosted trees ensemble [2].
Figure 5. The support vector machine method with the so-called Vapnik’s error function [2].
Figure 6. The eastern and southern branches of Corridor X [2].
Figure 7. Correlation matrix with histograms of variables.
Figure 8. Systemic approach to model analysis.
Figure 9. Accuracy examination of MLP neural network models in accordance with established accuracy criteria: (a) RMSE and MAE; and (b) R and MAPE.
Figure 10. Accuracy analysis of ensembles based on regression trees: (a) RMSE, (b) MAE, (c) R, and (d) MAPE.
Figure 11. The influence of the parameters of the boosted trees model on the value of the MSE criterion.
Figure 12. Values of the distance scale parameters in the model with the ARD exponential covariance function.
Figure 13. Comparison of the target values with the model values.
Figure 14. The regression plot of the predicted and target values of the total amount of concrete.
Table 1. Mean, minimum, and maximum values of variables in the model.

Variable | Average Value | Minimum Value | Maximum Value | Standard Deviation
Average bridge span (m) | 21.25 | 6.52 | 49.00 | 9.7751
Total bridge span length (m) | 84.24 | 6.52 | 628.74 | 115.0559
Bridge width (m) | 13.43 | 7.90 | 19.91 | 2.0129
Average pier height (m) | 9.60 | 3.28 | 35.01 | 5.1100
Total volume of concrete (m3) | 1700.19 | 148.00 | 13,444.00 | 2302.4194
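If the project database is available as a flat file, summary statistics of the kind reported in Table 1 can be obtained directly. The sketch below is illustrative only; the file name bridges.csv and the column names are assumptions, not artifacts of the study.

```python
import pandas as pd

# Hypothetical flat file with one row per bridge; file and column names are assumed.
df = pd.read_csv("bridges.csv")
cols = ["avg_span_m", "total_span_m", "width_m", "avg_pier_height_m", "concrete_m3"]

# Mean, minimum, maximum, and standard deviation per variable, as in Table 1
summary = df[cols].agg(["mean", "min", "max", "std"]).T
print(summary.round(4))
```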
Table 2. Parameters of GPR model covariance functions.

GP Model Covariance Function | Covariance Function Parameters
Exponential: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \exp\!\left(-\frac{r}{\sigma_l}\right)$ | $\sigma_l = 239.8767$, $\sigma_f = 8109.7824$
Squared exponential: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \exp\!\left(-\frac{1}{2}\,\frac{(x_i - x_j)^T (x_i - x_j)}{\sigma_l^2}\right)$ | $\sigma_l = 1.3436$, $\sigma_f = 2415.2362$
Matern 3/2: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \left(1 + \frac{\sqrt{3}\,r}{\sigma_l}\right)\exp\!\left(-\frac{\sqrt{3}\,r}{\sigma_l}\right)$ | $\sigma_l = 3.9954$, $\sigma_f = 3206.8049$
Matern 5/2: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \left(1 + \frac{\sqrt{5}\,r}{\sigma_l} + \frac{5 r^2}{3 \sigma_l^2}\right)\exp\!\left(-\frac{\sqrt{5}\,r}{\sigma_l}\right)$ | $\sigma_l = 2.8760$, $\sigma_f = 2638.9008$
Rational quadratic: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \left(1 + \frac{r^2}{2\alpha \sigma_l^2}\right)^{-\alpha}$ | $\sigma_l = 5.4745$, $\alpha = 0.0291$, $\sigma_f = 6829.7688$

where $r = \sqrt{(x_i - x_j)^T (x_i - x_j)}$.
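The fitted covariance functions in Table 2 can be evaluated numerically for any pair of bridges. The following minimal numpy sketch is not the authors' implementation; the input vectors are purely illustrative, and only the hyperparameter values are taken from the table.

```python
import numpy as np

def squared_exponential(xi, xj, sigma_l, sigma_f):
    # k = sigma_f^2 * exp(-0.5 * ||xi - xj||^2 / sigma_l^2)
    d2 = np.sum((xi - xj) ** 2)
    return sigma_f ** 2 * np.exp(-0.5 * d2 / sigma_l ** 2)

def matern_32(xi, xj, sigma_l, sigma_f):
    # k = sigma_f^2 * (1 + sqrt(3) r / sigma_l) * exp(-sqrt(3) r / sigma_l)
    s = np.sqrt(3.0) * np.linalg.norm(xi - xj) / sigma_l
    return sigma_f ** 2 * (1.0 + s) * np.exp(-s)

def rational_quadratic(xi, xj, sigma_l, sigma_f, alpha):
    # k = sigma_f^2 * (1 + r^2 / (2 * alpha * sigma_l^2))^(-alpha)
    d2 = np.sum((xi - xj) ** 2)
    return sigma_f ** 2 * (1.0 + d2 / (2.0 * alpha * sigma_l ** 2)) ** (-alpha)

# Purely illustrative (standardized) feature vectors for two bridges
xi = np.array([0.2, -0.5, 1.1, 0.3, -0.7, 0.0])
xj = np.array([0.1, 0.4, 0.9, -0.2, -0.5, 0.3])
print(squared_exponential(xi, xj, sigma_l=1.3436, sigma_f=2415.2362))
print(matern_32(xi, xj, sigma_l=3.9954, sigma_f=3206.8049))
print(rational_quadratic(xi, xj, sigma_l=5.4745, sigma_f=6829.7688, alpha=0.0291))
```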
Table 3. Parameters of GPR ARD model covariance functions.

ARD Exponential: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \exp(-r)$; $\sigma_f = 6848.2102$
($\sigma_1, \ldots, \sigma_6$) = (1304.2300, 47.6519, 1205.8801, 919,040.1184, 192.4708, 441.1901)

ARD Squared exponential: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \exp\!\left(-\frac{1}{2}\sum_{m=1}^{d}\frac{(x_{im} - x_{jm})^2}{\sigma_m^2}\right)$; $\sigma_f = 2955.5719$
($\sigma_1, \ldots, \sigma_6$) = (12.4524, 0.8378, 12.6619, 2.9804, 0.9687, 2.6773)

ARD Matern 3/2: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \left(1 + \sqrt{3}\,r\right)\exp\!\left(-\sqrt{3}\,r\right)$; $\sigma_f = 3727.8021$
($\sigma_1, \ldots, \sigma_6$) = (14.4937, 2.4607, 24.7951, 16.8131, 3.1127, 10.3845)

ARD Matern 5/2: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \left(1 + \sqrt{5}\,r + \frac{5 r^2}{3}\right)\exp\!\left(-\sqrt{5}\,r\right)$; $\sigma_f = 26.5499$
($\sigma_1, \ldots, \sigma_6$) = (12.4004, 1.5323, 17.0303, 6.4511, 1.6069, 5.5196)

ARD Rational quadratic: $k(x_i, x_j \mid \Theta) = \sigma_f^2 \left(1 + \frac{1}{2\alpha}\sum_{m=1}^{d}\frac{(x_{im} - x_{jm})^2}{\sigma_m^2}\right)^{-\alpha}$; $\alpha = 0.0448$, $\sigma_f = 6385.5798$
($\sigma_1, \ldots, \sigma_6$) = (14.4928, 2.6978, 27.3446, 21.0616, 3.2843, 10.7307)

where $r = \sqrt{\sum_{m=1}^{d} \frac{(x_{im} - x_{jm})^2}{\sigma_m^2}}$.
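A minimal numpy sketch of the ARD exponential covariance from Table 3 is given below (again, not the authors' code). One informal reading of ARD is that the larger the per-variable length scale σm, the less the model reacts to variable m.

```python
import numpy as np

def ard_exponential(xi, xj, length_scales, sigma_f):
    # Per-dimension weighted distance: each input m has its own length scale sigma_m
    r = np.sqrt(np.sum(((xi - xj) / length_scales) ** 2))
    return sigma_f ** 2 * np.exp(-r)

# Fitted length scales sigma_1..sigma_6 from Table 3 (ARD exponential model)
length_scales = np.array([1304.2300, 47.6519, 1205.8801, 919040.1184, 192.4708, 441.1901])
sigma_f = 6848.2102

# Informal relevance proxy: inverse length scale, normalized to the most relevant input
relative_relevance = (1.0 / length_scales) / (1.0 / length_scales).max()
print(np.round(relative_relevance, 4))   # x4 is effectively switched off
```

The very large value of σ4 is consistent with Table 5 below, where the model without x4 performs essentially as well as the full six-variable model.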
Table 4. Accuracy of analyzed machine learning models in accordance with the defined criteria.

Model | RMSE | MAE | R | MAPE/100
NN 6-3-1 | 531.9820 | 320.8458 | 0.9708 | 0.2744
NN 6-9-1 | 488.2587 | 327.1654 | 0.9740 | 0.3540
TreeBagger | 453.2286 | 264.0977 | 0.9704 | 0.1833
Random forest | 419.7320 | 254.8627 | 0.9714 | 0.1886
Boosted trees | 482.9367 | 273.7790 | 0.9568 | 0.2010
SVR-Lin. kernel | 519.1121 | 334.7710 | 0.9741 | 0.3538
SVR-RBF kernel | 410.1248 | 230.5253 | 0.9840 | 0.3052
SVR-Sig. kernel | 518.5660 | 336.8504 | 0.9742 | 0.3579
Exponential | 333.0896 | 164.2991 | 0.9895 | 0.1452
ARD-Exponential | 325.7768 | 159.2796 | 0.9899 | 0.1165
Squared exponential | 411.0802 | 199.8675 | 0.9480 | 0.2319
ARD-Sq. exponential | 348.2953 | 184.5609 | 0.9884 | 0.1574
Matern 3/2 | 352.1780 | 167.2727 | 0.9882 | 0.1621
ARD-Matern 3/2 | 339.9148 | 160.2837 | 0.9890 | 0.1320
Matern 5/2 | 367.9996 | 171.6996 | 0.9871 | 0.1784
ARD-Matern 5/2 | 346.0292 | 169.3506 | 0.9886 | 0.1420
Rational quadratic | 342.7649 | 160.8512 | 0.9889 | 0.1568
ARD-Rational quadratic | 339.5615 | 167.4138 | 0.9890 | 0.1411
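As a reading aid for Table 4, the following minimal numpy sketch (an assumed helper, not code from the study) computes the four accuracy criteria from vectors of predicted and target concrete volumes; the numbers used are illustrative only, not the study's test data.

```python
import numpy as np

def accuracy_criteria(y_true, y_pred):
    e = y_pred - y_true
    rmse = np.sqrt(np.mean(e ** 2))               # root mean square error
    mae = np.mean(np.abs(e))                      # mean absolute error
    r = np.corrcoef(y_true, y_pred)[0, 1]         # Pearson's linear correlation coefficient
    mape = np.mean(np.abs(e / y_true)) * 100.0    # mean absolute percentage error, in %
    return rmse, mae, r, mape

# Illustrative concrete volumes (m3); Table 4 reports MAPE divided by 100
y_true = np.array([1500.0, 220.0, 4800.0, 950.0])
y_pred = np.array([1420.0, 260.0, 5100.0, 900.0])
print(accuracy_criteria(y_true, y_pred))
```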
Table 5. Comparative analysis of concrete consumption models with different sets of input variables.

Model | x1 | x2 | x3 | x4 | x5 | x6 | RMSE | MAE | R | MAPE (%)
1 | 1 | 1 | 1 | 0 | 1 | 1 | 321.3378 | 159.2709 | 0.9902 | 11.8484
2 | 1 | 1 | 1 | 1 | 1 | 1 | 325.7768 | 159.2796 | 0.9899 | 11.6461
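The comparison in Table 5 can be reproduced in spirit with any GPR library that supports ARD kernels. The sketch below uses scikit-learn (the study did not necessarily use this library) and placeholder data X, y to fit an ARD exponential GPR once with all six inputs (Model 2) and once with x4 removed (Model 1).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

# Placeholder data standing in for the bridge feature matrix and concrete volumes
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 50.0, size=(40, 6))
y = 30.0 * X[:, 0] + 5.0 * X[:, 2] + rng.normal(0.0, 5.0, 40)

def fit_ard_gpr(X, y):
    # Matern with nu=0.5 is the exponential covariance; a vector length_scale gives ARD
    kernel = ConstantKernel(1.0) * Matern(length_scale=np.ones(X.shape[1]), nu=0.5)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

full = fit_ard_gpr(X, y)                               # all six inputs (Model 2)
reduced = fit_ard_gpr(np.delete(X, 3, axis=1), y)      # x4 removed (Model 1)
print(full.score(X, y), reduced.score(np.delete(X, 3, axis=1), y))
```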