Article

Stiffness Modulus and Marshall Parameters of Hot Mix Asphalts: Laboratory Data Modeling by Artificial Neural Networks Characterized by Cross-Validation

1
Polytechnic Department of Engineering and Architecture, University of Udine, Via del Cotonificio 114, 33100 Udine, Italy
2
Department of Civil Engineering, Aristotle University of Thessaloniki, University Campus, 54124 Thessaloniki, Greece
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(17), 3502; https://doi.org/10.3390/app9173502
Submission received: 31 July 2019 / Revised: 20 August 2019 / Accepted: 21 August 2019 / Published: 25 August 2019
(This article belongs to the Section Materials Science and Engineering)

Abstract:
The present paper discusses the analysis and modeling of laboratory data regarding the mechanical characterization of hot mix asphalt (HMA) mixtures for road pavements, by means of artificial neural networks (ANNs). The HMAs investigated were produced using aggregates and bitumens of different types. The stiffness modulus (ITSM), the Marshall stability (MS), and the Marshall quotient (MQ) were assumed as the mechanical parameters to analyze and predict. The ANN modeling approach was characterized by multiple layers, the k-fold cross validation (CV) method, and the positive linear transfer function. The effectiveness of such an approach was verified in terms of correlation coefficients (R) and mean square errors; in particular, R values ranged from 0.919 to 0.965 in the training phase and from 0.834 to 0.881 in the CV testing phase, depending on the predicted parameter.

1. Introduction

Road pavements are transport infrastructures built with different types of hot mix asphalt (HMA), namely mixtures of aggregates and bitumen, mixed at temperatures higher than 150 °C. In order to withstand traffic loads and environmental conditions, such infrastructures have to be properly designed in terms of their thickness and material properties. With respect to the composition and mechanical characteristics of HMAs, experimental methods are currently used to design and optimize such bituminous mixtures [1,2,3,4,5,6]. In particular, identifying the bitumen content and evaluating the mixtures' performance requires quite onerous laboratory tests; moreover, such experimental procedures require experienced and skilled technicians. Any modification of the HMA composition, in terms of type or quantity of bitumen or of aggregates, requires new laboratory tests. A numerical model of HMAs' mechanical behavior that could quickly provide a reliable prediction of the material's response would reduce the time and cost of the mix design itself.
Typically, a mathematical model of the material’s behavior is based on constitutive equations, expressed in analytical terms [7,8], eventually implemented in a computational system for finite element analysis [9,10].
One of the milestone studies on the modeling of the mechanical behavior of bituminous mixtures aimed to investigate the elastic–viscoelastic correspondence principle, which allowed the elastic continuum damage model to extend to a corresponding viscoelastic damage model, characterized by an evolution law able to properly describe the growth of damage within asphalt concrete, under uniaxial tension tests at different strain rates [11]. The following developments of such a constitutive approach have included cyclic loading conditions [12,13] and an analysis of the healing phenomenon [14]. On the basis of the elastic–viscoelastic correspondence principle and continuum damage mechanics, the constitutive framework has been further elaborated to obtain a fatigue prediction model, which has also been compared to a classic phenomenological fatigue model [15,16,17]. It has been demonstrated that the regression coefficients of the phenomenological model can be expressed as functions of viscoelastic and damage characteristics of asphalt concretes. More recently, the modeling approach has been modified to consider the viscoplastic rate-dependent hardening–softening response of bituminous mixtures under compression [18,19,20,21]. A viscoelastoplastic formulation of the constitutive approach has also been developed [22,23].
Linear viscoelastic as well as thermo-viscoplastic laws have also been studied by Di Benedetto et al. [24,25] to model the mechanical response of bituminous mixtures.
A fundamental constitutive approach proposed by Masad et al. [26] to model the permanent deformation of asphalt concretes at high temperatures was characterized by an anisotropic non-associated flow rule, on the basis of the Drucker–Prager yield surface. Laboratory data obtained by triaxial tests, carried out at different confining stress levels and strain rates, were used to validate the proposed model, with respect to three different asphalt concretes. Another relevant aspect of such an approach is the relationship that has been established between the model parameters and some significant aggregate properties (shape, surface energy, anisotropic distribution), particularly with regard to the optimization of the mixture.
The research group of Texas A&M University, after a series of studies [27,28,29,30], achieved significant advances, leading up to a temperature-dependent viscodamage model coupled to Schapery's nonlinear viscoelasticity and Perzyna's viscoplasticity [31]. Such a model makes it possible to distinguish between the compressive and tensile responses of asphalt concretes with respect to the damage development phenomenon. Advanced numerical algorithms were used to implement the proposed model in a general-purpose finite element code. An effective method to obtain the model parameters through creep tests carried out at various stress levels and temperatures was also developed.
Further improvements have been accomplished over the years with more refined models based on the mechanics of materials [32,33,34,35,36,37,38,39,40] to acquire a deeper understanding about the complex relationship between asphalt mixtures’ microstructure and their mechanical behavior. Particular emphasis has been given to the possibility of using such constitutive approaches to properly select asphalt concretes’ components and their optimal proportions to ensure the required design properties and performance of the mixture. Furthermore, advanced material characterization methods have also been used to support such modeling approaches, for instance, techniques based on X-ray computed tomography image analysis.
A coupled constitutive model, still based on Schapery nonlinear viscoelasticity and Perzyna-type viscoplasticity, but properly modified to take into account the hardening–relaxation phenomena, represents the current state of the art achieved by such a research group [41]. One of the key characteristics of the last approach is given by the possibility to calibrate and validate the model on the basis of conventional tests, namely dynamic modulus tests and repeated creep-recovery tests. The model has shown the capacity to properly describe the complex behavior of asphalt concretes under different loading conditions.
Other studies, based on continuum thermomechanics, have allowed constitutive equations suitable for modelling the mechanical response of asphalt concretes to be obtained. Krishnan et al. [42] developed a nonlinear rate type viscoelastic model, by means of thermodynamic principles. A coupled temperature-dependent viscoelastic, viscoplastic, and viscodamage model was obtained by Darabi et al. [43], on the basis of a thermodynamic framework.
Introducing the Helmholtz free energy and the concept of internal state variables, a viscoelastic–viscoplastic damage model, congruent from the thermodynamics point of view, was developed by Zhu and Sun [44]. A deep discussion on the capacity of the model to reliably predict the volumetric deformation was provided; the contraction and dilation response of the material was successfully associated to the viscoelastic component and viscoplastic damage component, respectively.
Within the ambit of thermomechanics, Chen et al. [45] recently derived a constitutive formulation in finite strains and characterized by a viscoelastic dissipation potential to take into account the deviatoric and volumetric response, a modified Perzyna viscoplastic law with a non-associated flow rule, to model the inelastic deformation by means of a Drucker–Prager type plastic dissipation potential and a damage model, specifically introduced to represent the different behavior of the material in tension and compression.
The environmental as well as the physical and chemical behaviors of the bituminous mixtures, related to ageing, healing, or moisture-induced damage, have also been introduced in some constitutive formulations [46,47,48,49].
Other mathematical and numerical methods are based on fractional models [50,51,52,53] or on the distinct element method [54,55,56,57].
The brief discussion just presented is only a partial overview of the many studies conducted by different research groups on the modeling of asphalt concretes' behavior, based on the mechanics of materials. Nevertheless, it can be stated that the main and most important characteristic of such methods is the mechanistic framework of the numerical or mathematical expressions, which allows a rational analysis and deep understanding of the bituminous mixtures' response under different testing conditions. For this reason, mechanistic constitutive methods also represent a very useful tool to support the mix design phase.
Alternative approaches rely on methods that are not physically based, for instance, statistical or artificial neural network (ANN) models.
Statistical regressions of experimental data sets could be an alternative approach to develop predictive equations of the analyzed HMA properties [58,59,60,61]. Nevertheless, it has also been reported that ANNs can produce more accurate predictions than multiple linear regression [62,63]. Indeed, in recent years, ANNs have emerged in scientific research as a powerful numerical technique for data analysis [64], even in the pavement-engineering field. Tapkın et al. [65] and Baldo et al. [66] verified the possibility of fitting Marshall test results of bituminous mixes to provide predictive equations that can be used to speed up the empirical Marshall mix design; nevertheless, such neural models are not mechanistically based and, for this reason, they are considered "black box" methods. However, the so-called "black box" principle is essentially a nonlinear fitting approach, as has previously been widely discussed [66]. Furthermore, even without a physical basis, such approaches can be useful to reduce the number of laboratory tests required by the Marshall mix design, which is still widely adopted in many asphalt laboratories [67,68,69,70,71].
ANNs have also been used in other studies to model different mechanical parameters of bituminous mixtures for road pavements; such studies have reported the use of a basic network architecture, a standard approach to data sampling, and the adoption of just one type of transfer function, namely the hyperbolic tangent [65,72,73,74,75,76,77]. This methodological approach has been followed primarily because the data sets were large enough to avoid the use of more complex neural network structures; however, it has been mentioned that more sophisticated ANNs could be useful to improve the prediction performance of neural models [64].
Therefore, the main objective of the current paper was to verify the effectiveness of an ANN modeling approach based on multiple-layer structures, a more reliable data sampling technique, and a more efficient transfer function, for the accurate and fast prediction of mechanical parameters of bituminous materials, in order to enhance the laboratory mix design phase. The ANN models developed in this study were trained and tested on laboratory data of HMAs for road pavements, characterized by different types of aggregate and bitumen.

2. Theory and Calculation

2.1. Artificial Neural Network Modeling

An artificial neural network is a computational approach that is increasingly used in the development of predictive models; its fundamental unit is the artificial neuron [78,79]. The function of an artificial neuron, like a biological one, is to process input signals and modulate its own response through an activation function, called the transfer function, which determines the interruption or transmission of the outgoing impulse. The computing power of a biological brain depends on the number of connections between neurons, and the same goes for ANNs. In a feedforward ANN, neurons are organized into layers and, since information flows only in one direction without any recurrent cycle, connections are established between neurons belonging to different layers so that none of the possible pathways touch the same neuron twice. Within an ANN, each neuron j computes a weighted sum n_j of the elements p_i of the input vector through the weights w_{j,i} associated with each connection. Then, it calculates an output value a_j by applying an assigned transfer function to n_j [80]:
a_j = f(n_j) = f\left( \sum_{i=1}^{R} w_{j,i}\, p_i + b \right),
where R is the number of elements in the input vector and b is the neuron bias. The transfer function may take different analytical expressions. In the current study, the positive linear (Equation (2)) and hyperbolic tangent (Equation (3)) functions were used:
f(n_j) = \begin{cases} n_j, & n_j > 0 \\ 0, & n_j \le 0 \end{cases}
f(n_j) = \frac{e^{n_j} - e^{-n_j}}{e^{n_j} + e^{-n_j}} = \frac{2}{1 + e^{-2 n_j}} - 1.
Connection weights and neuron biases are adjustable scalar parameters, and their values determine the network function. The central idea of neural networks is to adjust such parameters through an iterative process (called "training" or "learning") so that the ANN is able to perform a desired task [78,79,81]. Given an experimental data set, the learning of a feedforward network is based on a supervised approach: It consists of iteratively adjusting the weights' values to minimize the difference between experimental targets and calculated outputs by using an optimization algorithm [78,79,81]. The supervised learning is divided into two phases (Figure 1): After the initialization of the connection weights, the forward pass begins, which consists of processing the incoming information through the parallel computation of many neurons to obtain the net outputs. Then, the backward pass carries out a comparison between the experimental targets and calculated outputs so that a backpropagation algorithm can evaluate the weights' corrections. With such an approach, the artificial network learns to recognize the implicit relationship between input and target and can provide a solution to new input data. A detailed description of the structure of feedforward networks, the computational process performed by the neuron, and the supervised learning has previously been provided [66].
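As an illustration, the single-neuron computation of Equation (1) with the two transfer functions of Equations (2) and (3) can be sketched in a few lines of Python. This is only a hypothetical example, not the MATLAB® implementation used in the study; the names poslin and tansig are borrowed from MATLAB® terminology.

```python
import numpy as np

def poslin(n):
    # Positive linear (ReLU) transfer function, Equation (2):
    # returns n for n > 0, and 0 otherwise
    return np.maximum(n, 0.0)

def tansig(n):
    # Hyperbolic tangent transfer function, Equation (3)
    return np.tanh(n)

def neuron_output(w, p, b, transfer=poslin):
    # Equation (1): weighted sum of the inputs p through the weights w,
    # plus the bias b, passed through the chosen transfer function
    n = np.dot(w, p) + b
    return transfer(n)
```

For example, with weights (1, −2), inputs (3, 1), and bias 0.5, the net input is 1.5 and the positive linear output is also 1.5, whereas a negative net input would be clipped to zero.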

2.2. The K-Fold Cross Validation

In machine learning, the available data set is usually divided into subsets for training, validation, and testing. After the model's hyperparameters have been set up (for example, the number of hidden layers, the number of neurons in each layer, the learning rate, etc.), the training data set, generally made up of 60% to 80% of the data, is used to train the model itself. If the model performs satisfactorily on the training data, then it is run on the validation data set, which usually ranges from 5% to 20% of the data. The validation data set helps to provide an assessment of the model's capability. If the error (i.e., the mean squared difference between the predicted and experimental data, MSE) on the validation data set significantly increases (or decreases) compared to the training error, then an overfitting (or, respectively, an underfitting) condition has occurred. The test data set, typically consisting of 5% to 20% of the data, is employed to complete the analysis and assessment of the model's prediction capacity.
The procedure described above, considered as “conventional”, poses two issues: On the one hand, the subdivision of the data into subsets with different purposes involves the risk of leaving out some relevant trends from the training data set and, on the other hand, a model can lead to a lower prediction error on training data but at the same time to high test error due to data sample variability [82].
An alternative to the conventional approach is the so-called k-fold cross validation (CV). It is a statistical technique that allows a more accurate estimation of a model's performance and thus makes it possible to compare and select a model for a given predictive modeling problem [83,84,85,86]. Such a procedure is strongly recommended in the case of a relatively small data set [78,81]. It involves a random division of the data into k folds of equal size: One of these folds is used as a test set to evaluate the model's performance, and the other (k−1) folds are grouped to form the training data set (Figure 2). The training and test phases are performed k times, and each time a different fold is assumed as the test set while the remaining ones are used to train the model. Performing multiple rounds of CV with different subsets from the same data allows a significant reduction in sample variability: All data are included in a training set (k−1) times and in a test set once. Usually, the value of k is set to 5 or 10, as these values have been shown to result in more accurate estimations of a model's capacity [82,86]. By calculating the average of the testing errors, MSE_i, over these multiple rounds, an estimation of the model's predictive performance is obtained:
MSE_{cv} = \frac{1}{k} \sum_{i=1}^{k} MSE_i = \frac{1}{k} \sum_{i=1}^{k} \left( \frac{1}{M} \sum_{l=1}^{M} (t_l - y_l)^2 \right)_i,
where M is the number of samples in the i-th fold, t_l are the experimental targets, and y_l are the ANN-calculated outputs. Including a measure of the variance of the skill scores is also recommended [82,83], such as the standard deviation (i.e., the square root of the variance) of MSE_i:
\sigma_{cv} = \sqrt{ \frac{1}{k-1} \sum_{i=1}^{k} \left| MSE_i - MSE_{cv} \right|^2 }.
However, to find the hyperparameters of the model that best fit the experimental data and satisfactorily generalize the problem, it is necessary to define a model selection procedure. This consists of designing several neural networks by assigning numerical values, within a range, to one of the hyperparameters while fixing the remaining ones. The operation is repeated by varying at least one pair of hyperparameters. The resulting models have to adapt well to the data used for training. Subsequently, a k-fold CV is performed on each of the models thus defined. The network that shows the best predictive performance is initialized and then trained on the entire available data set; this is the final model that is used for predictions.
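The k-fold rounds and the CV statistics of Equations (4) and (5) can be sketched as follows. This is a minimal, hypothetical Python illustration (the study itself used MATLAB®), in which train_and_predict stands for any user-supplied routine that trains a model on the training fold and returns predictions for the test fold.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    # Randomly partition the sample indices into k non-overlapping
    # folds of (nearly) equal size
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(train_and_predict, X, t, k=5, seed=0):
    # Each round uses one fold as the test set and the remaining (k-1)
    # folds as the training set; returns MSE_cv (Equation (4)) and
    # sigma_cv (Equation (5)) computed over the k test errors.
    folds = kfold_indices(len(X), k, seed)
    mse = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        y = train_and_predict(X[train], t[train], X[test])
        mse.append(np.mean((t[test] - y) ** 2))
    mse = np.asarray(mse)
    mse_cv = mse.mean()
    sigma_cv = np.sqrt(np.sum(np.abs(mse - mse_cv) ** 2) / (k - 1))
    return mse_cv, sigma_cv
```

Because the folds are disjoint, every sample appears in a test set exactly once, which is what distinguishes this scheme from repeated random splits.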

2.3. ANN Structure

In the current study, ANN models of the selected mechanical parameters of the investigated bituminous mixtures were developed using the software MATLAB® (R2014b, The MathWorks Inc., academic use). The designed ANNs are of the feedforward type and are characterized by multiple layers, which make use of either the positive linear or the hyperbolic tangent transfer function. The structure of all the ANNs is completed by an output layer adopting a linear transfer function. The positive linear transfer function (also known as ReLU, the rectified linear unit [64,87]) has been empirically shown to lead to better performance than the hyperbolic tangent, which is prone to rapid saturation problems [64,87]. Each hidden layer has the same number of neurons. The backpropagation algorithm used to train the networks and improve their generalization is the Bayesian regularization, which updates the weight values according to the Levenberg–Marquardt optimization [88].
The networks were set up with random initial weights and biases, whereas the learning rate (i.e., the parameter that controls the changes of weights and biases during learning) was 0.001, the number of epochs (i.e., the maximum number of presentations of the set of training vectors to a network, with the corresponding updates of weights and errors) was 3000, the error goal was 0.01, and the minimum gradient (i.e., the magnitude of the gradient of performance used to stop the training) was 0.0000001. In order to train the networks more efficiently, a pre-processing function was assigned to normalize the data set, so that the input variables and the target parameter had a mean equal to zero and a standard deviation equal to one. The same function, used as a post-processing function, also reconverts the normalized outputs into the targets' original units. The following equation illustrates the normalization performed on the training and test data sets:
x_n = \frac{x - x_{mean}}{x_{std}},
where x is the i-th component of the input vector or one of the three mechanical parameters considered in the current study; x_{mean} and x_{std} are, respectively, the average and the standard deviation of the parameter x; and, finally, x_n is the normalized input or target variable (Table S1). Each new data point to be used by the network has to be pre-processed by means of Equation (6).
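A minimal Python sketch of this pre- and post-processing step is given below; it is a hypothetical illustration of Equation (6), not the MATLAB® routine actually used.

```python
import numpy as np

def normalize(x, x_mean, x_std):
    # Equation (6): zero-mean, unit-standard-deviation scaling
    return (x - x_mean) / x_std

def denormalize(x_n, x_mean, x_std):
    # Post-processing: reconvert normalized outputs
    # into the targets' original units
    return x_n * x_std + x_mean
```

Note that the same mean and standard deviation computed on the training data must be reused for any new input, as stated above for Equation (6).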
The number of hidden layers and the number of neurons in each hidden layer were selected as hyperparameters to be optimized in the model selection procedure to search for the best predictive capabilities on the basis of the selected machine learning method (i.e., the Bayesian regularization).

2.3.1. ANN-Model’s Equation

Referring to the computational process performed by neurons (Section 2.1) and to the architecture of multiple-layer ANNs, the vector equation of a one-hidden layer model (Equation (7)), two-hidden layer model (Equation (8)), and three-hidden layer model (Equation (9)) are reported in the following:
y = f^2\left( LW^{2,1} \cdot f^1\left( IW^{1,1} \cdot p + b^1 \right) + b^2 \right),

y = f^3\left( LW^{3,2} \cdot f^2\left( LW^{2,1} \cdot f^1\left( IW^{1,1} \cdot p + b^1 \right) + b^2 \right) + b^3 \right),

y = f^4\left( LW^{4,3} \cdot f^3\left( LW^{3,2} \cdot f^2\left( LW^{2,1} \cdot f^1\left( IW^{1,1} \cdot p + b^1 \right) + b^2 \right) + b^3 \right) + b^4 \right),
where p is the input vector of length R and y is the scalar network output associated with the considered mechanical parameter. All ANN models were elaborated on the basis of four numerical input variables, namely the bitumen content (% by weight of mix), the filler-to-bitumen ratio (%), the air voids (%), and the voids in the mineral aggregates (%), and two categorical input variables, namely the bitumen type and the aggregate type. Such input data have been considered effective in properly representing the composition of HMA mixes, which were designed and produced with different aggregate (limestone or diabase) and bitumen (standard 50/70 or modified 25-55/75) types. Predictive models characterized by such input data allow modifications of the mixture composition to be numerically simulated, providing an estimation of the mechanical parameter under analysis without the need to perform further experimental tests, thus representing an extremely useful design tool during the optimization of the material.
The meaning of the parameters of Equation (8) is explained in the following, while a graphical representation is reported in Figure 3. Vector and scalar variables are indicated in bold and italic type, respectively. Each layer is characterized by a connection weight matrix W, adds a bias vector b, and returns an output vector a. To distinguish one layer from another, the number of the layer is appended to the variable of interest; furthermore, the weight matrix connected to the inputs is distinguished from those connected between layers by prefixing the letters I and L, respectively, to W. A pair of indexes is also assigned to each weight matrix to identify the destination (first index) and the origin (second index) of the represented connections. Thus, in a three-layer feedforward network (Figure 3), the S1 neurons in the first hidden layer add an S1 × 1 bias vector b1 to the dot product of IW1,1 (i.e., the S1 × R weight matrix of the connections between the R inputs and the S1 neurons) and the input vector p. The sum of the bias b1 and the product IW1,1 · p is the so-called net input n1, with dimension S1 × 1. This sum is processed by the transfer function f1 to obtain the S1 × 1 output vector a1 of the first hidden layer, which is also the input vector of the second hidden layer. This layer, made of S2 neurons, elaborates the incoming information through the S2 × 1 bias vector b2, the S2 × S1 layer weight matrix LW2,1 (having as source 1 the neurons' outputs of the first layer and as destination 2 the neurons of the second layer), and the transfer function f2, to produce the S2 × 1 output vector a2. This process is repeated for all the hidden layers that belong to the architecture of the ANN.
The last layer, which produces the network output, is called the output layer; function-fitting neural networks often have just one output neuron, because most of the time the required prediction regards only one target value associated with a specific mechanical or physical parameter. Thus, the sum of the scalar bias b3 and the product LW3,2 · a2 is elaborated by the transfer function f3, typically of the linear type, to obtain the scalar network output, labelled y. The concepts discussed above can also be applied to the variables of the other equations.
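The layered computation of Equation (8) can be sketched in Python as follows. This is a hypothetical illustration with positive linear hidden layers and a linear output layer; the matrix shapes follow the S1 × R, S2 × S1 convention described above.

```python
import numpy as np

def poslin(n):
    # Positive linear (ReLU) transfer function, Equation (2)
    return np.maximum(n, 0.0)

def forward_two_hidden(p, IW11, b1, LW21, b2, LW32, b3):
    # Equation (8): y = f3(LW32 · f2(LW21 · f1(IW11 · p + b1) + b2) + b3),
    # with f1 = f2 = positive linear and a linear output function f3
    a1 = poslin(IW11 @ p + b1)    # first hidden layer output, S1 x 1
    a2 = poslin(LW21 @ a1 + b2)   # second hidden layer output, S2 x 1
    return float(LW32 @ a2 + b3)  # scalar network output y
```

With identity weight matrices, zero biases, and unit output weights, the network simply sums the (non-negative) inputs, which makes the shape bookkeeping easy to check.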

2.4. Model Selection Procedure and Error Estimation

The quantities and functions illustrated in Section 2.3 represent the networks' hyperparameters that are set in the model selection procedure in order to ensure a more efficient training process. As mentioned, the number of hidden layers (N) and the number of neurons in each hidden layer (S) were varied to obtain 15 case studies for each selected transfer function (positive linear or hyperbolic tangent): N was changed from 1 to 3 and S from 6 to 14, in increments of 2 neurons; this research design is reported in Table 1. In the literature, the use of a lower number of neurons and a greater number of hidden layers is recommended in order to increase the capacity of an ANN and avoid overfitting problems [72].
The hyperparameters' selection is based on the evaluation of the models' prediction capacity [83,89], which is obtained by means of statistical indicators. In fact, the aim of the model evaluation is to estimate the generalization error of the selected model, calculated on the basis of a defined test set. The term "generalization" refers to the ability of a model to return a good prediction on unseen data, i.e., data that have not been used for training. The most used statistical indicators, especially in the "conventional" procedure, are the mean squared error (MSE) and the correlation coefficient (R). The first one (whose analytical expression is reported in the right-hand side of Equation (4)) calculates the average squared difference between the experimental targets (t) and the ANN-computed outputs (y); it is also employed as the performance function (also called the "loss" function) to optimize the ANN training process [78]. In general, the lower the mean squared error, the closer the ANN-calculated data are to the experimental ones (i.e., the calculated values are dispersed closely around the experimental ones). The correlation coefficient R, instead, measures the linear relationship between t and y, and it is expressed by the ratio between the covariance of the two considered variables (t and y) and the product of their respective standard deviations:
R = \frac{\mathrm{Cov}(t, y)}{\sigma_t \, \sigma_y} = \frac{\sum_l (t_l - \bar{t})(y_l - \bar{y})}{\sqrt{\sum_l (t_l - \bar{t})^2 \, \sum_l (y_l - \bar{y})^2}}, \qquad l = 1, \ldots, M,
where \bar{t} and \bar{y} are the mean values of the targets and of the network outputs, respectively. In general, the closer the value of R is to unity, the stronger the linear relation between t and y, thus confirming that the training has been completed successfully (if R approaches 1 for the training data set) and that the degree of generalization achieved can be considered optimal (if R approaches 1 for the testing data set).
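A direct Python transcription of the correlation coefficient formula above can be written as follows (a hypothetical sketch; the expression is equivalent to Pearson's correlation coefficient):

```python
import numpy as np

def correlation_coefficient(t, y):
    # Ratio of the covariance of t and y to the product of their
    # standard deviations, written in the summation form above
    tc = t - t.mean()
    yc = y - y.mean()
    return np.sum(tc * yc) / np.sqrt(np.sum(tc ** 2) * np.sum(yc ** 2))
```

A perfect linear relation between targets and outputs gives R = 1 (or R = −1 for a perfectly inverse relation), regardless of the slope and intercept of that relation.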
The mean squared error and the correlation coefficient have already been used in previous performance analysis of some ANNs designed to predict the mechanical parameters of HMA mixtures [65,66,72,73,74,75,76,90].
When a k-fold CV is performed, instead, the ANN performance evaluation takes place through three statistical indicators [82,83,84,85,86], namely the CV mean squared error (MSE_cv), the MSEs' standard deviation (σ_cv), and the CV correlation coefficient (R_cv), which is the average of the test correlation coefficients (R_i) over the k rounds. A fourth indicator is given by the mean difference (Δ) between the training and test MSEs over the k folds, which allows the occurrence of an overfitting problem to be assessed. In fact, when a model over-adapts to the training data set, the difference between the training and test errors increases [82,85]. Therefore, the hyperparameters that yield the lowest values of MSE_cv (i.e., the highest R_cv), σ_cv, and Δ can be selected to identify the final model, which is then trained on the entire available data set.
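As a hypothetical sketch of this final selection step (the text does not prescribe a single scalar ranking rule, so the lexicographic ordering used below, lowest MSE_cv first with ties broken by σ_cv and then by Δ, is only an illustrative assumption):

```python
def select_model(results):
    # results: list of dicts, one per candidate ANN, with the
    # hypothetical keys "name", "mse_cv", "sigma_cv", and "delta".
    # Lexicographic preference: lowest CV error first, ties broken by
    # the MSEs' standard deviation and then by the train/test gap
    # (the overfitting indicator described above).
    best = min(results, key=lambda r: (r["mse_cv"], r["sigma_cv"], r["delta"]))
    return best["name"]
```

In practice, the four indicators would first be computed for each of the 15 candidate architectures of Table 1, and the selected network would then be retrained on the whole data set.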

2.5. Standard Approach

Five ANNs with different numbers of neurons were also developed for each mechanical parameter of interest using the MATLAB® ANN toolbox; these networks were trained with 80% of the available data set and tested on the remaining 20%. This configuration was chosen to match the data subdivision performed by the five-fold CV. The sampling process was completely random and was carried out just once by the standard approach implemented in the MATLAB® ANN toolbox. If such an initial subdivision of the data set, with the subsequent training and testing of the model, gives an unsatisfactory performance, it is suggested, on the basis of the researcher's experience, that the whole procedure be repeated with a new sampling of the data set; however, there is no automatic procedure aimed at optimizing the model performance. In order to evaluate the performance of the standard approach with respect to the sample variability, in this study, the random sampling was repeated 10 times to obtain 10 random subdivisions of the available data set.
The k-fold cross validation involves partitioning the available data set into k non-overlapping groups of equal size that represent the partition's classes of the initial data set [78,81]. Therefore, given that the performance of a model is evaluated on k test sets (which by definition cannot contain common elements), the average of the test errors across the k trials represents a more reliable estimation of the model's predictive capacity [78,81]. In the standard approach, by contrast, the subdivision of the data set into training and test subsets is randomly performed and, when this operation is repeated, the resulting test sets can contain common elements, which leads to a noisy estimation of the predictive performance for small-sized data sets [78,81]. Furthermore, the MATLAB® ANN toolbox carries out the data sampling as a "black box" and consequently does not allow the performed subdivision of the data set to be easily identified; thus, the variability of the training and test sets is difficult to establish.
Therefore, this study of the standard ANNs aimed to verify the wide variability in the models' predictive capacity that can be caused by data sample variability.

Standard ANN Structure

The designed standard ANNs have one hidden layer with a hyperbolic tangent transfer function and one output layer with a linear transfer function; this architecture is the default in the MATLAB® ANN toolbox. The number of neurons in the hidden layer ranges from 6 to 14 in increments of 2; as a result, five different standard ANNs were obtained for each mechanical parameter considered. The seven types of input data already mentioned above (Section 2.3.1) were considered. Moreover, all the hyperparameters fixed in the model selection procedure for the k-fold analysis were set to the same values (Section 2.3). In the following, these ANNs are simply called “standard”.
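As an illustration of this default architecture, the following Python/NumPy sketch builds one single-hidden-layer tanh network per neuron count, with a linear output layer. The weight initialization is a simplifying assumption; the actual models were built and trained with the MATLAB® ANN toolbox:

```python
import numpy as np

def init_standard_ann(n_inputs, n_hidden, seed=0):
    """Random weights for a one-hidden-layer network: tanh hidden layer,
    linear output layer (the MATLAB toolbox default described above).
    The Gaussian initialization here is an assumption for illustration."""
    rng = np.random.default_rng(seed)
    return {"W1": rng.normal(size=(n_hidden, n_inputs)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(size=(1, n_hidden)), "b2": np.zeros(1)}

def forward(net, x):
    """y = W2 * tanh(W1 * x + b1) + b2 (linear output layer)."""
    return net["W2"] @ np.tanh(net["W1"] @ x + net["b1"]) + net["b2"]

# five standard networks, one per neuron count; seven inputs as in Section 2.3.1
standard_anns = {n: init_standard_ann(7, n) for n in range(6, 15, 2)}
print(sorted(standard_anns))   # [6, 8, 10, 12, 14]
```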

3. Materials and Methods

The type of HMA mixture analyzed in the present research was dense asphalt concrete (AC). HMAs were produced with limestone or diabase aggregates and two types of bituminous binder: conventional or modified bitumen. The HMAs were designed within the framework of six different projects conducted in Greece. The mixtures were characterized by various bituminous binder percentages and aggregate grading curves. The preparation of the HMAs was conducted in the laboratory in order to support the mix design procedure as well as the stiffness evaluation of the designed mixtures.

3.1. Aggregates

The aggregates used in the current study were either limestone or diabase aggregates. The limestone aggregates came from the same quarry, while the diabase aggregates came from three different quarries; the results of the aggregates’ characterization, along with the test protocols adopted, are reported in Table 2.

3.2. Bitumen

As mentioned above, two types of bituminous binder were considered in the present research, a 50/70 conventional bitumen and a bitumen modified with styrene–butadiene–styrene polymers (SBS). The results of the fundamental tests carried out on the two bitumen types, along with the test protocols used, are presented in Table 3.

3.3. Hot Mix Asphalts (HMAs)

The hot mix asphalts used in the current study (dense asphalt concrete mixtures) had a maximum aggregate size equal to 20 mm (HMA20) in all cases. More specifically, the HMAs20 with 50/70 conventional bituminous binder, either with limestone or diabase aggregates, were coded as HMA20-5070-Lm or HMA20-5070-Db, respectively. Similarly, for the mixtures with SBS-modified bitumen, the following codes, HMA20-SBS-Lm and HMA20-SBS-Db, for limestone and diabase aggregates, respectively, were used. The specimens of all HMAs were compacted in the laboratory by means of an impact compactor (EN 12697-30). All the specimens were characterized by a diameter of 100 mm and an average thickness of 63.4 mm for the mixtures with the limestone aggregates and 63.7 mm for the mixtures with the diabase aggregates. In total, 69 specimens were prepared for the mixtures with limestone aggregates and 60 specimens were produced for the mixtures with diabase aggregates. Therefore, 129 specimens in total were used in the present research. The grading curves of the HMA20-5070-Lm and HMA20-SBS-Lm are reported in Figure 4. The grading curves of the HMA20-5070-Db and HMA20-SBS-Db are shown in Figure 5.
Tables S2–S5 report the specimens’ volumetric characteristics (EN 12697-8), Marshall stability (MS), and Marshall quotient (MQ) values (EN 12697-34) for each mixture investigated. The Marshall quotient was determined for each specimen as the ratio between Marshall stability and Marshall flow. Although the partial representativity of the Marshall results with respect to the bituminous mixtures’ response has already been discussed [91,92,93,94], the test is still widely used due to the extensive experience gained with it worldwide.
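The Marshall quotient computation is a simple per-specimen ratio; a minimal Python sketch, with hypothetical stability and flow values for illustration:

```python
def marshall_quotient(stability_kN, flow_mm):
    """Marshall quotient (kN/mm): Marshall stability divided by Marshall flow,
    computed per specimen as described in the text (EN 12697-34 quantities)."""
    return stability_kN / flow_mm

# hypothetical specimen values, for illustration only
print(marshall_quotient(11.5, 3.2))   # 3.59375 kN/mm
```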

3.4. Stiffness Modulus Evaluation

The stiffness modulus (ITSM) was measured, for all the specimens, according to EN 12697-26, Annex C (IT-CY), using the standard testing conditions: Temperature of 20 °C and a target deformation and rise time equal to 5 μm and 124 ms, respectively. In total, 129 specimens were evaluated for stiffness and the results are presented in Table S6 for the HMAs with limestone and diabase aggregates.

4. Results and Discussion

4.1. Artificial Neural Networks Modelling Results

The experimental data set, used for the training and testing of the ANNs, included 39 specimens for the HMA mixtures with 50/70 penetration bitumen and limestone aggregate, 30 for the ones with modified bitumen (25–55/75 penetration) and limestone aggregate, and 30 for each of the mixtures with diabase aggregate and a different type of bitumen (i.e., 129 specimens overall). According to the relevant literature [78,81], for such a data set size, k-fold cross validation is recommended.
The data of all the mixtures were fitted together to obtain a unique predictive model for each considered mechanical parameter; such models can take into account the different composition of the HMAs in terms of the bitumen content and type, aggregate type, filler to bitumen ratio, and volumetric properties.

4.1.1. Cross-Validation Results

Regarding the k-fold CV, the value of k was set to 5; consequently, given the 129 available experimental observations of the physical and mechanical properties of the HMA mixtures considered, each training data set was randomly made up of 103 observations, whereas the corresponding test data set was made up of the remaining 26 observations.
The four indicators’ values for each of the case studies considered in the model selection procedure (Table 1) are shown in Table 4, Table 5 and Table 6, one for each mechanical parameter considered; the models with the best prediction performance are indicated in bold type.
Table 4, which reports the ITSM models’ results, shows that the hyperbolic tangent transfer function leads to higher values of M S E c v , σ c v , and Δ as the number of hidden layers increases; correspondingly, R c v decreases for an increasing number of hidden layers (for example, from 0.878 to 0.827 for 10 neurons when passing from one to three hidden layers). This trend holds regardless of the number of neurons. Accordingly, the hyperbolic-tangent ANNs exhibit an overfitting problem when the number of hidden layers is greater than one.
Regarding the positive linear transfer function, instead, no general trend in the considered indicators can be observed with respect to the number of hidden layers or neurons. Nevertheless, with a single hidden layer the overall ANN performance is poor compared to the two- and three-hidden-layer cases; that is, with one hidden layer, the models show an underfitting problem (i.e., higher M S E c v and smaller Δ ). However, the optimal network structure should not be identified by analyzing each indicator individually, but rather by an overall assessment of the four indicators. Indeed, the model with 2 hidden layers and 12 neurons could be considered the best one when focusing only on the R or MSE values (0.891 and 145,174, respectively), but 3 hidden layers and 8 neurons ensure much lower σ c v and Δ values (28,282 vs. 39,205 and 34,142 vs. 74,872, respectively), with only a slight worsening of the R and MSE results (0.881 and 158,193, respectively). Based on this consideration, the best prediction capacity for the ITSM is achieved by a three-hidden-layer feedforward network that uses the positive linear transfer function and eight neurons in each hidden layer (Figure 6).
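The overall assessment of the four indicators can be organized by aggregating the per-fold results. The following Python sketch uses synthetic fold values; note that the definition of Δ as the mean train/test MSE gap is an assumption made here for illustration (the exact definitions are those given in the model selection procedure):

```python
import numpy as np

def cv_indicators(train_mses, test_mses, r_values):
    """Aggregate per-fold CV results into the four selection indicators.
    Delta is assumed here to be the mean absolute train/test MSE gap."""
    train_mses = np.asarray(train_mses, dtype=float)
    test_mses = np.asarray(test_mses, dtype=float)
    return {
        "MSE_cv": float(test_mses.mean()),              # mean test error
        "sigma_cv": float(test_mses.std(ddof=1)),       # spread over folds
        "R_cv": float(np.mean(r_values)),               # mean correlation
        "Delta": float(np.abs(test_mses - train_mses).mean()),
    }

# synthetic per-fold values (five folds), for illustration only
ind = cv_indicators([140e3] * 5, [150e3, 160e3, 155e3, 165e3, 158e3], [0.88] * 5)
print(ind["MSE_cv"], ind["Delta"])   # 157600.0 17600.0
```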
Similar considerations apply to the MS models, according to the results reported in Table 5. Therefore, the final MS model is a three-hidden-layer network characterized by a positive linear transfer function and eight neurons in each hidden layer (Figure 6).
Table 6 shows that the best results for the MQ parameter are obtained by one-hidden-layer networks, particularly with the positive linear transfer function and 12 neurons (Figure 7). However, for equal numbers of neurons and hidden layers, the differences between the performance indicators of the MQ models with respect to the transfer function type are less marked.
In summary, for the MQ parameter, a multiple hidden layers structure is not needed; for the MS parameter, a slight performance improvement can be observed when using three hidden layers, while a substantial predictive capacity improvement can be appreciated by means of a more complex neural network structure for the stiffness modulus. Among the mechanical parameters considered, the stiffness modulus represents the most important one for a rational performance-based characterization of the materials [1,66,72,91]. Therefore, the use of multiple hidden layers architectures and the positive linear transfer function is justified for modeling stiffness laboratory data with regard to the HMAs considered.
In the Supplementary Materials, the weights and bias values (Tables S8–S16), as well as the normalization factors, are reported for each of the proposed ANN models, so that other researchers can reproduce the network topologies developed in this study. Using Equations (7) and (9) along with the associated weights, bias values, and normalization factors, other researchers can compute predictions of the stiffness modulus and Marshall parameters for any mixture composition within the range of composition parameter values considered, without any additional effort with respect to the standard ANN approach.
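A forward pass through the final three-hidden-layer poslin network can be sketched as follows. This is an illustrative Python version: the array shapes and the toy identity weights used in the check are assumptions, while the actual weights, biases, and normalization factors are those reported in Tables S1 and S8–S16:

```python
import numpy as np

def poslin(x):
    """MATLAB's 'poslin' (positive linear) transfer function, i.e., max(0, x)."""
    return np.maximum(0.0, x)

def ann_forward(x_norm, IW, LWs, bs):
    """Forward pass of a multi-hidden-layer poslin network with a linear
    output layer (the structure of the final ITSM and MS models: weights
    IW1,1; LW2,1; LW3,2; LW4,3 and biases b1..b4). x_norm must already be
    normalized with the factors of Table S1."""
    a = poslin(IW @ x_norm + bs[0])          # first hidden layer
    for LW, b in zip(LWs[:-1], bs[1:-1]):    # remaining hidden layers
        a = poslin(LW @ a + b)
    return LWs[-1] @ a + bs[-1]              # linear output, to be de-normalized

# toy check with identity-like weights (2 inputs, 2 neurons per hidden layer)
I2 = np.eye(2)
y = ann_forward(np.array([1.0, -1.0]), I2, [I2, I2, np.ones((1, 2))],
                [np.zeros(2), np.zeros(2), np.zeros(2), np.zeros(1)])
print(y)   # [1.]  (the negative input is clipped by poslin)
```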
The highest accuracy ( R c v = 0.881 ) was obtained from the prediction of the ITSM, while R c v values greater than 0.830 were achieved for the MS ( R c v = 0.834 ) and MQ ( R c v = 0.859 ). These results can be considered satisfactory from an engineering point of view, taking into account the relatively low number of available specimens and the different characteristics of the HMAs considered.
The results of the training phases of the final ANN models are summarized in Figure 8, Figure 9 and Figure 10; as can be observed, good agreement is achieved for the stiffness modulus ( R = 0.965 , M S E = 50,238 ), Marshall stability ( R = 0.919 , M S E = 0.426 ), and quotient ( R = 0.919 , M S E = 0.187 ).
Comparisons between the experimental targets and ANN-calculated outputs for each mechanical parameter considered are reported in Table S7. The difference between the predicted and experimental data, expressed as a percentage value of the experimental data, is also reported (PD, percentage difference value). Both over-estimations and under-estimations can be observed for all the mechanical parameters considered. For the stiffness modulus, the PD absolute value is within the range 0.01 % 20.49 % , with an average value equal to 3.44 % , while the average values for the MS and MQ are equal to 4.14 % and 9.07 % , respectively.
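The PD value is computed per specimen as a simple relative difference; a minimal Python sketch with hypothetical stiffness values:

```python
def percentage_difference(predicted, experimental):
    """Signed PD: (predicted - experimental) as a percentage of the
    experimental value; its absolute value is the quantity tabulated."""
    return 100.0 * (predicted - experimental) / experimental

# hypothetical ITSM values (MPa), for illustration only
print(abs(percentage_difference(5150.0, 5000.0)))   # 3.0 (% of the measured value)
```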
Therefore, it can be concluded that cross validation is a useful approach to accurately evaluate a model’s prediction capacity and, consequently, to identify the network’s hyperparameters that lead to the highest possible accuracy for the predictive modeling problem of HMAs’ mechanical parameters.

4.1.2. Standard Approach Results

Concerning the standard ANNs, 103 observations (80% of the data set) were used for the training of the networks and the remaining 26 (20%) for testing.
Table 7, Table 8 and Table 9 show the “best” and the “worst” (over the 10 identified random partitions of the data set) prediction performance of the standard ANNs with a different number of neurons in the hidden layer on the three mechanical parameters considered, namely the stiffness modulus, Marshall stability, and quotient. The standard deviation of the test M S E , included in Table 7, Table 8 and Table 9, represents a measure of the variance of the performance scores over the 10 groups.
As can be observed, with the standard approach the network performance can vary significantly depending on the test set considered, whatever the number of neurons in the hidden layer; for example, for the stiffness modulus with 10 neurons, the test MSE values range from 88,046 to 550,323, with a standard deviation of 138,381 (Table 7). Over the 10 data samplings carried out, for each number of neurons considered, the models with the “worst” performance showed large differences between training and test MSE values; for the stiffness modulus, for example, the test MSE is an order of magnitude higher than the training MSE (550,323 vs. 47,722 for 10 neurons, Table 7). These results demonstrate that the standard ANN models, under the “worst” random data partition, cannot generalize adequately, independently of the number of neurons. In such cases it is therefore necessary to repeat the standard procedure, namely the random data sampling followed by the training and test phases, in a trial-and-error fashion. Good predictions (i.e., the “best” cases) can be obtained in this way, but the results depend on the data sets used for training and testing the model: a favorable data partition has to be sought through random repetitions of the procedure, without any rational guideline. Hence, the standard approach, based on a few data sampling repetitions (up to 10 in this study), cannot be considered the best-suited method for ANN model optimization, since its results are reliable only when the training and test sets have comparable variability.
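The sensitivity of the standard approach to the random 80/20 split can be reproduced in miniature. The following Python sketch uses synthetic data and an ordinary least-squares fit as a stand-in for the ANN (an assumption for illustration), repeating the 103/26 random sampling 10 times and reporting the spread of the test MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(129, 7))                       # synthetic stand-in data
y = X @ rng.normal(size=7) + rng.normal(scale=0.5, size=129)

test_mses = []
for seed in range(10):        # 10 independent random samplings, as in the study
    perm = np.random.default_rng(seed).permutation(129)
    test_idx, train_idx = perm[:26], perm[26:]      # 26 test / 103 training
    w, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    test_mses.append(float(np.mean((X[test_idx] @ w - y[test_idx]) ** 2)))

print(f"test MSE: min {min(test_mses):.3f}, max {max(test_mses):.3f}, "
      f"std {np.std(test_mses, ddof=1):.3f}")
```

Even on well-behaved synthetic data the test MSE varies from split to split, which is the variability the standard-deviation columns of Tables 7–9 quantify.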

5. Conclusions

In this study, an extensive ANN modelling activity was developed with regard to the results of laboratory tests on HMA mixtures for road pavements. The experimental data set included 129 specimens, prepared with different aggregate and bitumen types. The main innovative feature of the research was the ANN model selection procedure, based on k-fold cross validation. In fact, to overcome the limitations of the standard approach of splitting the available data set into subsets with specific functions (i.e., for training and testing a model), a five-fold cross validation was used to compare the performance of 30 different network architectures and select the one that allows the best prediction accuracy of ITSM, MQ, and MS to be achieved, on the basis of four indicators. Five standard ANNs were also developed for each mechanical parameter of interest to demonstrate the wide variability of the models’ predictive capacity that can be observed as a consequence of data sample variability.
The main results can be summarized as follows:
  • The standard approach has proved to be unreliable as the accuracy obtained for one test set can change considerably for a different test set. Hence, basic use of the MATLAB® ANN toolbox cannot ensure an identification of the ANN model that best fits the experimental data, especially in the case of relatively small data sets.
  • The five-fold cross validation allowed the predictive capacity of the mechanical parameters’ models to be objectively estimated. In particular, the three-hidden layers ANN with eight neurons in each hidden layer and a positive linear transfer function was found to be the best performing model for predicting ITSM and MS. The development of a multiple hidden layers structure represents the second main innovative feature of the research. The MQ final model is a one-hidden layer network with a positive linear transfer function and 12 neurons. In summary, it was verified that the development of a multiple hidden layers structure could be useful, depending on the mechanical parameter investigated.
  • The use of the hyperbolic tangent transfer function, for multiple layer network architectures, has led to models that are characterized by overfitting problems.
  • It was verified that the positive linear transfer function is more effective with respect to the common hyperbolic tangent function; this is the third main innovative feature of the research, regarding the applications of ANNs in the pavement engineering field.
  • Analytical expressions of the proposed ANN models, along with their associated weight values and normalization factors, were specified in detail to allow other researchers and engineers to compute predictions of the parameters analyzed, varying the composition of the mixture at will within the range of variability of the hot mix asphalt components considered in the study.
  • The good results achieved in the training and CV phases demonstrate that ANNs are an efficient data analysis system, useful for the development of HMA prediction models, since they were able to satisfactorily identify the complex intrinsic relation between the mixtures’ properties and the analyzed mechanical parameters.
  • The ANNs models represent a typical “black box” method, which is not physically based; such a major drawback is compensated for by the possibility of obtaining satisfactory predictions of the mechanical parameters considered in a relatively easy way, within the ambit of materials and parameters considered.
  • The use of predictive models elaborated by means of artificial neural networks can be really useful during the design phase of hot mix asphalts, because an accurate estimation of the analyzed mechanical parameters can be obtained very quickly for different input values of the mixture’s composition, thus avoiding additional experimental tests.
  • The reliability and effectiveness of the outlined ANN approach deserve further assessment by increasing the number of specimens and the variability of the mixtures’ properties, and also by considering different mechanical parameters, for instance, those associated with fatigue and permanent deformation resistance.

Supplementary Materials

The following are available online at https://www.mdpi.com/2076-3417/9/17/3502/s1, Table S1: Normalization parameters for the input and target variables, Table S2: Characteristics of HMA20-50/70-Lm specimens, Table S3: Characteristics of HMA20-SBS-Lm specimens, Table S4: Characteristics of HMA20-50/70-Db specimens, Table S5: Characteristics of HMA20-SBS-Db specimens, Table S6: Stiffness tests results for HMAs with limestone and diabase aggregates, Table S7: Comparison between experimental and ANN predicted data, Table S8: Component of the input weight matrix IW1,1 of the Stiffness Modulus final model, Table S9: Component of layers’ weight matrices LW2,1, LW3,2, LW4,3 of the Stiffness Modulus final model, Table S10: Component of the hidden layers’ bias vectors b1, b2, b3 and the output layer’s scalar bias b4 of the Stiffness Modulus final model, Table S11: Component of the input weight matrix IW1,1 of the Marshall Stability final model, Table S12: Component of layers’ weight matrices LW2,1, LW3,2, LW4,3 of the Marshall Stability final model, Table S13: Component of the hidden layers’ bias vectors b1, b2, b3 and the output layer’s scalar bias b4 of the Marshall Stability final model, Table S14: Component of the input weight matrix IW1,1 of the Marshall Quotient final model, Table S16: Component of the hidden layer’s bias vectors b1 and the output layer’s scalar bias b2 of the Marshall Quotient final model.

Author Contributions

Conceptualization, N.B.; Data curation, N.B., E.M. and M.M.; Formal analysis, N.B., E.M. and M.M.; Investigation, N.B., E.M. and M.M.; Methodology, N.B., E.M. and M.M.; Software, N.B. and M.M.; Supervision, N.B.; Writing—original draft, N.B., E.M. and M.M.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feiteira Dias, J.L.; Picado-Santos, L.G.; Capitão, S.D. Mechanical performance of dry process fine crumb rubber asphalt mixtures placed on the Portuguese road network. Constr. Build. Mater. 2014, 73, 247–254. [Google Scholar] [CrossRef]
  2. Liu, Q.T.; Wu, S.P. Effects of steel wool distribution on properties of porous asphalt concrete. Key Eng. Mater. 2014, 599, 150–154. [Google Scholar] [CrossRef]
  3. García, A.; Norambuena-Contreras, J.; Bueno, M.; Partl, M.N. Influence of steel wool fibers on the mechanical, thermal, and healing properties of dense asphalt concrete. J. Test. Eval. 2014, 42, 20130197. [Google Scholar] [CrossRef]
  4. Pasandín, A.R.; Pérez, I. Overview of bituminous mixtures made with recycled concrete aggregates. Constr. Build. Mater. 2015, 74, 151–161. [Google Scholar] [CrossRef]
  5. Zaumanis, M.; Mallick, R.B.; Frank, R. 100% hot mix asphalt recycling: Challenges and benefits. Transp. Res. Procedia 2016, 14, 3493–3502. [Google Scholar] [CrossRef]
  6. Wang, L.; Gong, H.; Hou, Y.; Shu, X.; Huang, B. Advances in pavement materials, design, characterisation, and simulation. Road Mater. Pavement Des. 2017, 18, 1–11. [Google Scholar] [CrossRef]
  7. Giunta, M.; Pisano, A.A. One dimensional viscoelastoplastic constitutive model for asphalt concrete. Multidiscip. Model. Mater. Struct. 2006, 2, 247–264. [Google Scholar] [CrossRef]
  8. Pasetto, M.; Baldo, N. Numerical visco-elastoplastic constitutive modelization of creep recovery tests on hot mix asphalt. J. Traffic Transp. Eng. 2016, 3, 390–397. [Google Scholar] [CrossRef]
  9. Erkens, S.M.J.G.; Liu, X.; Scarpas, A. 3D finite element model for asphalt concrete response simulation. Int. J. Geomech. 2002, 2, 305–330. [Google Scholar] [CrossRef]
  10. Pasetto, M.; Baldo, N. Computational analysis of the creep behaviour of bituminous mixtures. Constr. Build. Mater. 2015, 94, 784–790. [Google Scholar] [CrossRef]
  11. Park, S.W.; Kim, Y.R.; Schapery, R.A. A viscoelastic continuum damage model and its application to uniaxial behavior of asphalt concrete. Mech. Mater. 1996, 24, 241–255. [Google Scholar] [CrossRef]
  12. Lee, H.J.; Kim, Y.R. Uniaxial constitutive model accounting for viscoelasticity and damage evolution under cyclic loading. In Proceedings of the 1996 11th Conference on Engineering Mechanics, Fort Lauderdale, FL, USA, 19–22 May 1996; Volume 2, pp. 693–696. [Google Scholar]
  13. Lee, H.J.; Kim, Y.R. Viscoelastic constitutive model for asphalt concrete under cyclic loading. J. Eng. Mech. 1998, 124, 32–39. [Google Scholar] [CrossRef]
  14. Lee, H.J.; Kim, Y.R. Viscoelastic continuum damage model of asphalt concrete with healing. J. Eng. Mech. 1998, 124, 1224–1232. [Google Scholar] [CrossRef]
  15. Lee, H.J.; Daniel, J.S.; Kim, Y.R. Continuum damage mechanics-based fatigue model of asphalt concrete. J. Mater. Civ. Eng. 2000, 12, 105–112. [Google Scholar] [CrossRef]
  16. Underwood, B.S.; Kim, Y.R.; Guddati, M.N. Improved calculation method of damage parameter in viscoelastic continuum damage model. Int. J. Pavement Eng. 2010, 11, 459–476. [Google Scholar] [CrossRef]
  17. Hou, T.; Underwood, B.S.; Kim, Y.R. Fatigue performance prediction of North Carolina mixtures using the simplified viscoelastic continuum damage model. In Asphalt Paving Technology: Association of Asphalt Paving Technologists-Proceedings of the Technical Sessions, Sacramento, CA, USA, 7–10 March 2010; Association of Asphalt Paving Technologists (AAPT): Lino Lakes, MN, USA, 2010; Volume 79, pp. 35–73. [Google Scholar]
  18. Yun, T.; Kim, Y.R. Modeling of viscoplastic rate-dependent hardening-softening behavior of hot mix asphalt in compression. Mech. Time Depend. Mater. 2011, 15, 89–103. [Google Scholar] [CrossRef]
  19. Yun, T.; Kim, Y.R. A viscoplastic constitutive model for hot mix asphalt in compression at high confining pressure. Constr. Build. Mater. 2011, 25, 2733–2740. [Google Scholar] [CrossRef]
  20. Subramanian, V.; Guddati, M.N.; Kim, Y.R. A viscoplastic model for rate-dependent hardening for asphalt concrete in compression. Mech. Mater. 2013, 59, 142–159. [Google Scholar] [CrossRef]
  21. Cao, W.; Kim, Y.R. A viscoplastic model for the confined permanent deformation of asphalt concrete in compression. Mech. Mater 2015, 92, 235–247. [Google Scholar] [CrossRef]
  22. Underwood, S.B.; Kim, Y.R. Viscoelastoplastic continuum damage model for asphalt concrete in tension. J. Eng. Mech. 2011, 137, 732–739. [Google Scholar] [CrossRef]
  23. Yun, T.; Kim, Y.R. Viscoelastoplastic modeling of the behavior of hot mix asphalt in compression. KSCE J. Civ. Eng. 2013, 17, 1323–1332. [Google Scholar] [CrossRef]
  24. Di Benedetto, H.; Sauzéat, C.; Mondher, N.; Olard, F. Three-dimensional Thermo-viscoplastic Behaviour of Bituminous Materials: The DBN Model. Road Mater. Pavement Des. 2007, 8, 285–315. [Google Scholar] [CrossRef]
  25. Di Benedetto, H.; Sauzéat, C.; Clec’h, P. Anisotropy of bituminous mixture in the linear viscoelastic domain. Mech. Time Depend. Mater. 2016, 20, 281–297. [Google Scholar] [CrossRef]
  26. Masad, E.; Tashman, L.; Little, D.; Zbib, H. Viscoplastic modeling of asphalt mixes with the effects of anisotropy, damage and aggregate characteristics. Mech. Mater. 2005, 37, 1242–1256. [Google Scholar] [CrossRef]
  27. Masad, S.; Little, D.; Masad, E. Analysis of flexible pavement response and performance using isotropic and anisotropic material properties. J. Transp. Eng. 2006, 132, 342–349. [Google Scholar] [CrossRef]
  28. Masad, E.; Dessouky, S.; Little, D. Development of an elastoviscoplastic microstructural-based continuum model to predict permanent deformation in hot mix asphalt. Int. J. Geomech. 2007, 7, 119–130. [Google Scholar] [CrossRef]
  29. Saadeh, S.; Masad, E.; Little, D. Characterization of asphalt mix response under repeated loading using anisotropic nonlinear viscoelastic-viscoplastic model. J. Mater. Civ. Eng. 2007, 19, 912–924. [Google Scholar] [CrossRef]
  30. Saadeh, S.; Masad, E. On the relationship of microstructure properties of asphalt mixtures to their constitutive behavior. Int. J. Mater. Struct. Integr. 2010, 4, 186–214. [Google Scholar] [CrossRef]
  31. Darabi, M.K.; Abu Al-Rub, R.K.; Masad, E.A.; Huang, C.W.; Little, D.N. A thermo-viscoelastic-viscoplastic-viscodamage constitutive model for asphaltic materials. Int. J. Solids Struct. 2011, 48, 191–207. [Google Scholar] [CrossRef]
  32. Huang, C.W.; Abu Al-Rub, R.K.; Masad, E.A.; Little, D.N. Three-Dimensional Simulations of Asphalt Pavement Permanent Deformation Using a Nonlinear Viscoelastic and Viscoplastic Model. J. Mater. Civ. Eng. 2011, 23, 56–68. [Google Scholar] [CrossRef] [Green Version]
  33. Huang, C.W.; Abu Al-Rub, R.K.; Masad, E.A.; Little, D.N.; Airey, G.D. Numerical implementation and validation of a nonlinear viscoelastic and viscoplastic model for asphalt mixes. Int. J. Pavement Eng. 2011, 12, 433–447. [Google Scholar] [CrossRef]
  34. You, T.; Abu Al-Rub, R.K.; Darabi, M.K.; Masad, E.A.; Little, D.N. Three-dimensional microstructural modeling of asphalt concrete using a unified viscoelastic–viscoplastic–viscodamage model. Constr. Build. Mater. 2012, 28, 531–548. [Google Scholar] [CrossRef]
  35. Abu Al-Rub, R.K.; Darabi, M.K.; Huang, C.W.; Masad, E.A.; Little, D.N. Comparing finite element and constitutive modelling techniques for predicting rutting of asphalt pavements. Int. J. Pavement Eng. 2012, 13, 322–338. [Google Scholar] [CrossRef]
  36. Darabi, M.K.; Abu Al-Rub, R.K.; Masad, E.A.; Huang, C.W.; Little, D.N. A modified viscoplastic model to predict the permanent deformation of asphaltic materials under cyclic-compression loading at high temperatures. Int. J. Plast. 2012, 35, 100–134. [Google Scholar] [CrossRef]
  37. Darabi, M.K.; Abu Al-Rub, R.K.; Masad, E.A.; Little, D.N. Constitutive modeling of cyclic viscoplastic response of asphalt concrete. Transp. Res. Rec. 2013, 2373, 22–33. [Google Scholar] [CrossRef]
  38. Darabi, M.K.; Abu Al-Rub, R.K.; Masad, E.A.; Little, D.N. Cyclic Hardening-relaxation viscoplasticity model for asphalt concrete materials. J. Eng. Mech. 2013, 139, 832–847. [Google Scholar] [CrossRef]
  39. You, T.; Abu Al-Rub, R.K.; Masad, E.A.; Kassem, E.; Little, D.N. Three-dimensional microstructural modeling framework for dense-graded asphalt concrete using a coupled viscoelastic, viscoplastic, and viscodamage model. J. Mater. Civ. Eng. 2014, 26, 607–621. [Google Scholar] [CrossRef]
  40. You, T.; Masad, E.A.; Abu Al-Rub, R.K.; Kassem, E.; Little, D.N. Calibration and validation of a comprehensive constitutive model for asphalt mixtures. Transp. Res. Rec. 2014, 2447, 13–22. [Google Scholar] [CrossRef]
  41. Darabi, M.K.; Huang, C.W.; Bazzaz, M.; Masad, E.A.; Little, D.N. Characterization and validation of the nonlinear viscoelastic-viscoplastic with hardening-relaxation constitutive relationship for asphalt mixtures. Constr. Build. Mater. 2019, 216, 648–660. [Google Scholar] [CrossRef]
  42. Krishnan, J.M.; Rajagopal, K.R.; Masad, E.; Little, D.N. Thermomechanical Framework for the Constitutive Modeling of Asphalt Concrete. Int. J. Geomech. 2006, 6, 36–45. [Google Scholar] [CrossRef]
  43. Darabi, M.K.; Al-Rub, R.K.A.; Masad, E.A.; Little, D.N. Thermodynamic-based model for coupling temperature-dependent viscoelastic, viscoplastic, and viscodamage constitutive behavior of asphalt mixtures. Int. J. Numer. Anal. Methods Geomech. 2012, 36, 817–854. [Google Scholar] [CrossRef]
  44. Zhu, H.; Sun, L. A viscoelastic–viscoplastic damage constitutive model for asphalt mixtures based on thermodynamics. Int. J. Plast. 2013, 40, 81–100. [Google Scholar] [CrossRef]
  45. Chen, F.; Balieu, R.; Kringos, N. Thermodynamics-based finite strain viscoelastic-viscoplastic model coupled with damage for asphalt material. Int. J. Solids Struct. 2017, 129, 61–73. [Google Scholar] [CrossRef]
  46. Kringos, N.; Scarpas, T.; Kasbergen, C.; Selvadurai, P. Modelling of combined physical-mechanical moisture-induced damage in asphaltic mixes, Part 1: Governing processes and formulations. Int. J. Pavement Eng. 2008, 9, 115–128. [Google Scholar] [CrossRef]
  47. Shakiba, M.; Darabi, M.K.; Abu Al-Rub, R.K.; Masad, E.A.; Little, D.N. Microstructural modeling of asphalt concrete using a coupled moisture–mechanical constitutive relationship. Int. J. Solids Struct. 2014, 51, 4260–4279. [Google Scholar] [CrossRef]
  48. Balieu, R.; Kringos, N.; Chen, F.; Córdoba, E. Multiplicative viscoelastic-viscoplastic damage-healing model for asphalt-concrete materials. RILEM Bookseries 2016, 13, 235–240. [Google Scholar] [CrossRef]
  49. Rahmani, E.; Darabi, M.K.; Little, D.N.; Masad, E.A. Constitutive modeling of coupled aging-viscoelastic response of asphalt concrete. Constr. Build. Mater. 2017, 131, 1–15. [Google Scholar] [CrossRef]
  50. Oeser, M.; Scarpas, A.; Kasbergen, C.; Pellinen, T. Rheological Elements on the Basis of Fractional Derivatives. In Proceedings of the International Conference on Advanced Characterization of Pavement and Soil Engineering Materials, Athens, Greece, 20–22 June 2007; Volume 1, pp. 59–68. [Google Scholar]
  51. Pellinen, T.; Oeser, M.; Scarpas, A.; Kasbergen, C. Creep and Recovery of Non-linear Rheological Bodies. In Proceedings of the International Conference Advanced Characterization of Pavement and Soil Engineering Materials, Athens, Greece, 20–22 June 2007; Volume 1, pp. 49–58. [Google Scholar]
  52. Oeser, M.; Pellinen, T.; Scarpas, A.; Kasbergen, C. Studies on creep and recovery of rheological bodies based upon conventional and fractional formulations and their application on asphalt mixture. Int. J. Pavement Eng. 2008, 9, 373–386. [Google Scholar] [CrossRef]
  53. Celauro, C.; Fecarotti, C.; Pirrotta, A.; Collop, A.C. Experimental validation of a fractional model for creep/recovery testing of asphalt mixtures. Constr. Build. Mater. 2012, 36, 458–466. [Google Scholar] [CrossRef]
  54. Collop, A.C.; McDowell, G.R.; Lee, Y. Use of the distinct element method to model the deformation behavior of an idealized asphalt mixture. Int. J. Pavement Eng. 2004, 5, 1–7. [Google Scholar] [CrossRef]
  55. Dai, Q.; You, Z. Prediction of creep stiffness of asphalt mixture with micromechanical finite element and discrete element models. J. Eng. Mech. 2007, 133, 163–173. [Google Scholar] [CrossRef]
  56. Abbas, A.; Masad, E.; Papagiannakis, T.; Harman, T. Micromechanical modeling of the viscoelastic behavior of asphalt mixtures using the discrete-element method. Int. J. Geomech. 2007, 7, 131–139. [Google Scholar] [CrossRef]
  57. Dondi, G.; Simone, A.; Vignali, V.; Manganelli, G. Numerical and experimental study of granular mixes for asphalts. Powder Technol. 2012, 232, 31–40. [Google Scholar] [CrossRef]
  58. Kim, S.H.; Kim, N. Development of performance prediction models in flexible pavement using regression analysis method. KSCE J. Civ. Eng. 2006, 10, 91–96. [Google Scholar] [CrossRef]
  59. Laurinavičius, A.; Oginskas, R. Experimental research on the development of rutting in asphalt concrete pavements reinforced with geosynthetic materials. J. Civ. Eng. Manag. 2006, 12, 311–317. [Google Scholar] [CrossRef]
  60. Shukla, P.K.; Das, A. A re-visit to the development of fatigue and rutting equations used for asphalt pavement design. Int. J. Pavement Eng. 2008, 9, 355–364. [Google Scholar] [CrossRef]
  61. Asifur Rahman, A.S.M.; Mendez Larrain, M.M.; Tarefder, R.A. Development of a nonlinear rutting model for asphalt concrete based on Weibull parameters. Int. J. Pavement Eng. 2017, 1–10. [Google Scholar] [CrossRef]
  62. Specht, L.P.; Khatchatourian, O.; Brito, L.A.T.; Ceratti, J.A.P. Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks. Mater. Res. 2007, 10, 69–74. [Google Scholar] [CrossRef]
  63. Androjić, I.; Marović, I. Development of artificial neural network and multiple linear regression models in the prediction process of the hot mix asphalt properties. Can. J. Civ. Eng. 2017, 44, 994–1004. [Google Scholar] [CrossRef]
  64. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  65. Tapkın, S.; Çevik, A.; Uşar, Ü. Prediction of Marshall test results for polypropylene modified dense bituminous mixtures using neural networks. Expert Syst. Appl. 2010, 37, 4660–4670. [Google Scholar] [CrossRef]
  66. Baldo, N.; Manthos, E.; Pasetto, M. Analysis of the mechanical behaviour of asphalt concretes using artificial neural networks. Adv. Civ. Eng. 2018, 2018, 1650945. [Google Scholar] [CrossRef]
  67. Khodary, F. Comparative Study of Using Steel Slag Aggregate and Crushed limestone in Asphalt Concrete Mixtures. Int. J. Civ. Eng. Technol. 2015, 6, 73–82. [Google Scholar]
  68. Abdoli, M.A.; Fathollahi, A.; Babaei, R. The Application of Recycled Aggregates of Construction Debris in Asphalt Concrete Mix Design. Int. J. Environ. Res. 2015, 9, 489–494. [Google Scholar]
  69. Sarkar, D.; Pal, M.; Sarkar, A.K.; Mishra, U. Evaluation of the Properties of Bituminous Concrete Prepared from Brick-Stone Mix Aggregate. Adv. Mater. Sci. Eng. 2016, 2016, 2761038. [Google Scholar] [CrossRef]
  70. Zumrawi, M.M.E.; Khalill, F.O.A. Experimental Study of Steel Slag Used as Aggregate in Asphalt Mixture. Am. J. Constr. Build. Mater. 2017, 2, 26–32. [Google Scholar] [CrossRef]
  71. Salam Al-Ammari, M.A.; Jakarni, F.M.; Muniandy, R.; Hassim, S. The effect of aggregate and compaction method on the physical properties of hot mix asphalt. In IOP Conference Series: Materials Science and Engineering, Proceedings of the 10th Malaysian Road Conference and Exhibition 2018, Sunway Pyramid Convention Centre, Petaling Jaya, Selangor, Malaysia, 29–31 October 2018; Institute of Physics: London, UK, 2018; Volume 512, p. 012003. [Google Scholar]
  72. Xiao, F.; Amirkhanian, S.N. Artificial neural network approach to estimating stiffness behaviour of rubberized asphalt concrete containing reclaimed asphalt pavement. J. Transp. Eng. 2009, 135, 580–589. [Google Scholar] [CrossRef]
  73. Tapkın, S.; Çevik, A.; Uşar, Ü. Accumulated strain prediction of polypropylene modified Marshall specimens in repeated creep test using artificial neural networks. Expert Syst. Appl. 2009, 36, 11186–11197. [Google Scholar] [CrossRef]
  74. Ozgan, E. Artificial neural network based modelling of the Marshall stability of asphalt concrete. Expert Syst. Appl. 2011, 38, 6025–6030. [Google Scholar] [CrossRef]
  75. Tapkın, S.; Çevik, A.; Özcan, Ş. Utilising neural networks and closed form solutions to determine static creep behaviour and optimal polypropylene amount in bituminous mixtures. Mater. Res. 2012, 15, 865–883. [Google Scholar] [CrossRef]
  76. Shafabakhsh, G.H.; Jafari Ani, O.; Talebsafa, M. Artificial neural network modeling (ANN) for predicting rutting performance of nano-modified hot-mix asphalt mixtures containing steel slag aggregates. Constr. Build. Mater. 2015, 85, 136–143. [Google Scholar] [CrossRef]
  77. Zavrtanik, N.; Prosen, J.; Tušar, M.; Turk, G. The use of artificial neural networks for modeling air void content in aggregate mixture. Autom. Constr. 2016, 63, 155–161. [Google Scholar] [CrossRef]
  78. Bishop, C.M. Pattern Recognition and Machine Learning, 1st ed.; Jordan, M., Kleinberg, J., Schölkopf, B., Eds.; Springer: New York, NY, USA, 2006; pp. 32–33, 46–48, 225–281. [Google Scholar]
  79. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; De Jesús, O. Neuron Model and Network Architectures. In Neural Network Design, 2nd ed.; Hagan, M.T., Ed.; PWS Publishing: Boston, MA, USA, 2014; pp. 1–23. [Google Scholar]
  80. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  81. Goodfellow, I.; Bengio, Y.; Courville, A. Machine Learning Basics. In Deep Learning, 1st ed.; MIT Press: Cambridge, MA, USA, 2016; pp. 98–145. [Google Scholar]
  82. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. Cross-Validation. In An Introduction to Statistical Learning: With Applications in R, 1st ed.; Casella, G., Fienberg, S., Olkin, I., Eds.; Springer: New York, NY, USA, 2013; Volume 103, pp. 176–186. [Google Scholar]
  83. Hastie, T.; Friedman, J.; Tibshirani, R. Model Assessment and Selection. In The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009; pp. 219–249. [Google Scholar]
  84. Russell, S.; Norvig, P. Evaluating and Choosing the Best Hypothesis. In Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2009; pp. 708–713. [Google Scholar]
  85. Murphy, K.P. Some basic concepts in machine learning. In Machine Learning: A Probabilistic Perspective, 1st ed.; MIT Press: Cambridge, MA, USA, 2012; pp. 22–24. [Google Scholar]
  86. Kuhn, M.; Johnson, K. Resampling Techniques. In Applied Predictive Modeling, 1st ed.; Springer: New York, NY, USA, 2013; pp. 69–72. [Google Scholar]
  87. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for Activation Functions. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  88. Foresee, F.D.; Hagan, M.T. Gauss-Newton approximation to Bayesian regularization. In Proceedings of the International Conference on Neural Networks (ICNN’97), Houston, TX, USA, 12 June 1997; IEEE: Piscataway, NJ, USA, 1997. [Google Scholar] [CrossRef]
  89. Raschka, S. Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning. arXiv 2018, arXiv:1811.12808v2. [Google Scholar]
  90. Alas, M.; Ali, S.I.A. Prediction of the High-Temperature Performance of a Geopolymer Modified Asphalt Binder using Artificial Neural Networks. Int. J. Technol. 2019, 10, 417–427. [Google Scholar] [CrossRef]
  91. Patel, A.; Kulkarni, M.P.; Gumaste, S.D.; Bartake, P.P.; Rao, K.V.K.; Singh, D.N. A methodology for determination of resilient modulus of asphaltic concrete. Adv. Civ. Eng. 2011, 2011, 936395. [Google Scholar] [CrossRef]
  92. Bdour, A.N.; Khalayleh, Y.; Al-Omari, A.A. Assessing mechanical properties of hot mix asphalt with wire wool fibers. Adv. Civ. Eng. 2015, 2015, 795903. [Google Scholar] [CrossRef]
  93. Djakfar, L.; Bowoputro, H.; Prawiro, B.; Tarigan, N. Performance of recycled porous hot mix asphalt with gilsonite additive. Adv. Civ. Eng. 2015, 2015, 316719. [Google Scholar] [CrossRef]
  94. Saboo, N.; Kumar, P. Performance characterization of polymer modified asphalt binders and mixes. Adv. Civ. Eng. 2016, 2016, 5938270. [Google Scholar] [CrossRef]
Figure 1. Iterative cycle performed by the backpropagation algorithm.
Figure 2. Representation of the k-fold cross-validation process.
Figure 3. Representation of a three-layer feedforward neural network.
Figure 4. Grading curves of HMA20-50/70-Lm and HMA20-SBS-Lm.
Figure 5. Grading curves of HMA20-50/70-Db and HMA20-SBS-Db.
Figure 6. ANN final architecture to predict the Marshall stability or the stiffness modulus.
Figure 7. ANN final architecture to predict the Marshall quotient.
Figure 8. Training result for the stiffness modulus ANN model.
Figure 9. Training result for the Marshall stability ANN model.
Figure 10. Training result for the Marshall quotient ANN model.
Table 1. Case studies considered in the model selection procedure (columns: number of neurons in each hidden layer).

| Number of Hidden Layers | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|
| 1 | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 |
| 2 | Case 6 | Case 7 | Case 8 | Case 9 | Case 10 |
| 3 | Case 11 | Case 12 | Case 13 | Case 14 | Case 15 |
Table 2. Limestone and diabase aggregates’ characterization results.

| Property | Limestone | Diabase |
|---|---|---|
| Los Angeles coefficient (%), EN 1097-2 | 29 | 25 |
| Polished Stone Value (%), EN 1097-8 | - | 55 to 60 |
| Flakiness Index (%), EN 933-3 | 23 | 18 |
| Sand equivalent (%), EN 933-8 | 79 | 59 |
| Methylene blue value (mg/g), EN 933-9 | 3.3 | 8.3 |
Table 3. Bitumen characterization results.

| Property | 50/70 | SBS Modified |
|---|---|---|
| Penetration (0.1 mm), EN 1426 | 64 | 45 |
| Softening point (°C), EN 1427 | 45.6 | 78.8 |
| Elastic recovery (%), EN 13398 | - | 97.5 |
| Fraass breaking point (°C), EN 12593 | −7.0 | −15.0 |
| After aging, EN 12607-1 (RTFOT): | | |
| Retained penetration | - | 84 |
| Difference in softening point (°C) | - | −2.4 |
Table 4. Estimates of stiffness modulus models’ predictive performance by five-fold cross validation.

Positive linear transfer function (columns: number of neurons in each hidden layer):

| Number of Hidden Layers | Test Performance Indicator | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|---|
| 1 | Rcv | 0.846 | 0.850 | 0.848 | 0.846 | 0.868 |
| 1 | MSEcv | 202,780 | 196,190 | 202,814 | 203,437 | 175,946 |
| 1 | σcv | 46,847 | 36,339 | 57,885 | 51,690 | 53,240 |
| 1 | Δ | 59,087 | 48,988 | 56,148 | 57,797 | 72,179 |
| 2 | Rcv | 0.854 | 0.882 | 0.865 | 0.891 | 0.869 |
| 2 | MSEcv | 191,976 | 156,746 | 176,240 | 145,174 | 166,594 |
| 2 | σcv | 33,192 | 56,422 | 23,829 | 39,205 | 21,230 |
| 2 | Δ | 48,569 | 89,536 | 63,076 | 74,872 | 79,697 |
| 3 | Rcv | 0.877 | 0.881 | 0.874 | 0.861 | 0.853 |
| 3 | MSEcv | 160,046 | 158,193 | 162,541 | 183,040 | 185,407 |
| 3 | σcv | 34,825 | 28,282 | 69,860 | 22,155 | 36,253 |
| 3 | Δ | 79,948 | 34,142 | 87,021 | 80,075 | 84,381 |

Hyperbolic tangent transfer function (columns: number of neurons in each hidden layer):

| Number of Hidden Layers | Test Performance Indicator | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|---|
| 1 | Rcv | 0.887 | 0.882 | 0.878 | 0.874 | 0.886 |
| 1 | MSEcv | 154,281 | 161,179 | 169,033 | 175,835 | 163,735 |
| 1 | σcv | 69,853 | 51,885 | 72,373 | 77,341 | 73,525 |
| 1 | Δ | 103,312 | 116,789 | 128,318 | 134,397 | 124,963 |
| 2 | Rcv | 0.850 | 0.842 | 0.841 | 0.843 | 0.871 |
| 2 | MSEcv | 224,187 | 216,034 | 217,966 | 214,564 | 180,046 |
| 2 | σcv | 81,612 | 89,560 | 80,727 | 85,947 | 78,098 |
| 2 | Δ | 190,187 | 185,333 | 191,538 | 188,164 | 149,247 |
| 3 | Rcv | 0.847 | 0.846 | 0.827 | 0.834 | 0.827 |
| 3 | MSEcv | 220,608 | 219,592 | 253,499 | 235,365 | 260,324 |
| 3 | σcv | 97,328 | 100,695 | 118,710 | 101,692 | 137,263 |
| 3 | Δ | 194,246 | 192,887 | 229,045 | 213,887 | 235,464 |
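The aggregation behind the five-fold cross-validation figures in Tables 4–6 can be illustrated with a short sketch: each fold in turn is held out for testing while a model is fit on the remaining folds, and the fold-wise test results are then summarized as a mean MSE (MSEcv), its standard deviation over folds (σcv), and a mean correlation coefficient (Rcv). The snippet below is a minimal illustration on synthetic data using a plain least-squares model as a stand-in for the ANN; it is not the authors’ pipeline, and all names and values are hypothetical.

```python
import numpy as np

def five_fold_cv(X, y, k=5, seed=0):
    """Return (MSEcv, sigma_cv, Rcv): mean test MSE, its std over folds,
    and the mean test correlation coefficient for a least-squares model."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    mses, rs = [], []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit ordinary least squares on the k-1 training folds (ANN stand-in).
        Xtr = np.c_[np.ones(len(train)), X[train]]
        w, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        # Evaluate on the held-out fold.
        pred = np.c_[np.ones(len(test)), X[test]] @ w
        mses.append(np.mean((pred - y[test]) ** 2))
        rs.append(np.corrcoef(pred, y[test])[0, 1])
    return np.mean(mses), np.std(mses), np.mean(rs)

# Synthetic example: a noisy linear target.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)
mse_cv, sigma_cv, r_cv = five_fold_cv(X, y)
print(mse_cv, sigma_cv, r_cv)
```

Because every observation is used for testing exactly once, the resulting estimate is less sensitive to a single lucky or unlucky split than a one-off hold-out set.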
Table 5. Estimates of Marshall stability models’ predictive performance by five-fold cross validation.

Positive linear transfer function (columns: number of neurons in each hidden layer):

| Number of Hidden Layers | Test Performance Indicator | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|---|
| 1 | Rcv | 0.787 | 0.809 | 0.797 | 0.799 | 0.793 |
| 1 | MSEcv | 1.071 | 0.964 | 1.034 | 1.028 | 1.062 |
| 1 | σcv | 0.201 | 0.197 | 0.251 | 0.270 | 0.183 |
| 1 | Δ | 0.502 | 0.329 | 0.486 | 0.443 | 0.476 |
| 2 | Rcv | 0.821 | 0.817 | 0.808 | 0.811 | 0.830 |
| 2 | MSEcv | 0.904 | 0.921 | 0.964 | 0.963 | 0.882 |
| 2 | σcv | 0.330 | 0.199 | 0.162 | 0.256 | 0.249 |
| 2 | Δ | 0.323 | 0.282 | 0.461 | 0.490 | 0.394 |
| 3 | Rcv | 0.795 | 0.834 | 0.816 | 0.810 | 0.830 |
| 3 | MSEcv | 1.024 | 0.851 | 0.984 | 0.943 | 0.873 |
| 3 | σcv | 0.215 | 0.170 | 0.335 | 0.340 | 0.268 |
| 3 | Δ | 0.502 | 0.399 | 0.454 | 0.465 | 0.408 |

Hyperbolic tangent transfer function (columns: number of neurons in each hidden layer):

| Number of Hidden Layers | Test Performance Indicator | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|---|
| 1 | Rcv | 0.807 | 0.823 | 0.808 | 0.812 | 0.820 |
| 1 | MSEcv | 1.089 | 0.999 | 1.065 | 1.076 | 1.025 |
| 1 | σcv | 0.377 | 0.379 | 0.326 | 0.357 | 0.448 |
| 1 | Δ | 0.671 | 0.631 | 0.645 | 0.683 | 0.626 |
| 2 | Rcv | 0.828 | 0.826 | 0.817 | 0.794 | 0.812 |
| 2 | MSEcv | 0.974 | 0.988 | 1.026 | 1.442 | 1.032 |
| 2 | σcv | 0.402 | 0.392 | 0.434 | 0.860 | 0.351 |
| 2 | Δ | 0.653 | 0.681 | 0.734 | 1.205 | 0.753 |
| 3 | Rcv | 0.505 | 0.741 | 0.591 | 0.420 | 0.544 |
| 3 | MSEcv | 1.635 | 2.158 | 2.374 | 2.037 | 3.063 |
| 3 | σcv | 0.571 | 1.133 | 0.862 | 1.234 | 0.867 |
| 3 | Δ | 0.961 | 0.448 | 0.739 | 0.284 | 0.862 |
Table 6. Estimates of Marshall quotient models’ predictive performance by five-fold cross validation.

Positive linear transfer function (columns: number of neurons in each hidden layer):

| Number of Hidden Layers | Test Performance Indicator | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|---|
| 1 | Rcv | 0.853 | 0.850 | 0.853 | 0.859 | 0.856 |
| 1 | MSEcv | 0.309 | 0.308 | 0.311 | 0.292 | 0.297 |
| 1 | σcv | 0.077 | 0.063 | 0.044 | 0.051 | 0.036 |
| 1 | Δ | 0.110 | 0.102 | 0.107 | 0.092 | 0.099 |
| 2 | Rcv | 0.851 | 0.848 | 0.855 | 0.852 | 0.843 |
| 2 | MSEcv | 0.309 | 0.316 | 0.301 | 0.326 | 0.309 |
| 2 | σcv | 0.057 | 0.062 | 0.055 | 0.042 | 0.060 |
| 2 | Δ | 0.127 | 0.139 | 0.116 | 0.112 | 0.132 |
| 3 | Rcv | 0.672 | 0.839 | 0.838 | 0.842 | 0.842 |
| 3 | MSEcv | 0.508 | 0.320 | 0.321 | 0.314 | 0.315 |
| 3 | σcv | 0.393 | 0.054 | 0.079 | 0.052 | 0.054 |
| 3 | Δ | 0.136 | 0.118 | 0.139 | 0.135 | 0.148 |

Hyperbolic tangent transfer function (columns: number of neurons in each hidden layer):

| Number of Hidden Layers | Test Performance Indicator | 6 | 8 | 10 | 12 | 14 |
|---|---|---|---|---|---|---|
| 1 | Rcv | 0.855 | 0.857 | 0.855 | 0.856 | 0.856 |
| 1 | MSEcv | 0.317 | 0.314 | 0.317 | 0.312 | 0.316 |
| 1 | σcv | 0.087 | 0.078 | 0.077 | 0.075 | 0.083 |
| 1 | Δ | 0.179 | 0.176 | 0.178 | 0.171 | 0.179 |
| 2 | Rcv | 0.858 | 0.856 | 0.858 | 0.858 | 0.855 |
| 2 | MSEcv | 0.317 | 0.320 | 0.320 | 0.317 | 0.326 |
| 2 | σcv | 0.088 | 0.108 | 0.086 | 0.105 | 0.091 |
| 2 | Δ | 0.187 | 0.195 | 0.191 | 0.194 | 0.196 |
| 3 | Rcv | 0.674 | 0.673 | 0.548 | 0.681 | 0.483 |
| 3 | MSEcv | 0.525 | 0.544 | 0.679 | 0.653 | 1.133 |
| 3 | σcv | 0.390 | 0.471 | 0.595 | 0.358 | 0.493 |
| 3 | Δ | 0.194 | 0.229 | 0.137 | 0.101 | 0.185 |
Table 7. Effect of data sample variability on the stiffness modulus model’s performance (columns: number of neurons in each hidden layer, test and train sets).

| Model Performance | Statistical Indicator | 6 (Test) | 6 (Train) | 8 (Test) | 8 (Train) | 10 (Test) | 10 (Train) | 12 (Test) | 12 (Train) | 14 (Test) | 14 (Train) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | MSE | 60,863 | 77,337 | 88,399 | 68,540 | 88,046 | 68,607 | 62,276 | 64,221 | 49,390 | 63,133 |
| Best | R | 0.970 | 0.942 | 0.950 | 0.949 | 0.943 | 0.951 | 0.914 | 0.960 | 0.985 | 0.949 |
| Worst | MSE | 476,723 | 46,517 | 387,522 | 33,425 | 550,323 | 47,722 | 503,815 | 53,480 | 333,328 | 12,046 |
| Worst | R | 0.716 | 0.970 | 0.728 | 0.977 | 0.640 | 0.968 | 0.560 | 0.964 | 0.738 | 0.992 |
| Test MSE standard deviation | - | 130,512 | - | 86,718 | - | 138,381 | - | 151,325 | - | 97,811 | - |
Table 8. Effect of data sample variability on the Marshall stability model’s performance (columns: number of neurons in each hidden layer, test and train sets).

| Model Performance | Statistical Indicator | 6 (Test) | 6 (Train) | 8 (Test) | 8 (Train) | 10 (Test) | 10 (Train) | 12 (Test) | 12 (Train) | 14 (Test) | 14 (Train) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | MSE | 0.677 | 0.442 | 0.712 | 0.422 | 0.804 | 0.491 | 0.427 | 0.547 | 0.560 | 0.510 |
| Best | R | 0.857 | 0.918 | 0.868 | 0.925 | 0.854 | 0.905 | 0.923 | 0.895 | 0.905 | 0.900 |
| Worst | MSE | 1.714 | 0.305 | 1.751 | 0.307 | 1.909 | 0.248 | 3.607 | 0.254 | 2.023 | 0.396 |
| Worst | R | 0.549 | 0.944 | 0.775 | 0.941 | 0.754 | 0.953 | 0.438 | 0.949 | 0.469 | 0.926 |
| Test MSE standard deviation | - | 0.371 | - | 0.395 | - | 0.364 | - | 0.872 | - | 0.513 | - |
Table 9. Effect of data sample variability on the Marshall quotient model’s performance (columns: number of neurons in each hidden layer, test and train sets).

| Model Performance | Statistical Indicator | 6 (Test) | 6 (Train) | 8 (Test) | 8 (Train) | 10 (Test) | 10 (Train) | 12 (Test) | 12 (Train) | 14 (Test) | 14 (Train) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | MSE | 0.197 | 0.163 | 0.207 | 0.157 | 0.208 | 0.161 | 0.204 | 0.158 | 0.217 | 0.159 |
| Best | R | 0.897 | 0.933 | 0.919 | 0.933 | 0.901 | 0.929 | 0.912 | 0.933 | 0.907 | 0.934 |
| Worst | MSE | 0.621 | 0.102 | 0.729 | 0.137 | 0.484 | 0.134 | 0.605 | 0.113 | 0.633 | 0.103 |
| Worst | R | 0.743 | 0.956 | 0.767 | 0.939 | 0.854 | 0.936 | 0.761 | 0.951 | 0.721 | 0.955 |
| Test MSE standard deviation | - | 0.120 | - | 0.161 | - | 0.123 | - | 0.107 | - | 0.132 | - |
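The best/worst rows in Tables 7–9 show how strongly a single random split can sway the outcome; given the per-fold test results, they amount to a min/max selection on the test MSE plus a dispersion measure across folds. A minimal sketch with invented fold values (the population standard deviation is used here; the paper’s convention may differ):

```python
import statistics

# Hypothetical per-fold results: (test MSE, train MSE) for each of five folds.
fold_results = [(0.68, 0.44), (0.71, 0.42), (0.80, 0.49), (0.43, 0.55), (1.71, 0.31)]

test_mses = [test for test, _ in fold_results]
best = min(fold_results, key=lambda pair: pair[0])   # fold with the lowest test MSE
worst = max(fold_results, key=lambda pair: pair[0])  # fold with the highest test MSE
spread = statistics.pstdev(test_mses)                # fold-to-fold test-MSE variability
print(best, worst, round(spread, 3))
```

A large gap between the best and worst folds, or a large spread, signals that the model’s apparent accuracy depends heavily on which specimens land in the test set.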

Baldo, N.; Manthos, E.; Miani, M. Stiffness Modulus and Marshall Parameters of Hot Mix Asphalts: Laboratory Data Modeling by Artificial Neural Networks Characterized by Cross-Validation. Appl. Sci. 2019, 9, 3502. https://doi.org/10.3390/app9173502
