Figure 1.
KNN algorithm for predictor or dependent variables and .
Figure 2.
Diagram of an artificial neuron.
Figure 3.
Example of MLP network.
Figure 4.
Flowchart of the RF algorithm.
Figure 5.
Flowchart of the SVR algorithm.
Figure 6.
Random subsampling and k-fold cross-validation procedures. (a) A 10 random subsampling procedure by randomly partitioning the sample into 75% for training and 25% for testing; (b) k-fold cross-validation procedure, example with .
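The two validation procedures of Figure 6 can be sketched in plain Python (an illustrative sketch, not the authors' code; the 75:25 ratio and the repeat count of 10 follow the caption, and the fold-assignment pattern in `k_fold` is one common choice):

```python
import random

def random_subsampling(n, repeats=10, train_frac=0.75, seed=0):
    """Repeated random 75:25 train/test partitions (Figure 6a)."""
    rng = random.Random(seed)
    splits = []
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        cut = int(round(train_frac * n))
        splits.append((sorted(idx[:cut]), sorted(idx[cut:])))
    return splits

def k_fold(n, k=5):
    """k-fold cross-validation indices (Figure 6b): each sample
    appears in exactly one test fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(f)), f) for f in folds]
```

With random subsampling the test sets of different repeats may overlap; with k-fold cross-validation every sample is tested exactly once.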
Figure 7.
Ranking a set of models. (a) Discrete ranking; (b) continuous ranking.
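The two ranking schemes of Figure 7 can be implemented as follows (a sketch under the assumption that, for error measures, the discrete rank is the position after sorting ascending and the continuous rank is the best value divided by the model's value, consistent with the near-unity continuous ranks reported in Table 13):

```python
def discrete_rank(values):
    """Rank 1 = best (smallest error); ties broken by order of appearance."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

def continuous_rank(values):
    """Score in (0, 1]: the best model scores 1, the others best/value."""
    best = min(values)
    return [best / v for v in values]
```

Discrete ranking only orders the models, while continuous ranking also preserves how close each model is to the best one.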
Figure 8.
Comparison of KNN models, ; mean values of statistical measures on the test data for the 10 random models (75:25 ratio). (a) RMSE; (b) MAE; (c) MAPE.
Figure 9.
Comparison of KNN models, ; RMSE for the training and test data of the 10 random models (75:25 ratio), with the Manhattan distance. (a) Training; (b) test.
Figure 10.
Relative error of the MLP networks for varying numbers of neurons in the hidden layer; number of iterations , ReLU activation function, L-BFGS solver. (a) (without L2 regularisation); (b) ; (c) ; (d) .
Figure 11.
Behaviour of RF models for varying proportions of samples for training each base estimator; maximum depth , minimum of 20 cases for parent nodes and 10 for child nodes, number of trees . (a) RMSE, training dataset; (b) RMSE, test dataset; (c) MAE, training dataset; (d) MAE, test dataset.
Figure 12.
Behaviour of RF models for varying numbers of trees; maximum depth , minimum of 20 cases for parent nodes and 10 for child nodes, proportion of samples for training each base estimator . (a) RMSE, number of trees between 10 and 1000; (b) RMSE, number of trees between two and 1000.
Figure 13.
Behaviour of RF models for varying maximum depths; proportion of samples for training each base estimator = 0.75. (a) RMSE, minimum of two cases for parent nodes and one for child nodes; (b) RMSE, minimum of 20 cases for parent nodes and 10 for child nodes.
Figure 14.
Behaviour of RF models for varying numbers of minimum cases for parent and child nodes; maximum depth , percentage of samples for training each base estimator = 0.75. (a) Minimum of two cases for parent nodes and one for child nodes; (b) minimum of 10 cases for parent nodes and five for child nodes; (c) minimum of 20 cases for parent nodes and 10 for child nodes; (d) minimum of 40 cases for parent nodes and 20 for child nodes; (e) minimum of 50 cases for parent nodes and 25 for child nodes.
Figure 15.
Gradient-boosted tree models with varying numbers of stages or built trees, for a minimum of two cases for parent nodes and one for child nodes; maximum depth = 3. (a) RMSE, learning rate ; (b) MAE, learning rate ; (c) RMSE, learning rate ; (d) MAE, learning rate .
Figure 16.
Gradient-boosted tree models with varying learning rates, for a minimum of two cases for parent nodes and one for child nodes; maximum depth . (a) RMSE, number of estimators ; (b) MAE, number of estimators ; (c) RMSE, number of estimators ; (d) MAE, number of estimators .
Figure 17.
Gradient-boosted tree models with varying pruning parameters; learning rate , proportion of samples for training each base learner . (a) RMSE. Minimum of two cases for parent nodes and one for child nodes; (b) RMSE. Minimum of 20 cases for parent nodes and 10 for child nodes.
Figure 18.
Results for XGBoost models; 10 models randomly generated with a 75:25 ratio by means of simple random sampling, maximum depth , learning rate , minimum sum of instance weight needed in a child node , proportion of samples for training each base estimator. (a) RMSE, ; (b) MAE, ; (c) RMSE, ; (d) MAE, .
Figure 19.
Results for XGBoost models with varying ; learning rate , number of estimators , maximum depth , , minimum sum of instance weight needed in a child node , dropout rate , . (a) RMSE, training dataset; (b) RMSE, test dataset; (c) RMSE, all datasets.
Figure 20.
Results for XGBoost models with varying dropout rate; learning rate , number of estimators , maximum depth , , minimum sum of instance weight needed in a child node , , . (a) RMSE, training dataset; (b) RMSE, test dataset; (c) RMSE, all datasets.
Figure 21.
RMSE for the SVR models; RBF kernel, . (a) ; (b) .
Figure 22.
RMSE for the SVR models for varying values of C; RBF kernel. (a) , , ; (b) , , ; (c) , , ; (d) , , .
Figure 23.
Flowchart of the WAE method.
Figure 24.
Results for WAE models; , proportion of samples for training each base learner, learning rate , , . (a) RMSE, training dataset; (b) RMSE, test dataset.
Figure 25.
Comparison of the KNN, MLP, RF, GBR, XGBoost and SVR models (see Table 15 for the x-axis labels). (a) MAE; (b) RMSE.
Figure 26.
Comparison of the GBR, XGBoost and WAE models (see Table 15 for the x-axis labels). (a) MAE, test dataset; (b) MAE, training and all datasets; (c) RMSE, test dataset; (d) RMSE, training and all datasets; (e) MAPE, test dataset; (f) MAPE, training and all datasets.
Figure 27.
Ranking of the models of Table 15. (a) Discrete ranking; (b) continuous ranking.
Figure 28.
Normalised feature importance for RF, GBR and XGBoost models. (a) RF model; (b) GBR model; (c) XGBoost model.
Figure 29.
Normalised feature permutation importance for KNN, SVR and MLP models. (a) KNN model; (b) SVR model; (c) MLP model.
Table 1.
Characteristics of types of LWAC. “Adapted with permission from Ref. [19]. Copyright 2019, Elsevier”.
LWAC Type | LWA Particle Density (kg/m3) | LWAC Fixed Density (kg/m3) |
---|---|---|
1 | 482 | 1700 |
2 | 482 | 1900 |
3 | 1019 | 1700 |
4 | 1019 | 1900 |
Table 2.
Variables of the experimental dataset. “Adapted with permission from Ref. [6]. Copyright 2018, Elsevier”.
Variable Description | Min | Max | Mean | Median | | |
---|---|---|---|---|---|---|
LWAC fixed density (kg/m3) | 1700 | 1900 | 1800 | 1800 | 1700 | 1900 |
LWA particle density (kg/m3) | 482 | 1019 | 750.5 | 750.5 | 482 | 1019 |
Concrete laying time (min) | 15 | 90 | 48.75 | 45.00 | 18.75 | 82.50 |
Vibration time (s) | 0 | 80 | 30 | 20 | 10 | 40 |
Experimental dry density (kg/m3) | 1069.80 | 2486.84 | 1673.35 | 1677.15 | 1533.35 | 1810.84 |
P-wave velocity (m/s) | 3044.25 | 5253.73 | 3778.89 | 3718.49 | 3520.48 | 3945.65 |
Segregation index | 0.845 | 1.136 | 1 | 0.999 | 0.978 | 1.021 |
Compressive strength (MPa) | 2.99 | 50.72 | 21.55 | 20.25 | 14.37 | 28.76 |
Table 3.
Statistical measures for the best MLR and RT models in [19]. “Adapted with permission from Ref. [19]. Copyright 2019, Elsevier”.
Technique | Model | Measures | Estimate |
---|---|---|---|
MLR | Without | | 0.767 |
validation | MAE | 3.394 |
| MAPE | 19.04 |
| RMSE | 4.327 |
Best model | | 0.766 |
with validation | MAE | 3.396 |
| MAPE | 18.86 |
| RMSE | 4.332 |
CHAID | Without | | 0.829 |
validation | MAE | 2.829 |
| MAPE | 15.49 |
| RMSE | 3.705 |
Best model | | 0.820 |
with validation | MAE | 2.928 |
| MAPE | 16.22 |
| RMSE | 3.808 |
Table 4.
Statistical measures for the best ANN model in [6]. “Adapted with permission from Ref. [6]. Copyright 2018, Elsevier”.
Measures | Estimate |
---|---|
| 0.825 |
MAE | 2.897 |
MAPE | 15.85 |
RMSE | 3.745 |
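The statistical measures reported throughout the tables (MAE, MAPE, RMSE) can be computed as follows (a standard sketch; MAPE is expressed in percent, as in the tables, and assumes no zero targets):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

RMSE penalises large errors more heavily than MAE, which is why the two measures can rank models differently.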
Table 5.
Statistical measures on the test data for KNN models; 10 models randomly generated with a 75:25 ratio by means of simple random sampling.
Model | St. Measures | Median | Mean | Confidence Interval (95%) |
---|---|---|---|---|
Manhattan | MAE | 3.3871 | 3.3158 | [3.1676, 3.4640] |
distance | MAPE | 17.7148 | 17.8784 | [16.7483, 19.0085] |
| RMSE | 4.3912 | 4.3852 | [4.1707, 4.5997] |
Euclidean | MAE | 3.4199 | 3.4384 | [3.3320, 3.5448] |
distance | MAPE | 18.0707 | 18.5220 | [17.6788, 19.3652] |
| RMSE | 4.4556 | 4.4921 | [4.3254, 4.6588] |
Manhattan | MAE | 3.3981 | 3.3318 | [3.1932, 3.4704] |
distance | MAPE | 17.5864 | 17.9849 | [16.8763, 19.0935] |
| RMSE | 4.3845 | 4.3733 | [4.1709, 4.5757] |
Euclidean | MAE | 3.4646 | 3.4592 | [3.3369, 3.5815] |
distance | MAPE | 18.3423 | 18.6691 | [17.6019, 19.7363] |
| RMSE | 4.4300 | 4.4808 | [4.3001, 4.6615] |
Manhattan | MAE | 3.3550 | 3.3134 | [3.1579, 3.4689] |
distance | MAPE | 17.4759 | 17.9372 | [16.8324, 19.0420] |
| RMSE | 4.3177 | 4.3365 | [4.1150, 4.5580] |
Euclidean | MAE | 3.4203 | 3.4474 | [3.3160, 3.5788] |
distance | MAPE | 18.2648 | 18.6589 | [17.5924, 19.7254] |
| RMSE | 4.4611 | 4.4920 | [4.3151, 4.6689] |
Table 6.
Statistical measures for KNN models with the Manhattan distance.
Model | Measures | K = 5 | K = 7 |
---|---|---|---|
Without | | 0.8365 | 0.8115 |
validation | MAE | 2.7562 | 2.9255 |
| MAPE | 14.5033 | 15.5158 |
| RMSE | 3.6246 | 3.8916 |
With validation | | 0.8038 | 0.7930 |
(mean of 10 models) | MAE | 3.0006 | 3.1044 |
| MAPE | 15.9617 | 16.6643 |
| RMSE | 3.9694 | 4.0775 |
With validation | | 0.8120 | 0.7974 |
(best model) | MAE | 2.9567 | 3.0597 |
| MAPE | 15.6981 | 16.3948 |
| RMSE | 3.8863 | 4.0340 |
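A minimal KNN regressor with the Manhattan distance, the configuration compared in Tables 5 and 6, can be sketched as follows (an illustration, not the study's implementation; features are assumed to be pre-scaled so that no variable dominates the distance):

```python
def manhattan(a, b):
    """L1 (Manhattan) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(X_train, y_train, x, k=5):
    """Predict by averaging the targets of the k nearest training points."""
    order = sorted(range(len(X_train)), key=lambda i: manhattan(X_train[i], x))
    nearest = order[:k]
    return sum(y_train[i] for i in nearest) / k
```

Larger k smooths the prediction surface (compare the K = 5 and K = 7 columns of Table 6) at the cost of some local detail.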
Table 7.
Statistical measures for MLP networks; ReLU activation function, L-BFGS solver.
Number of Neurons, | Measures | Mean of Test Sets | with Validation (Mean of 10 Models) |
---|---|---|---|
| | 0.8014 | 0.8043 |
8, | MAE | 3.0467 | 3.0404 |
| MAPE | 16.7511 | 16.6353 |
| RMSE | 3.9617 | 3.9628 |
| | 0.8045 | 0.8290 |
30, | MAE | 2.9960 | 2.8415 |
| MAPE | 16.3343 | 15.4097 |
| RMSE | 3.9328 | 3.7050 |
| | 0.8032 | 0.8317 |
40, | MAE | 2.9536 | 2.7956 |
| MAPE | 15.9433 | 15.0795 |
| RMSE | 3.9437 | 3.6765 |
Table 8.
Results for RF models; 10 models randomly generated with a 75:25 ratio by means of simple random sampling, maximum depth = 5, proportion of samples for training each base estimator . Statistical measures for test data: (a) minimum of two cases for parent nodes and one for child nodes, (b) minimum of 10 cases for parent nodes and five for child nodes, (c) minimum of 20 cases for parent nodes and 10 for child nodes, (d) minimum of 40 cases for parent nodes and 20 for child nodes, (e) minimum of 50 cases for parent nodes and 25 for child nodes.
Model | St. Measures | Median | Mean | Confidence Interval (95%) |
---|---|---|---|---|
(a) | MAE | 2.9328 | 2.9530 | [2.8378, 3.0682] |
| MAPE | 16.2147 | 16.2340 | [15.4197, 17.0483] |
| RMSE | 3.8316 | 3.8239 | [3.6988, 3.9490] |
(b) | MAE | 2.9556 | 2.9557 | [2.8570, 3.0544] |
| MAPE | 16.3162 | 16.4000 | [15.5729, 17.2271] |
| RMSE | 3.8380 | 3.8279 | [3.7241, 3.9317] |
(c) | MAE | 2.9492 | 2.9809 | [2.8953, 3.0665] |
| MAPE | 16.7791 | 16.7259 | [15.8576, 17.5942] |
| RMSE | 3.8571 | 3.8434 | [3.7560, 3.9308] |
(d) | MAE | 3.1185 | 3.1390 | [3.0210, 3.2570] |
| MAPE | 17.5578 | 17.7231 | [16.7236, 18.7226] |
| RMSE | 4.0653 | 4.0443 | [3.9077, 4.1809] |
(e) | MAE | 3.1682 | 3.1887 | [3.0705, 3.3069] |
| MAPE | 17.7840 | 18.0240 | [16.9883, 19.0597] |
| RMSE | 4.1343 | 4.1074 | [3.9662, 4.2486] |
Table 9.
Statistical measures for RF; minimum of 20 cases for parent nodes and 10 for child nodes, maximum depth , proportion of samples for training each base estimator.
p | Measures | Mean of Test Sets | with Validation (Mean of 10 Models) |
---|---|---|---|
| | 0.8135 | 0.8316 |
0.75 | MAE | 2.9809 | 2.8207 |
| MAPE | 16.7259 | 15.7916 |
| RMSE | 3.8434 | 3.6784 |
| | 0.8125 | 0.8343 |
0.9 | MAE | 2.9867 | 2.7955 |
| MAPE | 16.7081 | 15.6373 |
| RMSE | 3.8534 | 3.6486 |
| | 0.8119 | 0.8359 |
1 | MAE | 2.9925 | 2.7808 |
| MAPE | 16.7218 | 15.5416 |
| RMSE | 3.8603 | 3.6303 |
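The "proportion of samples for training each base estimator" varied in Table 9 and Figure 11 is the subsampling step of a bagged ensemble such as RF. A minimal sketch of that mechanism (a trivial mean-predicting base learner stands in for a decision tree here; this is not the study's RF configuration):

```python
import random

def bagged_predict(fit, X, y, x, n_trees=100, p=0.75, seed=0):
    """Average the predictions of n_trees base estimators, each trained
    on a random subsample of proportion p (drawn without replacement)."""
    rng = random.Random(seed)
    m = max(1, int(round(p * len(X))))
    preds = []
    for _ in range(n_trees):
        idx = rng.sample(range(len(X)), m)
        model = fit([X[i] for i in idx], [y[i] for i in idx])
        preds.append(model(x))
    return sum(preds) / len(preds)

# A hypothetical base estimator: predicts the subsample mean of y.
mean_fit = lambda X, y: (lambda x: sum(y) / len(y))
```

Averaging over many subsample-trained estimators reduces variance, which is why RMSE on the test set in Figure 11 is relatively insensitive to p once enough trees are used.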
Table 10.
Statistical measures for gradient-boosted tree models; learning rate , number of estimators , maximum depth , proportion of samples for training each base learner; (a) minimum of two cases for parent nodes and one for child nodes, (b) minimum of 20 cases for parent nodes and 10 for child nodes.
Model | Measures | Mean of Test Sets | with Validation (Mean of 10 Models) |
---|---|---|---|
(a) | | 0.8185 | 0.8762 |
| MAE | 2.9183 | 2.4472 |
| MAPE | 16.2005 | 13.6766 |
| RMSE | 3.7921 | 3.1528 |
(b) | | 0.8204 | 0.8646 |
| MAE | 2.8988 | 2.5283 |
| MAPE | 16.1394 | 14.1537 |
| RMSE | 3.7725 | 3.2979 |
(a) | | 0.8182 | 0.8765 |
| MAE | 2.9209 | 2.4396 |
| MAPE | 16.2251 | 13.6751 |
| RMSE | 3.7959 | 3.1500 |
(b) | | 0.8201 | 0.8590 |
| MAE | 2.9032 | 2.5825 |
| MAPE | 16.2201 | 14.4576 |
| RMSE | 3.7747 | 3.3650 |
(a) | | 0.8178 | 0.8732 |
| MAE | 2.9338 | 2.4690 |
| MAPE | 16.2570 | 13.8063 |
| RMSE | 3.7990 | 3.1907 |
(b) | | 0.8175 | 0.8632 |
| MAE | 2.9318 | 2.5466 |
| MAPE | 16.3024 | 14.2430 |
| RMSE | 3.8021 | 3.3145 |
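The interplay between the learning rate and the number of stages (Table 10, Figures 15 and 16) can be illustrated with a toy boosted-stump regressor on 1-D inputs (a from-scratch sketch of the gradient-boosting idea, not scikit-learn's GradientBoostingRegressor):

```python
def fit_stump(xs, residuals):
    """Best single-split (depth-1) regression stump on 1-D inputs."""
    best = None
    for threshold in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, threshold, lmean, rmean)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, n_estimators=50, learning_rate=0.1):
    """Each stage fits a stump to the current residuals and adds it,
    shrunk by the learning rate."""
    base = sum(ys) / len(ys)
    stumps = []
    pred = [base] * len(ys)
    for _ in range(n_estimators):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + learning_rate * stump(x) for x, p in zip(xs, pred)]
    return lambda x: base + learning_rate * sum(s(x) for s in stumps)
```

A smaller learning rate needs more stages to reach the same training error, which is the trade-off swept in Figures 15 and 16.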
Table 11.
Results for XGBoost models; learning rate , number of estimators , maximum depth , proportion of samples for training each base learner, minimum sum of instance weight needed in a child node .
Model | Measures | Mean of Test Set | with Validation (Mean of 10 Models) |
---|---|---|---|
| | 0.8195 | 0.8772 |
| MAE | 2.9042 | 2.4375 |
| MAPE | 16.1032 | 13.6244 |
| RMSE | 3.7821 | 3.1402 |
| | 0.8196 | 0.8773 |
| MAE | 2.9020 | 2.4371 |
| MAPE | 16.0921 | 13.6238 |
| RMSE | 3.7812 | 3.1396 |
| | 0.8172 | 0.8761 |
| MAE | 2.9261 | 2.4408 |
| MAPE | 16.2213 | 13.6526 |
| RMSE | 3.806 | 3.1545 |
| | 0.8175 | 0.8761 |
| MAE | 2.9256 | 2.4416 |
| MAPE | 16.2244 | 13.6555 |
| RMSE | 3.8027 | 3.1549 |
| | 0.8172 | 0.8731 |
| MAE | 2.9356 | 2.4693 |
| MAPE | 16.2774 | 13.8164 |
| RMSE | 3.8051 | 3.1926 |
| | 0.8167 | 0.8730 |
| MAE | 2.9393 | 2.4698 |
| MAPE | 16.3016 | 13.8291 |
| RMSE | 3.8104 | 3.1936 |
Table 12.
Statistical measures on the test data for SVR models; 10 models randomly generated with a 75:25 ratio by means of simple random sampling, RBF kernel, , .
C | Measures | Mean of Test Sets | with Validation (Mean of 10 Models) |
---|---|---|---|
| | 0.8120 | 0.8237 |
0.3 | MAE | 3.0000 | 2.8778 |
| MAPE | 16.2627 | 15.5709 |
| RMSE | 3.8586 | 3.7637 |
| | 0.8124 | 0.8272 |
0.4 | MAE | 2.9944 | 2.852 |
| MAPE | 16.1798 | 15.4092 |
| RMSE | 3.8543 | 3.7252 |
| | 0.8115 | 0.8299 |
0.5 | MAE | 3.0004 | 2.832 |
| MAPE | 16.182 | 15.2909 |
| RMSE | 3.8639 | 3.6970 |
| | 0.8111 | 0.8134 |
0.6 | MAE | 2.9986 | 2.8152 |
| MAPE | 16.1617 | 15.1992 |
| RMSE | 3.8679 | 3.6751 |
| | 0.8101 | 0.8333 |
0.7 | MAE | 3.0035 | 2.8016 |
| MAPE | 16.1789 | 15.1239 |
| RMSE | 3.8783 | 3.6591 |
Table 13.
Ranking analysis of WAE models; D: discrete rank, C: continuous rank, learning rate , number of estimators , maximum depth , , , , .
WAE Model | Dataset | | MAE | MAPE | RMSE | Total Rank | Overall Rank |
---|---|---|---|---|---|---|---|
D/C | D/C | D/C | D/C | D/C | D/C |
---|---|---|---|---|---|
(1) | Training | 2/0.9925 | 3/0.9882 | 6/0.9779 | 2/0.9962 | 13/3.9548 | 21/7.9505 |
, , | Test | 1/1 | 1/1 | 5/0.9957 | 1/1 | 8/3.9957 |
(2) | Training | 3/0.9878 | 2/0.9923 | 2/0.9904 | 3/0.9940 | 10/3.9645 | 32/7.9222 |
, , | Test | 5/0.9796 | 6/0.9928 | 6/0.9953 | 5/0.9900 | 22/3.9577 |
(3) | Training | 1/1 | 1/1 | 1/1 | 1/1 | 4/4 | 25/7.9594 |
, , | Test | 6/0.9790 | 5/0.9934 | 4/0.9972 | 6/0.9898 | 21/3.9594 |
(4) | Training | 6/0.9691 | 5/0.9847 | 4/0.9858 | 6/0.9844 | 21/3.9240 | 29/7.8930 |
, , | Test | 3/0.9829 | 2/0.9945 | 1/1 | 2/0.9916 | 8/3.9690 |
(5) | Training | 4/0.9700 | 4/0.9852 | 3/0.9863 | 5/0.9849 | 16/3.9264 | 26/7.8949 |
, , | Test | 2/0.9829 | 3/0.9944 | 2/0.9998 | 3/0.9914 | 10/3.9685 |
(6) | Training | 5/0.9700 | 6/0.9835 | 5/0.9823 | 4/0.9850 | 20/3.9208 | 35/7.8861 |
, , | Test | 4/0.9823 | 4/0.9941 | 3/0.9975 | 4/0.9914 | 15/3.9653 |
Table 14.
Results for WAE models; learning rate , number of estimators , maximum depth , , , , .
Model | Measures | Mean of Test Sets | with Validation (Mean of 10 Models) |
---|---|---|---|
(2) | | 0.8225 | 0.8769 |
| MAE | 2.8720 | 2.4278 |
| MAPE | 15.7700 | 13.4316 |
| RMSE | 3.7509 | 3.1438 |
(4) | | 0.8201 | 0.8783 |
| MAE | 2.8768 | 2.4159 |
| MAPE | 15.8450 | 13.4064 |
| RMSE | 3.7570 | 3.1261 |
(5) | | 0.8219 | 0.8783 |
| MAE | 2.8764 | 2.4167 |
| MAPE | 15.8422 | 13.4105 |
| RMSE | 3.7564 | 3.1270 |
(6) | | 0.822 | 0.8782 |
| MAE | 2.8755 | 2.4135 |
| MAPE | 15.8056 | 13.3628 |
| RMSE | 3.7562 | 3.1272 |
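The WAE combination of Figure 23 is, in essence, a weighted average of base-model predictions. A minimal sketch (the weights are assumed to be supplied, e.g. derived from validation performance; the study's exact weighting scheme is not reproduced here):

```python
def weighted_average_ensemble(predictions, weights):
    """Combine per-model prediction lists using normalised weights.

    predictions: list of prediction lists, one per base model.
    weights: one non-negative weight per base model.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    n = len(predictions[0])
    return [sum(w * preds[i] for w, preds in zip(norm, predictions))
            for i in range(n)]
```

Giving larger weights to the stronger base learners lets the ensemble track the best individual model while still averaging out part of its error.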
Table 15.
Comparison of models based on statistical measures (mean of the 10 models); maximum depth, number of trees, proportion of samples for training each base estimator, dropout ratio, best model.
Method | Parameters | | MAE | MAPE | RMSE |
---|---|---|---|---|---|
MLR [19] | - | 0.766 | 3.396 | 18.86 | 4.332 |
ANN [6] | 6 neurons | 0.825 | 2.897 | 15.85 | 3.745 |
RT [19] | CHAID | 0.820 | 2.928 | 16.22 | 3.808 |
KNN (1) | Manhattan distance, | 0.8038 | 3.0006 | 15.9617 | 3.9694 |
KNN (2) | Manhattan distance, | 0.7930 | 3.1044 | 16.6643 | 4.0775 |
MLP (1) | 30 neurons, L-BFGS, ReLU, | 0.8290 | 2.8415 | 15.4097 | 3.7050 |
MLP (2) | 40 neurons, L-BFGS, ReLU, | 0.8317 | 2.7956 | 15.0795 | 3.6765 |
RF (1) | , , | 0.8316 | 2.8207 | 15.7916 | 3.6784 |
RF (2) | , , | 0.8343 | 2.7955 | 15.6373 | 3.6486 |
RF (3) | , , | 0.8359 | 2.7808 | 15.5416 | 3.6303 |
GBR (1) | , , | 0.8762 | 2.4472 | 13.6766 | 3.1528 |
GBR (2) | , , | 0.8765 | 2.4396 | 13.6751 | 3.1500 |
GBR (3) | , , | 0.8732 | 2.4690 | 13.8063 | 3.1907 |
XGBoost (1) | , , , , , | 0.8772 | 2.4375 | 13.6244 | 3.1402 |
XGBoost (2) | , , , , , | 0.8773 | 2.4371 | 13.6238 | 3.1396 |
SVR (1) | , , | 0.8272 | 2.8520 | 15.4092 | 3.7252 |
SVR (2) | , , | 0.8299 | 2.8320 | 15.2909 | 3.6970 |
SVR (3) | , , | 0.8134 | 2.8152 | 15.1992 | 3.6751 |
SVR (4) | , , | 0.8333 | 2.8016 | 15.1239 | 3.6591 |
WAE (1) | , , , , | 0.8757 | 2.4259 | 13.3133 | 3.1598 |
WAE (2) | , , , , | 0.8769 | 2.4278 | 13.4316 | 3.1438 |
WAE (3) | , , , , | 0.8760 | 2.4415 | 13.5316 | 3.1557 |
WAE (4) | , , , , | 0.8783 | 2.4159 | 13.4064 | 3.1261 |
WAE (5) | , , , , | 0.8783 | 2.4167 | 13.4105 | 3.1270 |
WAE (6) | , , , , | 0.8782 | 2.4135 | 13.3628 | 3.1272 |
Table 16.
Comparison of GBR, XGBoost and WAE models; 10 models randomly generated with a 75:25 ratio by means of simple random sampling (see Table 15 for the description of the models).
Method | Dataset | St. Measures | Median | Mean | Confidence Interval (95%) |
---|---|---|---|---|---|
GBR (2) | Training | | 0.8952 | 0.8954 | [0.8934, 0.8974] |
| MAE | 2.2738 | 2.2792 | [2.2533, 2.3051] |
| MAPE | 12.7757 | 12.8251 | [12.6434, 13.0068] |
| RMSE | 2.9005 | 2.9009 | [2.8671, 2.9347] |
Test | | 0.8144 | 0.8182 | [0.8078, 0.8286] |
| MAE | 2.9413 | 2.9209 | [2.7964, 3.0454] |
| MAPE | 15.9138 | 16.2251 | [15.3890, 17.0612] |
| RMSE | 3.8291 | 3.7959 | [3.6467, 3.9441] |
All | | 0.8765 | 0.8765 | [0.8741, 0.8789] |
| MAE | 2.4403 | 2.4396 | [2.4197, 2.4595] |
| MAPE | 13.6819 | 13.6751 | [13.5575, 13.7927] |
| RMSE | 3.1504 | 3.1500 | [3.1185, 3.1815] |
XGBoost (2) | Training | | 0.8954 | 0.8960 | [0.8942, 0.8978] |
| MAE | 2.2936 | 2.2822 | [2.2524, 2.3120] |
| MAPE | 12.7911 | 12.8010 | [12.6456, 12.9564] |
| RMSE | 2.8853 | 2.8926 | [2.8605, 2.9247] |
Test | | 0.8160 | 0.8196 | [0.8111, 0.8281] |
| MAE | 2.8887 | 2.9020 | [2.7886, 3.0154] |
| MAPE | 15.9980 | 16.0921 | [15.3427, 16.8415] |
| RMSE | 3.8046 | 3.7812 | [3.6523, 3.9101] |
All | | 0.8773 | 0.8773 | [0.8754, 0.8792] |
| MAE | 2.4384 | 2.4371 | [2.4181, 2.4561] |
| MAPE | 13.6637 | 13.6238 | [13.4818, 13.7658] |
| RMSE | 3.1394 | 3.1396 | [3.1148, 3.1644] |
WAE (6) | Training | | 0.8967 | 0.8965 | [0.8950, 0.8980] |
| MAE | 2.2643 | 2.2595 | [2.2340, 2.2850] |
| MAPE | 12.5287 | 12.5486 | [12.3974, 12.6998] |
| RMSE | 2.8769 | 2.8857 | [2.8593, 2.9121] |
Test | | 0.8185 | 0.8220 | [0.8137, 0.8303] |
| MAE | 2.8447 | 2.8755 | [2.7697, 2.9813] |
| MAPE | 15.7647 | 15.8056 | [15.0947, 16.5165] |
| RMSE | 3.7892 | 3.7562 | [3.6275, 3.8849] |
All | | 0.8785 | 0.8782 | [0.8761, 0.8803] |
| MAE | 2.4112 | 2.4135 | [2.3987, 2.4283] |
| MAPE | 13.3920 | 13.3628 | [13.2487, 13.4769] |
| RMSE | 3.1240 | 3.1272 | [3.0997, 3.1547] |
Table 17.
Discrete ranking analysis of GBR, XGBoost and WAE models (see Table 15 for the description of the models).
Model | Dataset | | MAE | MAPE | RMSE | Total Rank | Overall Rank |
---|---|---|---|---|---|---|---|
GBR (2) | Training | 1 | 2 | 1 | 1 | 5 | 9 |
Test | 1 | 1 | 1 | 1 | 4 |
XGBoost (2) | Training | 2 | 1 | 2 | 2 | 7 | 15 |
Test | 2 | 2 | 2 | 2 | 8 |
WAE (6) | Training | 3 | 3 | 3 | 3 | 12 | 24 |
Test | 3 | 3 | 3 | 3 | 12 |