Article

Efficient Weighted Ensemble Method for Predicting Peak-Period Postal Logistics Volume: A South Korean Case Study

Postal & Logistics Technology Research Center, Electronics and Telecommunications Research Institute, Daejeon 34129, Republic of Korea
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 11962; https://doi.org/10.3390/app122311962
Submission received: 7 October 2022 / Revised: 18 November 2022 / Accepted: 19 November 2022 / Published: 23 November 2022
(This article belongs to the Topic Advanced Systems Engineering: Theory and Applications)

Abstract

Demand prediction for postal delivery services is useful for managing logistics operations optimally. In particular, during holiday periods in South Korea, namely the Lunar New Year and Korean Thanksgiving Day (Chuseok), parcel volumes increase sharply compared with usual periods, which makes it hard to maintain reliable operations in mail centers. This study proposes a Multilayer Perceptron-based weighted ensemble method for predicting accepted parcel volumes during these special periods. The proposed method consists of two main phases: the first phase enriches the training dataset with synthetic samples generated by unsupervised learning; the second phase builds two Multilayer Perceptron models for prediction using features derived from internal and external factors. The final result is estimated as the weighted average of the two models' predictions. Experiments on 25 Korean mail center datasets provided by Korea Post show better performance than the compared methods.

1. Introduction

With the rapid development of e-commerce, the demand for delivery services is growing fast. Especially during long traditional holidays, including the Lunar New Year and Korean Thanksgiving Day in South Korea, e-commerce reaches its peak and delivery volume rises dramatically. During these peak periods, logistics organizations have difficulty maintaining normal operations. The peak period is defined as running from the Monday two weeks before the start of a holiday until two working days after the last day of the holiday, and the average length of a peak period is 21 days. Demand prediction for parcel delivery services in special periods is critical for developing an optimal plan that provides efficient logistics services while avoiding problems such as insufficient logistics resources and labor shortages [1,2,3,4].
Accurate demand prediction helps to provide reliability to the delivery process, such as in accepting parcels from customers, sorting and transporting them to delivery stations, and delivering to the recipients on time [5]. There have been many approaches based on different perspectives proposed on logistics demand forecasting.
Statistical methods are widely used to learn economic factors affecting increased mail volume. The authors of [6] used the Vector Error Correction (VEC) model based on three economic factors, namely Gross Domestic Product (GDP), the telecommunication price index, and the mail price index, to predict future mail demand. In [7], the elastic coefficient method was used to predict total logistics volume by considering the ratio between the growth rate of total logistics volume and that of GDP for 21 cities in Southeast Asia. Rogan et al. proposed a non-weighted symmetric Savitzky-Golay filter modification of a simple Seasonal Autoregressive Integrated Moving Average (SARIMA) model for forecasting the monthly volume of postal services in the Republic of Serbia; the proposed method gave 30% less error than the plain SARIMA model [8]. The authors of [9] considered three models, namely Autoregressive Integrated Moving Average (ARIMA), Holt-Winters decomposition, and Multiple Linear Regression (MLR), to forecast quarterly postal traffic in Portugal. Toshkollari et al. used Holt's Exponential Smoothing model to predict the yearly number of postal services in Albania; the model was built on data from 1993 to 2015 and evaluated on predictions for 2016 and 2017 [2].
Recently, machine learning and deep learning models have become more popular and show better performance for predicting logistics demand [10,11,12,13]. Pu et al. proposed a Least Squares Support Vector Machine (LS-SVM) optimized by a genetic algorithm for forecasting logistics demand and compared it with a plain LS-SVM and a backpropagation (BP) neural network on a dataset from 1991 to 2003 in the China statistical yearbook [10]. Another enhanced SVM, whose penalty parameter and radial basis function were optimized by an ant colony algorithm, was proposed in [11]; experiments on statistics of Qingdao's logistics demand from 1999 to 2017 showed promising results. In [12], SARIMA and Long Short-Term Memory (LSTM) models were evaluated on predicting the monthly volume of international express mail services in the Republic of Serbia; the LSTM model gave about 35% smaller Root Mean Square Error (RMSE) than the SARIMA model on 48 monthly observations. The authors of [13] used an LSTM model with two-dimensional input to predict delivery demand for a particular area; evaluated on a simulated dataset with nine sub-regions, the prediction accuracy reached 74.81% on the test dataset. The authors of [5] proposed an MLR-based method to predict the daily demand of parcel logistics: delivery stations were first clustered by the Self-Organizing Map algorithm, and an MLR model was then developed for each cluster. Compared with the ARIMA and Random Forest (RF) algorithms, their method showed more accurate results in most of the clustered regions. Ebbesson investigated demand prediction methods including regression analysis, RF, and neural networks [3]. Huang et al. used a GM (1, 1) model and a BP neural network with two hidden layers to predict logistics demand in Guangdong province from 2000 to 2019; the BP neural network predicted better than the GM (1, 1) model [4].
In general, there are few studies on peak-period prediction for postal logistics; previous studies have addressed mid-to-long-term mail volume forecasting. However, short-term mail volume prediction is needed to make short-term plans that support the normal operation of logistics resources by quickly detecting trend changes in mail volume. Demand prediction for short-term periods is also important for Korea Post, which provides next-day parcel delivery service, unlike ordinary mail, which can be delivered within three days from the date of acceptance. Previously, about a month or two before the peak periods, each mail center or logistics center established short-term parcel volume forecasts and resource operation plans based on past experience, which resulted in insufficient logistics resources and staffing. Accurate peak-period prediction is one of the key factors in providing reliable services to the public.
Thus, this study considers the prediction of sharp changes in logistics services over a particular period in special holidays rather than the usual period. We have proposed a peak-period prediction method for parcel logistics to improve resource operation of sorting centers. The proposed method is established by a deep learning-based ensemble method, consisting of two Multilayer Perceptron (MLP) models combined by weights to improve performance. First, the postal parcel volume-based features are analyzed; next, several features are extracted by factors, including calendar, internal, and external. Second, the first MLP model is trained to predict the total parcel volume using the external features for a given period, while the second MLP model is developed using the internal features. The internal prediction model enhances the training dataset using the Variational Autoencoder (VAE) model to prevent performance degradation. In the end, the proposed ensemble model is constructed based on the combination of the internal and external MLP models with a weight regulation for peak-period prediction. The experimental study was conducted on parcel volume datasets from 25 mail centers in South Korea, and the proposed method shows the superiority in prediction performance compared with RF, Least Absolute Shrinkage and Selection Operator (LASSO), MLR, Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), and LSTM models.
The remaining part of the paper is organized as follows. Section 2 details the proposed ensemble method constructed by internal and external feature-based peak-period prediction models. The experimental study is described and discussed in Section 3. The conclusion of the presented study is in Section 4.

2. Proposed Method

The main purpose of this study is to predict the extreme volume changes occurring in peak periods. The proposed method consists of two main phases. In phase 1, features describing the patterns of parcel volume are constructed, derived from internal and external factors. The external features, based on bulk and contract mailing, account for uncertainty arising from non-internal factors. The generated features are used as explanatory variables of the proposed prediction model. In phase 2, an MLP-based weighted ensemble method is developed. The approach is designed to improve on a single predictive model by combining, with weights, two models based on the internal and external features. The two MLP models are built on differently prepared training datasets; the internal feature-based training dataset is enriched with synthetic samples generated by unsupervised learning to prevent performance degradation. The final ensemble model combines the internal and external MLP models by a weighted average for peak-period prediction.

2.1. Feature Engineering

Data that repeat in units of time exhibit statistical self-similarity. Generally, the postal volume pattern can be analyzed using characteristics repeated in units of time and calendar factors, such as day, weekend, holiday, and interval from holidays, to explore this statistical similarity.
In this study, the features of parcel volume patterns are categorized into internal and external factor-based derived features, as described in Figure 1 [14]. The internal features consist of calendar factors and volume similarity. From calendar and seasonal factors, we generate features based on weekday, public holidays, and intervals around holidays. The interval features include the day before a holiday, the day after a holiday, holidays that overlap with workdays, and the n-th days before and after weekends or holidays interspersed with workdays, which indicate the increasing volume trend near those periods. The volume factor-based derived features contain the past and recent volumes in normal and peak periods. These numerical features are extracted from internal historical data, including moving averages over the n-th previous weeks of a special period and mail volume relative to usual periods from the n-th week earlier. Moreover, external factor-based features are created from bulk mailing and contract volume. Bulk mailing volume tends to have considerable variance depending on contract customers' business situations, so we extract explanatory features based on large-volume mailing companies to enhance prediction performance. In the proposed weighted ensemble method, the generated internal volume-based features and external bulk mailing-based features are employed as input variables for the predictive models.
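The calendar and moving-average features described above can be sketched as follows. This is an illustrative sketch only: the feature names, the holiday-lookup representation, and the window sizes are assumptions, not the exact feature set used in the paper.

```python
from datetime import date, timedelta

def calendar_features(d, holidays):
    # Hypothetical calendar-factor features; `holidays` is a set of date objects.
    return {
        "weekday": d.weekday(),                              # 0 = Monday
        "is_holiday": d in holidays,
        "day_before_holiday": (d + timedelta(days=1)) in holidays,
        "day_after_holiday": (d - timedelta(days=1)) in holidays,
    }

def moving_avg_weeks(daily_volumes, n_weeks):
    # Mean daily volume over the previous n weeks
    # (daily_volumes is ordered oldest -> newest).
    window = daily_volumes[-7 * n_weeks:]
    return sum(window) / len(window)
```

Volume-similarity features such as "mail volume compared to usual periods" would then be ratios of such moving averages for peak and normal windows.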

2.2. MLP-Based Weighted Ensemble Method

We propose an MLP-based demand prediction method for parcel logistics services. The proposed ensemble approach is designed to improve on a single predictive model by combining, with weight regulation, two models based on the internal and external features, as shown in Figure 2. The two models are built on differently prepared training datasets, Training Dataset-1 and Training Dataset-2, and the final prediction result is estimated by averaging the outcome of each model according to the models' weights.

2.2.1. Construction of Training Datasets

Our proposed method predicts the parcel volume during special periods of sharp changes in logistics services. It constructs two MLP models based on the generated internal and external features, learned from datasets prepared differently from the initial mail center datasets. The first predictive model (the external feature-based prediction model in Figure 2) predicts the total parcel volume during special periods. Therefore, the first training dataset is produced by grouping daily information into summary rows belonging to each holiday. Training Dataset-1 consists of the external features based on large-volume mailing and contract customer data, and its target variable is the sum of parcel volume during the peak period of the particular holiday.
For the second training dataset, instead of using the initial daily dataset directly, we prepare an enriched training dataset named Training Dataset-2 by adding 500 synthetic mailing records generated by the VAE model. The VAE is not applied to the first training dataset: because that dataset is a compressed, summarized representation of the daily data, synthetic samples drawn from it may not properly represent the original data distribution. Therefore, the VAE model is applied only to the construction of the second training dataset.
The VAE is a neural network introduced first by [15]. It is mainly used to generate synthetic data. For example, in [16], the VAE was used to generate synthetic electronic health records. The effectiveness of the VAE was confirmed by the comparison of LSTM models trained on the synthetic and actual datasets. The authors of [17] proposed a coronary heart disease risk prediction method based on neural networks. They improved the prediction performance by augmenting rare instances with the VAE-based synthetic data. In [18], the VAE was used for image data generation and evaluated on the MNIST dataset. The VAE-based approach outperformed other traditional data generation methods, such as Synthetic Minority Over-sampling Technique (SMOTE) and Adaptive Synthetic (ADASYN).
Figure 3 presents an architecture of the VAE model used in this study. Generally, the architecture of the VAE involves encoder and decoder parts. The encoder part encodes input data as a latent distribution in a lower-dimensional space and learns to return the mean and variance for the normal distribution of data. Then, the random point sampled from that distribution is decoded, and an error between decoded data and the initial data is calculated to adjust model weights. To generate synthetic data from the trained VAE, first, a random point z is estimated by Equation (1):
z = μ + σ × ε
where ε is randomly sampled from the standard normal distribution, and μ and σ are the mean and standard deviations of the latent distribution. The sampled point z is decoded to obtain new data.
Input and output layers of the VAE model in the proposed method consist of 30 nodes that represent input features for an internal factors-based model and its target variable. Each hidden layer with 15 nodes uses the ReLU activation function described in Equation (2):
ReLU(x) = max(0, x) = { x, if x > 0; 0, if x ≤ 0 }
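The sampling step of Equation (1) can be sketched in a few lines, assuming the encoder's mean and standard deviation have already been learned; the 15-dimensional latent size is a hypothetical choice (matching the hidden-layer width above), and the decoder step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, sigma, n):
    # Equation (1): z = mu + sigma * eps, with eps drawn from N(0, I).
    eps = rng.standard_normal((n, mu.shape[0]))
    return mu + sigma * eps

# Hypothetical latent statistics of a trained encoder (15-dim latent space).
mu, sigma = np.zeros(15), np.ones(15)
z = sample_latent(mu, sigma, 500)   # 500 synthetic latent points, as in Section 2.2.1
# Each row of z would then be passed through the trained decoder
# to produce one synthetic mailing record.
```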

2.2.2. Ensembling MLP Models

The proposed ensemble method is built using the generated internal and external features and the artificial neural network (ANN) technique; the ANN implemented in this study is MLP. The neural network was first introduced by Warren McCullough and Walter Pitts in 1943 [19] and successfully used in many fields, such as natural language processing [20], image processing [21], recommendation systems [22], and so on.
An MLP involves input, hidden, and output layers. Neurons of the input layer represent input variables, and neurons of the hidden and output layers receive the weighted summation of the neurons in the previous layer and transform it using activation functions. The network is trained by adjusting the weight of each connection to minimize the difference between the target value and the predicted output.
In this study, we propose the weighted ensemble method by constructing two MLP models based on the extracted internal and external features. EF-MLP is built on the large volume of mailing and contract customer data-based external features to predict the total volume of parcel services during peak periods. IF-MLP is trained on calendar and internal volume-derived features. The structure of EF-MLP consists of two hidden layers with eight and two neurons, while two hidden layers of IF-MLP have 58 and 29 neurons. The ReLU activation function is used in all hidden layers. The results of each model are combined using Equation (3):
y_i^ens = (1 − α) × y_i^ef + α × y_i^if
where y_i^ens is the ensembled predicted value; y_i^ef is the predicted value of the EF-MLP; y_i^if is the predicted value of the IF-MLP; α (0 ≤ α ≤ 1) is the weight for the IF-MLP; and (1 − α) is the weight for the EF-MLP.
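Equation (3) amounts to a convex combination of the two model outputs, which can be written directly:

```python
def weighted_ensemble(y_ef, y_if, alpha):
    # Equation (3): (1 - alpha) weights the EF-MLP output,
    # alpha weights the IF-MLP output.
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1 - alpha) * y_ef + alpha * y_if
```

For example, with alpha = 0.25, an EF-MLP prediction of 100 and an IF-MLP prediction of 200 combine to 125.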
An example of the prediction process of the proposed ensemble method is demonstrated in Figure 4.
In the prediction procedure, the predictive results of the EF-MLP and IF-MLP models are given as input values of the ensemble model for peak-period prediction. For the final prediction, the outputs of the EF-MLP and IF-MLP models are combined according to the weight value. The proposed weighted ensemble method with internal and external factor-based derived features can prevent the prediction output from being inappropriately biased and compensates for the weaknesses of the individual EF-MLP and IF-MLP models.

3. Experimental Study

The proposed method is validated on 25 Korean mail center datasets. We have compared the proposed method with other predictive methods widely used in previous studies. Moreover, we compared several MLP models that were trained on the differently prepared datasets to demonstrate how the proposed method can enhance the performance of the compared MLP models. In addition, we have experimented by replacing the MLP models in the proposed method with other compared prediction models to show that algorithms used in the proposed method work well together. The prediction performance is evaluated using MAE, RMSE, MAPE and SMAPE.

3.1. Experimental Dataset

The compared and proposed methods are validated on postal volume data from Korea Post. The domestic mail service in South Korea is generally classified into ordinary mail service and parcel mail service. For ordinary mail, Korea Post, which is a government agency responsible for providing postal services, handles letters. For parcel mail, Korea Post engages with several logistics companies to provide stable services. The postal logistics process generally consists of four stages: acceptance, sorting, transportation, and delivery. The mail and logistics centers of Korea Post sort parcel and ordinary mail, which are then transported through the exchange center or directly to the inbound mail centers of the respective destinations. Twenty-four mail centers are evenly distributed across the country, and one exchange center is located at the center of the nationwide network.
This study used parcel mail datasets of 25 mail sorting and logistics centers, including the exchange center. Datasets were collected from 1 September 2015 to 6 October 2020. Figure 5 shows the trend of the postal parcel volume from January to December 2019 at Mail Center #3 as an example. The total volume of parcel delivery services rose dramatically during the Lunar New Year and Korean Thanksgiving Day, as indicated by the shaded areas of Figure 5.
The peak period is determined to be from the Monday two weeks before the start of a holiday until two working days after the last day of the holiday period. For example, assuming that the Lunar New Year 2020 is between January 24 and 27, its special period can be defined to be from January 6 to January 29. Datasets of special periods in 25 mail centers are summarized in Table 1.
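The peak-period rule above can be computed directly. This sketch counts working days as Monday to Friday and, as a simplifying assumption, ignores any public holidays falling within the two trailing working days:

```python
from datetime import date, timedelta

def peak_period(holiday_start, holiday_end):
    # Start: the Monday two weeks before the week containing the holiday start.
    monday_of_week = holiday_start - timedelta(days=holiday_start.weekday())
    start = monday_of_week - timedelta(weeks=2)
    # End: two working days (Mon-Fri) after the last day of the holiday.
    end, working_days = holiday_end, 0
    while working_days < 2:
        end += timedelta(days=1)
        if end.weekday() < 5:
            working_days += 1
    return start, end
```

For the Lunar New Year 2020 example (24 to 27 January), this yields the special period from 6 January to 29 January, matching the text.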

3.2. Evaluation Metrics

To evaluate the prediction performance of the compared models, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Symmetric Mean Absolute Percentage Error (SMAPE), and Mean Absolute Percentage Error (MAPE) metrics are used.
MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|
RMSE = √[ (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ]
SMAPE = (100/n) Σ_{i=1}^{n} |y_i − ŷ_i| / ((y_i + ŷ_i)/2)
MAPE = (100/n) Σ_{i=1}^{n} |(ŷ_i − y_i)/y_i|
where n is the number of samples, ŷ_i is the i-th predicted value, and y_i is the i-th actual value.
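The four metrics can be implemented directly in NumPy (a sketch; the paper's exact evaluation code is not given):

```python
import numpy as np

def mae(y, y_hat):
    # Mean Absolute Error
    return float(np.mean(np.abs(np.asarray(y_hat) - np.asarray(y))))

def rmse(y, y_hat):
    # Root Mean Squared Error
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def smape(y, y_hat):
    # Symmetric Mean Absolute Percentage Error, in percent
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat) / ((y + y_hat) / 2)) * 100)

def mape(y, y_hat):
    # Mean Absolute Percentage Error, in percent
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs((y_hat - y) / y)) * 100)
```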

3.3. Compared Methods

We have compared the proposed method with other predictive methods using the Scikit-learn Python library, which contains implementations of machine learning and statistical modeling for classification, regression, and clustering [23]. The performance comparison was conducted with the commonly used techniques in previous studies.
Multiple Linear Regression (MLR) is commonly used for regression tasks [24]. It estimates coefficients that explain the correlation between the response and explanatory variables.
Least Absolute Shrinkage and Selection Operator (LASSO) is one kind of regularized linear regression. It minimizes the prediction error by shrinking the coefficients of some input variables that are irrelevant to the prediction task to zero [25].
Support Vector Regression (SVR) is a type of Support Vector Machine [26] used as a regression method. SVR works by using the ε-tube that best approximates the continuous-value function, balancing model complexity and prediction error [27]. We built an SVR model with the regularization parameter C set to 0.1 and a linear kernel function.
Extreme Gradient Boosting (XGBoost) is an ensemble algorithm based on a decision tree and is formed on a gradient-boosting framework. It consists of multiple decision trees based on different subsets of features and combines their predictions to generate a final prediction. Each decision tree is built on the errors of the previous trees [28]. We have configured the learning rate to 0.3; the booster is gradient-boosting linear (gblinear), and the number of boosting iterations is 100.
Random Forest (RF) is widely used in classification and regression tasks. It builds several decision trees on dissimilar sub-samples from the training dataset. In the case of regression, the final result is generated by their average [29]. We set the number of trees to 500.
Long Short-Term Memory (LSTM) is one kind of Recurrent Neural Network (RNN) used in sequence processing [30]. It addresses the inability of a plain RNN to predict accurately from long-term information by extending the memory cells with input, forget, and output gates. These gates add useful information to the cell (input gate), remove unnecessary information from the cell (forget gate), and extract applicable information from the current cell (output gate). The LSTM model used in the experimental study had a single hidden layer with 16 neurons and was trained with the Adam optimizer [31], a batch size of 16, 100 epochs, and a learning rate of 0.001.
Multilayer Perceptron (MLP) constructs multiple hidden layers between the input and output layers [19]. Each layer consists of artificial neurons connected to the following layer's neurons by weight parameters, which are optimized by the back-propagation algorithm. The MLP was trained with the Adam optimizer, a batch size of 16, 100 epochs, and a learning rate of 0.001.

3.4. Prediction Results

We compared seven prediction models, namely MLR, LASSO, SVR, RF, XGBoost, LSTM, and MLP, with the proposed method on 25 mail center datasets to predict postal parcel volume during special periods. The special periods for the training dataset ran from Korean Thanksgiving Day 2016 to the target peak period. For example, holidays from Korean Thanksgiving Day 2016 to the Lunar New Year 2018 were used to train the model that predicts Korean Thanksgiving Day 2018. Figure 6 shows a visualization of the train-test split.
First, we compared several MLP models trained on differently prepared training datasets to demonstrate how the proposed method enhances the performance of the compared MLP models. Table 2 shows the prediction performance of the compared MLP models for each mail center, with the best results emphasized. The results show that the proposed weighted ensemble of the external and internal MLP models outperforms the other MLP models on 13 of the 25 mail center datasets.
From Table 2, the EF-MLP model performed better than the baseline MLP model using the initial dataset on most of the mail center datasets, and its average MAPE was less than the baseline MLP model by 4.98%. Moreover, the MAPE of the IF-MLP model was decreased by enriching the training dataset using synthetic data generated from the VAE model, and its average MAPE dropped to 10.77% compared with the MAPE of the baseline MLP model of 14.43%. Moreover, the average MAE, RMSE, SMAPE, and MAPE of the proposed model ensembling EF-MLP and IF-MLP models with the weight were less than the baseline MLP model by 25,917.1, 35,215.1, 4.87%, and 4.98%, respectively. Finally, the proposed method was superior to the baseline MLP model in giving the minimum errors. For each mail center, we selected the different weights for EF-MLP and IF-MLP based on the training performance of the proposed method.
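The per-center weight selection just described could be implemented as a simple grid search over α on held-out training performance. This is an illustrative sketch only; the paper does not specify its exact selection procedure, and MAPE is assumed here as the selection criterion.

```python
import numpy as np

def select_alpha(y_true, y_ef, y_if, grid=None):
    # Pick the ensemble weight alpha minimizing MAPE on a validation set.
    if grid is None:
        grid = np.linspace(0.0, 1.0, 21)   # alpha candidates 0.00, 0.05, ..., 1.00
    best_alpha, best_err = 0.0, float("inf")
    for alpha in grid:
        y_ens = (1 - alpha) * y_ef + alpha * y_if
        err = np.mean(np.abs((y_ens - y_true) / y_true)) * 100
        if err < best_err:
            best_alpha, best_err = float(alpha), float(err)
    return best_alpha
```

If the IF-MLP predictions happen to match the validation targets exactly while the EF-MLP overshoots, the search assigns all weight to the IF-MLP (α = 1).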
Table 3 compares the proposed method with the individual predictive methods. Among the 25 mail center datasets, ten centers have a coefficient of variation higher than the average; their names are bolded in Table 3, and the performance of the proposed method tends to be better for these mail centers with large fluctuations. As shown by the MAPE, SMAPE, RMSE, and MAE results in Table 3, the proposed weighted ensemble method performed better than the individual compared models for most mail centers. It reduced MAPE and SMAPE relative to the MLR-, LASSO-, SVR-, XGBoost-, RF-, and LSTM-based methods by 3.9%, 3.9%, 4.5%, 4.2%, 4.3%, 3.2% and 4.1%, 3.8%, 4.2%, 3.7%, 4.2%, 3.4%, respectively. For MAE and RMSE, the proposed ensemble method outperformed the compared methods with an average MAE of 83,012.5 and RMSE of 147,883.2; its MAE is less than the compared methods by 20,158.7, 17,994.5, 21,858.7, 18,269.8, 18,887.3, and 15,087.2, and its RMSE is lower by 29,240.3, 26,028.4, 29,411.1, 31,937.6, 17,351.2, 20,666.6, and 42,155.1, respectively.
Figure 7 presents the average performance of the compared methods on the 25 mail center datasets. Compared with ensemble variants constructed from the internal and external feature-based baseline algorithms, the proposed ensemble model was more accurate in MAPE by 25.2% to 34.5%. For MAE, our peak-period prediction scheme improved accuracy by 15.4% to 23.8% over the other methods. For SMAPE and RMSE, the proposed ensemble method outperformed the compared methods by reducing SMAPE by 24.0% to 30.8% and RMSE by 10.5% to 22.2%, respectively.
Figure 8 shows the prediction performance of the proposed ensemble method using the VAE-enriched dataset. We also experimented with replacing the MLP models in the proposed method with the other compared prediction algorithms to show that the algorithms used in the proposed method work well together. From Figure 8, the VAE-based data enrichment improved the average MAPE of all versions of the weighted ensemble method. Moreover, the proposed MLP-based weighted ensemble learned from the enriched training dataset outperformed all weighted variants based on MLR, LASSO, SVR, XGBoost, RF, and LSTM. In particular, our peak-period prediction scheme performed better than the other predictive models by 21.9% on average, as shown in Figure 8.
As a result of the experimental study on Korean postal parcel datasets, the proposed method improved the prediction performance in terms of the MAPE reduction rate up to 59.6% compared to other methods during peak periods, such as the Lunar New Year and Korean Thanksgiving Day, when demand for logistics services increases sharply.

4. Conclusions

It is important to predict peak-period demand accurately to optimize the resource and operation in logistics industries. Korea Post, the national postal service provider, needs to predict short-term changes in parcel volume in order to optimize its operations, especially during periods of sharp changes in parcel volume.
This study proposed a prediction method for logistics service demand during special holiday periods using deep learning models. The proposed method improves prediction performance in two steps. In the first step, the training dataset was enriched with synthetic data generated by the VAE model; this decreased the average MAPE of the baseline MLP model, learned from the daily training datasets of 25 mail centers, by 3.7% (see the results of the baseline MLP and IF-MLP in Table 2). In the second step, the proposed method combined two MLP models: one trained on the enriched daily training dataset using calendar and internal volume-derived features, and one trained on the dataset using external features of bulk mailing volume and contract customer data. The final prediction result was estimated as the weighted average of the outputs of these EF-MLP and IF-MLP models, which reduced the MAPE of the baseline MLP model by 5.0%.
The experimental results showed how the proposed method improved the prediction performance step by step and compared the forecasting results of the proposed method with machine learning-based models on 25 mail center datasets. The proposed method outperformed the compared models on most datasets and achieved a performance improvement of up to 59.6%. The experimental results confirm that the proposed weighted ensemble model is acceptable for peak-period prediction, and it is highly possible to expand the range of its applications.

Author Contributions

Conceptualization, E.K.; methodology, E.K. and T.A.; software, E.K. and T.A.; validation, E.K., T.A. and H.J.; resources, E.K.; data curation, E.K. and T.A.; writing original draft preparation, E.K. and T.A.; supervision, E.K. and H.J.; funding acquisition, E.K. and H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-01664, Development of Logistics Infrastructure Technology for Postal Service).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Feature-set by internal and external factors.
Figure 2. Overall architecture of the proposed method. Dashed lines indicate prediction process; solid lines indicate training procedure.
Figure 3. Architecture of the VAE model.
Figure 4. Example of the prediction procedure based on two MLP models; APE: Absolute Percentage Error.
Figure 5. Example of postal parcel data of a mail center.
Figure 6. Train-test split; LNY: Lunar New Year, KTD: Korean Thanksgiving Day.
Figure 7. Average performance of compared methods on 25 mail center datasets.
Figure 8. Comparison of MAPE results of weighted ensemble methods based on initial and VAE-based enriched datasets.
Table 1. Summary of datasets of peak periods in 25 mail centers.
Mail Center | Q1 | Q2 | Q3 | Q4 | Max | Avg | Stdev | 95% CI | CV
Mail Center #1 | 52,842.5 | 120,541.0 | 149,959.0 | 225,370.0 | 225,370 | 104,943.7 | 63,352.7 | 95,096.4–114,790.9 | 0.60
Mail Center #2 | 2180.0 | 20,477.0 | 29,992.0 | 79,048.0 | 79,048 | 23,705.5 | 17,073.4 | 76,377.3–81,718.7 | 0.72
Mail Center #3 | 9889.0 | 28,064.0 | 38,370.0 | 77,927.0 | 77,927 | 26,459.0 | 16,882.7 | 23,814.6–29,103.4 | 0.64
Mail Center #4 | 27,063.0 | 47,586.0 | 58,418.0 | 139,060.0 | 139,060 | 45,200.8 | 26,463.4 | 40,973.9–49,427.7 | 0.59
Mail Center #5 | 24,075.5 | 41,184.5 | 51,461.3 | 94,285.0 | 94,285 | 37,884.1 | 22,435.9 | 34,288.6–41,479.6 | 0.59
Mail Center #6 | 6267.3 | 10,545.0 | 15,723.5 | 29,372.0 | 29,372 | 10,982.2 | 6737.3 | 8405.9–13,558.5 | 0.61
Mail Center #7 | 25,117.5 | 64,168.0 | 84,739.5 | 135,408.0 | 135,408 | 57,270.0 | 33,229.3 | 52,297.9–62,242.1 | 0.58
Mail Center #8 | 10,982.0 | 24,748.0 | 34,520.0 | 71,813.0 | 71,813 | 23,914.4 | 15,760.3 | 19,387.5–28,441.3 | 0.66
Mail Center #9 | 27,461.5 | 58,678.0 | 75,437.5 | 121,475.0 | 121,475 | 51,902.1 | 30,383.5 | 47,127.7–56,676.5 | 0.59
Mail Center #10 | 25,940.0 | 45,000.0 | 58,127.5 | 94,897.0 | 94,897 | 41,943.5 | 24,780.3 | 38,011.5–45,875.5 | 0.59
Mail Center #11 | 37,859.0 | 70,448.0 | 85,492.8 | 138,841.0 | 138,841 | 63,427.3 | 35,386.0 | 53,950.9–72,903.7 | 0.56
Mail Center #12 | 40,217.0 | 89,316.0 | 119,860.8 | 237,143.0 | 237,143 | 82,019.7 | 51,102.6 | 73,989.6–90,049.8 | 0.62
Mail Center #13 | 42,016.8 | 69,176.5 | 95,636.8 | 156,706.0 | 156,706 | 67,975.5 | 51,102.6 | 61,594.2–74,356.8 | 0.75
Mail Center #14 | 16,935.5 | 30,181.5 | 46,781.3 | 74,552.0 | 74,552 | 31,479.3 | 20,162.7 | 28,226.2–34,732.4 | 0.64
Mail Center #15 | 12,599.8 | 19,316.0 | 28,208.0 | 48,345.0 | 48,345 | 19,804.8 | 11,208.1 | 17,945.4–21,664.2 | 0.57
Mail Center #16 | 10,828.0 | 41,718.0 | 62,952.0 | 117,847.0 | 117,847 | 40,518.9 | 28,092.7 | 36,146.4–44,891.4 | 0.69
Mail Center #17 | 24,916.3 | 68,523.0 | 98,665.5 | 176,951.0 | 176,951 | 67,183.7 | 42,668.3 | 60,521.6–73,845.8 | 0.64
Mail Center #18 | 10,243.3 | 15,970.5 | 22,504.8 | 48,978.0 | 48,978 | 16,187.6 | 10,150.5 | 17,945.4–21,664.2 | 0.63
Mail Center #19 | 8630.0 | 18,777.0 | 33,069.0 | 70,283.0 | 70,283 | 21,526.0 | 15,927.7 | 18,911.5–24,140.5 | 0.74
Mail Center #20 | 11,396.8 | 21,525.0 | 29,665.8 | 54,976.0 | 54,976 | 20,651.3 | 12,261.1 | 18,699.4–22,603.2 | 0.59
Mail Center #21 | 11,008.0 | 20,151.0 | 30,727.0 | 69,885.0 | 69,885 | 21,959.8 | 15,357.4 | 19,473.6–24,446.0 | 0.70
Mail Center #22 | 25,305.0 | 70,339.0 | 85,875.0 | 151,727.0 | 151,727 | 62,182.4 | 38,329.0 | 19,473.6–24,446.0 | 0.62
Mail Center #23 | 13,141.5 | 24,884.5 | 46,106.0 | 89,609.0 | 89,609 | 29,871.5 | 22,499.3 | 26,265.8–33,477.2 | 0.75
Mail Center #24 | 7191.8 | 14,144.0 | 20,541.0 | 43,956.0 | 43,956 | 14,001.7 | 9061.1 | 12,529.8–15,473.6 | 0.65
Mail Center #25 | 40,091.5 | 91,905.0 | 147,942.5 | 222,904.0 | 222,904 | 93,499.4 | 62,742.8 | 83,794.9–103,203.9 | 0.67
Q1–Q4: from the first quartile to the fourth quartile; Max: maximum value; Avg: average value; Stdev: standard deviation; CI: confidence interval; CV: coefficient of variation.
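The statistics in Table 1 can be reproduced for any mail center's series of daily accepted-parcel volumes. A minimal sketch follows; the 95% CI uses a normal approximation for the mean (z = 1.96), which is an assumption, since the paper does not state its CI convention.

```python
import numpy as np

def peak_period_summary(volumes):
    """Summary statistics in the style of Table 1 for one mail center."""
    v = np.asarray(volumes, dtype=float)
    q1, q2, q3 = np.percentile(v, [25, 50, 75])
    avg = v.mean()
    std = v.std(ddof=1)                      # sample standard deviation
    half = 1.96 * std / np.sqrt(len(v))      # normal-approximation 95% CI
    return {
        "Q1": q1, "Q2": q2, "Q3": q3,
        "Q4": v.max(), "Max": v.max(),       # fourth quartile = maximum
        "Avg": avg, "Stdev": std,
        "95% CI": (avg - half, avg + half),
        "CV": std / avg,                     # coefficient of variation
    }
```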
Table 2. Performance of MLP models by the proposed method on the 25 mail center datasets.
Mail Center | Metric | MLP (Baseline) | EF-MLP | IF-MLP | Proposed
#1 | MAPE / MAE | 8.245 / 166,430.0 | 9.237 / 201,634.5 | 5.923 / 128,320.7 | 6.076 / 131,779.3
#1 | SMAPE / RMSE | 8.679 / 187,880.7 | 10.055 / 274,179.0 | 6.211 / 166,883.6 | 6.386 / 172,273.1
#2 | MAPE / MAE | 24.860 / 129,159.6 | 23.368 / 121,265.9 | 25.401 / 132,437.6 | 24.108 / 269,291.7
#2 | SMAPE / RMSE | 63.978 / 454,463.6 | 72.296 / 432,934.8 | 63.379 / 421,951.1 | 62.094 / 394,716.5
#3 | MAPE / MAE | 12.122 / 55,555.2 | 10.705 / 53,918.7 | 7.528 / 34,675.2 | 10.090 / 50,429.3
#3 | SMAPE / RMSE | 11.689 / 58,175.8 | 11.085 / 68,097.4 | 7.262 / 45,449.9 | 10.348 / 62,254.3
#4 | MAPE / MAE | 10.864 / 84,690.7 | 11.847 / 77,226.4 | 7.503 / 64,445.3 | 8.931 / 56,224.0
#4 | SMAPE / RMSE | 10.838 / 98,681.0 | 10.530 / 110,427.5 | 7.844 / 88,154.4 | 8.060 / 85,958.8
#5 | MAPE / MAE | 12.212 / 85,184.9 | 8.209 / 60,226.8 | 8.672 / 58,807.0 | 7.868 / 54,813.0
#5 | SMAPE / RMSE | 13.121 / 96,738.0 | 8.865 / 81,912.7 | 9.201 / 67,044.5 | 8.287 / 60,962.6
#6 | MAPE / MAE | 13.931 / 26,250.5 | 8.827 / 16,598.3 | 8.060 / 14,978.9 | 3.225 / 6180.1
#6 | SMAPE / RMSE | 15.186 / 28,240.1 | 8.571 / 20,286.1 | 8.683 / 19,855.1 | 3.287 / 6711.9
#7 | MAPE / MAE | 15.890 / 184,636.9 | 7.939 / 110,440.1 | 12.971 / 157,518.5 | 7.661 / 101,561.9
#7 | SMAPE / RMSE | 15.703 / 203,238.7 | 8.842 / 189,087.5 | 12.821 / 185,911.8 | 8.362 / 167,814.1
#8 | MAPE / MAE | 12.959 / 53,347.1 | 9.084 / 39,366.6 | 15.951 / 63,867.6 | 8.424 / 36,017.7
#8 | SMAPE / RMSE | 12.982 / 65,448.9 | 9.669 / 47,857.7 | 14.375 / 76,470.0 | 8.795 / 42,160.4
#9 | MAPE / MAE | 8.944 / 88,769.8 | 12.547 / 115,290.7 | 6.324 / 63,180.8 | 11.263 / 104,454.8
#9 | SMAPE / RMSE | 9.109 / 94,215.5 | 12.226 / 145,732.4 | 6.763 / 89,717.9 | 11.098 / 122,761.0
#10 | MAPE / MAE | 8.126 / 66,934.4 | 12.113 / 106,297.9 | 2.663 / 22,661.2 | 7.382 / 64,424.7
#10 | SMAPE / RMSE | 8.322 / 70,176.0 | 13.241 / 128,728.4 | 2.738 / 34,506.1 | 7.788 / 78,741.3
#11 | MAPE / MAE | 8.173 / 93,854.9 | 3.635 / 41,706.9 | 5.980 / 66,280.8 | 3.171 / 35,222.8
#11 | SMAPE / RMSE | 7.895 / 109,198.8 | 3.609 / 48,269.8 | 5.919 / 73,963.2 | 3.105 / 48,512.5
#12 | MAPE / MAE | 12.548 / 213,895.2 | 15.592 / 291,748.6 | 12.500 / 216,383.2 | 13.304 / 231,382.9
#12 | SMAPE / RMSE | 13.530 / 239,177.6 | 17.471 / 353,703.2 | 13.598 / 262,091.1 | 14.478 / 271,213.8
#13 | MAPE / MAE | 11.340 / 132,817.0 | 9.589 / 120,317.4 | 10.735 / 134,590.5 | 8.539 / 106,148.2
#13 | SMAPE / RMSE | 10.696 / 182,117.9 | 9.891 / 144,527.3 | 10.522 / 155,959.6 | 8.692 / 122,585.5
#14 | MAPE / MAE | 11.617 / 68,646.8 | 7.010 / 44,892.1 | 10.772 / 64,262.8 | 6.926 / 44,347.8
#14 | SMAPE / RMSE | 12.077 / 86,639.6 | 7.668 / 77,396.3 | 15.118 / 89,138.0 | 7.580 / 76,730.7
#15 | MAPE / MAE | 9.665 / 30,503.0 | 4.042 / 12,698.1 | 7.798 / 24,605.1 | 3.733 / 11,624.5
#15 | SMAPE / RMSE | 9.519 / 33,665.8 | 3.965 / 13,690.6 | 7.868 / 27,276.8 | 3.669 / 12,610.5
#16 | MAPE / MAE | 8.135 / 63,139.3 | 6.866 / 52,967.8 | 4.185 / 32,533.2 | 3.398 / 26,809.2
#16 | SMAPE / RMSE | 7.801 / 72,776.7 | 6.964 / 61,500.6 | 4.359 / 43,777.2 | 3.478 / 31,514.6
#17 | MAPE / MAE | 18.482 / 210,939.7 | 10.730 / 113,781.4 | 11.529 / 126,172.2 | 10.083 / 105,712.6
#17 | SMAPE / RMSE | 18.849 / 232,308.3 | 9.736 / 150,959.2 | 10.877 / 156,001.1 | 9.175 / 145,372.9
#18 | MAPE / MAE | 29.400 / 74,301.1 | 16.129 / 39,864.5 | 14.355 / 36,356.8 | 11.235 / 27,723.7
#18 | SMAPE / RMSE | 24.262 / 96,851.9 | 13.851 / 57,401.9 | 14.867 / 41,707.7 | 10.197 / 37,596.3
#19 | MAPE / MAE | 15.222 / 52,095.2 | 4.648 / 15,539.0 | 11.379 / 38,471.7 | 4.545 / 15,262.0
#19 | SMAPE / RMSE | 13.964 / 74,066.8 | 4.461 / 22,703.4 | 10.739 / 48,963.5 | 4.319 / 24,065.9
#20 | MAPE / MAE | 6.375 / 23,599.9 | 8.152 / 34,187.4 | 5.291 / 19,475.2 | 5.249 / 19,756.7
#20 | SMAPE / RMSE | 6.319 / 33,965.6 | 8.957 / 53,419.3 | 5.198 / 25,839.9 | 5.202 / 23,838.8
#21 | MAPE / MAE | 16.660 / 54,886.0 | 49.398 / 169,356.8 | 11.771 / 39,135.7 | 13.203 / 44,925.5
#21 | SMAPE / RMSE | 16.030 / 61,489.4 | 35.081 / 246,618.6 | 11.418 / 49,462.3 | 12.227 / 52,351.9
#22 | MAPE / MAE | 14.900 / 183,708.7 | 9.093 / 106,237.1 | 11.342 / 133,792.7 | 10.739 / 126,154.0
#22 | SMAPE / RMSE | 16.069 / 221,819.3 | 8.774 / 130,242.2 | 11.524 / 136,342.5 | 10.790 / 130,241.2
#23 | MAPE / MAE | 29.869 / 146,433.4 | 21.777 / 99,925.7 | 17.243 / 77,933.3 | 15.276 / 69,954.8
#23 | SMAPE / RMSE | 30.531 / 179,673.1 | 20.077 / 119,480.9 | 15.575 / 94,092.7 | 13.882 / 91,474.5
#24 | MAPE / MAE | 12.186 / 30,362.3 | 17.649 / 47,736.3 | 10.314 / 27,247.0 | 10.688 / 28,141.9
#24 | SMAPE / RMSE | 13.265 / 34,953.7 | 20.052 / 56,824.1 | 10.893 / 30,520.6 | 11.518 / 32,974.8
#25 | MAPE / MAE | 28.113 / 403,099.3 | 9.804 / 145,090.0 | 23.176 / 333,733.9 | 21.303 / 306,970.0
#25 | SMAPE / RMSE | 22.685 / 539,578.5 | 9.232 / 186,659.0 | 19.928 / 407,434.7 | 18.459 / 379,725.5
Table 3. Comparison of MAPE, SMAPE, RMSE and MAE results.
Mail Center | Metric | MLR | LASSO | SVR | XGBoost | RF | LSTM | Proposed
#1 | MAPE / MAE | 8.700 / 176,927.2 | 8.800 / 183,267.9 | 8.300 / 168,968.4 | 7.000 / 143,238.6 | 5.960 / 120,928.2 | 9.581 / 198,052.7 | 8.200 / 166,430.0
#1 | SMAPE / RMSE | 9.145 / 196,286.1 | 9.189 / 197,004.6 | 8.583 / 185,988.8 | 7.259 / 165,407.9 | 6.064 / 129,216.5 | 10.096 / 216,752.1 | 8.679 / 187,880.7
#2 | MAPE / MAE | 25.567 / 133,309.2 | 23.104 / 120,387.0 | 24.855 / 125,101.1 | 21.061 / 109,000.7 | 29.904 / 154,773.0 | 23.854 / 120,888.3 | 24.860 / 129,159.6
#2 | SMAPE / RMSE | 63.978 / 454,463.6 | 61.039 / 436,652.9 | 62.794 / 404,836.7 | 58.927 / 432,818.1 | 68.399 / 338,071.0 | 62.385 / 457,488.2 | 63.129 / 470,150.7
#3 | MAPE / MAE | 13.210 / 59,803.5 | 8.622 / 37,953.7 | 9.632 / 44,434.8 | 8.357 / 37,345.6 | 15.507 / 73,843.5 | 11.806 / 55,294.1 | 12.122 / 55,555.2
#3 | SMAPE / RMSE | 12.248 / 70,960.9 | 7.859 / 58,173.2 | 9.507 / 54,156.0 | 7.701 / 53,794.7 | 15.200 / 85,141.4 | 11.572 / 60,555.8 | 11.689 / 58,175.8
#4 | MAPE / MAE | 8.718 / 55,153.8 | 4.745 / 35,652.1 | 13.292 / 96,957.7 | 6.313 / 46,961.0 | 8.498 / 61,704.6 | 11.704 / 86,176.1 | 10.864 / 84,690.7
#4 | SMAPE / RMSE | 8.321 / 71,366.8 | 4.780 / 38,827.4 | 14.301 / 110,275.6 | 6.456 / 53,920.5 | 8.428 / 64,974.4 | 12.633 / 98,993.6 | 10.838 / 98,681.0
#5 | MAPE / MAE | 9.194 / 65,580.1 | 11.130 / 80,354.2 | 12.606 / 88,449.8 | 11.616 / 84,341.6 | 11.833 / 81,396.0 | 10.640 / 71,789.6 | 12.212 / 85,184.9
#5 | SMAPE / RMSE | 9.783 / 77,081.5 | 11.384 / 84,720.1 | 12.808 / 93,646.0 | 11.523 / 87,613.7 | 12.706 / 90,280.7 | 11.585 / 86,332.0 | 13.121 / 96,738.0
#6 | MAPE / MAE | 8.670 / 16,416.9 | 10.357 / 19,485.0 | 16.509 / 31,091.2 | 11.469 / 21,543.6 | 7.841 / 14,572.3 | 14.307 / 27,009.9 | 13.931 / 26,250.5
#6 | SMAPE / RMSE | 9.083 / 16,707.0 | 11.063 / 21,319.8 | 18.438 / 34,405.9 | 12.406 / 24,283.3 | 8.501 / 20,282.9 | 15.657 / 29,265.2 | 15.186 / 28,240.1
#7 | MAPE / MAE | 15.614 / 194,105.6 | 15.764 / 193,784.9 | 14.380 / 172,578.1 | 14.595 / 178,457.5 | 17.170 / 206,847.0 | 16.436 / 200,644.4 | 15.890 / 184,636.9
#7 | SMAPE / RMSE | 16.107 / 230,523.0 | 16.147 / 223,426.7 | 14.253 / 201,953.4 | 14.725 / 203,967.5 | 17.928 / 239,378.5 | 16.463 / 237,349.5 | 15.703 / 203,238.7
#8 | MAPE / MAE | 12.150 / 47,857.8 | 14.685 / 58,537.6 | 14.172 / 58,134.2 | 16.385 / 66,464.3 | 9.754 / 39,491.3 | 8.216 / 33,513.9 | 12.959 / 53,347.1
#8 | SMAPE / RMSE | 11.680 / 57,534.7 | 13.697 / 67,064.5 | 12.934 / 69,725.4 | 14.565 / 82,339.2 | 9.778 / 45,368.2 | 8.014 / 38,377.6 | 12.982 / 65,448.9
#9 | MAPE / MAE | 15.475 / 154,923.6 | 13.724 / 139,705.0 | 11.173 / 112,467.8 | 13.898 / 139,276.6 | 10.044 / 96,345.6 | 9.106 / 94,373.0 | 8.944 / 88,769.8
#9 | SMAPE / RMSE | 16.323 / 172,577.8 | 13.677 / 154,132.8 | 11.662 / 137,145.2 | 13.838 / 155,917.5 | 10.461 / 107,908.6 | 9.363 / 107,343.3 | 9.109 / 94,215.5
#10 | MAPE / MAE | 9.718 / 84,693.5 | 7.865 / 66,159.7 | 3.734 / 31,769.3 | 6.467 / 51,486.4 | 9.532 / 76,688.4 | 6.447 / 58,445.7 | 8.126 / 66,934.4
#10 | SMAPE / RMSE | 10.324 / 106,016.5 | 7.804 / 74,191.2 | 3.768 / 41,944.2 | 6.381 / 59,493.8 | 10.090 / 86,767.4 | 6.801 / 79,742.1 | 8.322 / 70,176.0
#11 | MAPE / MAE | 7.684 / 86,461.5 | 6.946 / 78,909.5 | 10.479 / 120,436.2 | 8.428 / 97,234.5 | 8.207 / 94,408.1 | 4.796 / 52,858.0 | 8.173 / 93,854.9
#11 | SMAPE / RMSE | 7.624 / 89,752.8 | 6.777 / 84,003.9 | 10.117 / 129,083.6 | 8.114 / 110,128.6 | 8.001 / 113,977.8 | 4.792 / 65,940.2 | 7.895 / 109,198.8
#12 | MAPE / MAE | 10.909 / 198,019.9 | 11.528 / 185,539.0 | 11.335 / 198,073.9 | 8.060 / 150,602.6 | 16.509 / 274,021.5 | 15.502 / 262,408.4 | 12.548 / 213,895.2
#12 | SMAPE / RMSE | 11.772 / 246,448.9 | 12.531 / 218,863.7 | 12.257 / 245,098.6 | 8.656 / 215,053.7 | 18.237 / 301,877.6 | 16.933 / 282,917.5 | 13.530 / 239,177.6
#13 | MAPE / MAE | 10.859 / 129,566.8 | 6.434 / 74,832.8 | 9.337 / 113,476.0 | 9.281 / 114,457.0 | 7.296 / 85,524.0 | 8.080 / 98,972.2 | 11.340 / 132,817.0
#13 | SMAPE / RMSE | 11.120 / 138,456.2 | 6.335 / 99,743.8 | 9.057 / 133,681.6 | 8.935 / 136,944.4 | 7.062 / 107,624.9 | 7.967 / 120,164.5 | 10.696 / 182,117.9
#14 | MAPE / MAE | 16.932 / 96,731.0 | 18.979 / 107,594.1 | 19.794 / 110,870.1 | 20.317 / 114,463.1 | 11.588 / 66,490.6 | 13.838 / 78,714.7 | 11.617 / 68,646.8
#14 | SMAPE / RMSE | 16.959 / 118,159.9 | 18.727 / 127,884.7 | 18.542 / 146,544.5 | 19.714 / 137,958.4 | 11.370 / 84,755.5 | 13.502 / 99,496.8 | 12.077 / 86,639.6
#15 | MAPE / MAE | 8.754 / 27,941.4 | 11.134 / 35,581.8 | 13.548 / 43,346.5 | 14.943 / 47,741.2 | 15.315 / 48,570.3 | 11.348 / 35,829.2 | 9.665 / 30,503.0
#15 | SMAPE / RMSE | 8.690 / 32,400.6 | 11.092 / 41,586.6 | 13.484 / 52,510.2 | 14.895 / 54,839.4 | 14.685 / 54,990.4 | 10.834 / 40,745.5 | 9.519 / 33,665.8
#16 | MAPE / MAE | 3.698 / 28,663.5 | 5.464 / 43,052.0 | 6.492 / 51,490.9 | 6.490 / 51,788.5 | 6.349 / 47,643.9 | 7.297 / 56,151.7 | 8.135 / 63,139.3
#16 | SMAPE / RMSE | 3.816 / 36,871.1 | 5.561 / 46,735.1 | 6.427 / 55,291.6 | 6.305 / 61,412.2 | 7.075 / 83,335.8 | 7.355 / 61,467.2 | 7.801 / 72,776.7
#17 | MAPE / MAE | 14.733 / 158,789.7 | 16.539 / 183,795.1 | 16.429 / 186,916.6 | 14.200 / 161,566.7 | 19.331 / 161,566.7 | 16.563 / 183,237.2 | 18.482 / 210,939.7
#17 | SMAPE / RMSE | 13.463 / 196,929.1 | 15.824 / 220,976.3 | 16.195 / 207,466.2 | 13.491 / 190,206.8 | 18.517 / 240,885.5 | 15.926 / 204,504.1 | 18.849 / 232,308.3
#18 | MAPE / MAE | 20.435 / 51,450.7 | 21.996 / 55,775.9 | 25.215 / 64,258.0 | 19.346 / 49,369.6 | 15.841 / 40,561.3 | 21.873 / 55,301.3 | 29.400 / 74,301.1
#18 | SMAPE / RMSE | 28.192 / 79,467.2 | 26.368 / 70,379.0 | 22.703 / 69,386.8 | 18.473 / 56,735.1 | 14.867 / 47,297.2 | 25.580 / 68,735.8 | 24.262 / 96,851.9
#19 | MAPE / MAE | 15.019 / 51,420.0 | 15.848 / 54,138.4 | 14.909 / 50,633.0 | 18.489 / 62,803.5 | 12.916 / 44,291.3 | 13.166 / 44,567.9 | 15.222 / 52,095.2
#19 | SMAPE / RMSE | 13.911 / 65,230.5 | 14.368 / 73,624.5 | 13.558 / 69,627.7 | 16.082 / 87,969.5 | 12.076 / 56,959.1 | 12.230 / 56,548.1 | 13.964 / 74,066.8
#20 | MAPE / MAE | 11.547 / 45,280.7 | 11.540 / 45,856.9 | 10.177 / 40,072.3 | 11.494 / 45,175.6 | 7.627 / 28,476.7 | 7.115 / 26,853.6 | 6.375 / 23,599.9
#20 | SMAPE / RMSE | 11.300 / 54,913.2 | 10.861 / 57,032.3 | 9.892 / 47,984.0 | 10.965 / 55,129.5 | 7.344 / 37,455.5 | 6.967 / 32,726.6 | 6.319 / 33,965.6
#21 | MAPE / MAE | 16.681 / 60,290.4 | 16.351 / 57,179.8 | 13.080 / 44,519.4 | 16.823 / 61,666.8 | 24.834 / 92,035.0 | 12.517 / 39,110.7 | 16.660 / 54,886.0
#21 | SMAPE / RMSE | 14.947 / 72,309.0 | 15.399 / 67,283.7 | 13.021 / 49,189.5 | 15.435 / 70,936.8 | 24.361 / 115,086.2 | 11.942 / 50,038.0 | 16.030 / 61,489.4
#22 | MAPE / MAE | 13.315 / 156,021.0 | 12.662 / 145,766.2 | 13.516 / 157,752.1 | 13.456 / 156,104.2 | 9.676 / 112,407.9 | 12.311 / 144,273.1 | 14.900 / 183,708.7
#22 | SMAPE / RMSE | 13.954 / 176,387.5 | 12.670 / 173,683.7 | 13.815 / 171,023.6 | 13.699 / 174,139.7 | 9.571 / 124,694.2 | 12.691 / 161,564.8 | 16.069 / 221,819.3
#23 | MAPE / MAE | 20.367 / 93,087.6 | 21.404 / 98,199.6 | 17.038 / 78,316.0 | 20.242 / 91,943.1 | 24.992 / 111,352.3 | 17.995 / 80,754.6 | 29.869 / 146,433.4
#23 | SMAPE / RMSE | 18.357 / 105,324.3 | 19.417 / 105,831.9 | 15.795 / 89,847.6 | 18.188 / 102,284.2 | 21.546 / 130,080.3 | 16.291 / 95,857.8 | 30.531 / 179,673.1
#24 | MAPE / MAE | 9.957 / 26,785.1 | 11.388 / 30,247.6 | 10.294 / 27,316.7 | 12.451 / 33,483.7 | 10.569 / 26,705.3 | 9.116 / 23,761.7 | 12.186 / 30,362.3
#24 | SMAPE / RMSE | 10.261 / 30,556.2 | 11.716 / 32,659.0 | 10.588 / 29,210.4 | 12.606 / 35,745.9 | 11.103 / 27,769.1 | 9.721 / 28,603.8 | 13.265 / 34,953.7
#25 | MAPE / MAE | 26.521 / 380,001.1 | 27.421 / 393,419.5 | 28.329 / 404,349.9 | 29.513 / 415,541.1 | 26.655 / 386,851.2 | 22.264 / 323,511.7 | 28.113 / 403,099.3
#25 | SMAPE / RMSE | 22.250 / 469,426.9 | 22.883 / 485,654.4 | 22.995 / 527,813.4 | 23.557 / 550,621.2 | 23.253 / 456,895.5 | 19.459 / 391,006.8 | 22.685 / 539,578.5
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Kim, E.; Amarbayasgalan, T.; Jung, H. Efficient Weighted Ensemble Method for Predicting Peak-Period Postal Logistics Volume: A South Korean Case Study. Appl. Sci. 2022, 12, 11962. https://doi.org/10.3390/app122311962