Case Report

Comparative Study of Mortality Rate Prediction Using Data-Driven Recurrent Neural Networks and the Lee–Carter Model

Department of Mathematical Sciences, Middle Tennessee State University, Murfreesboro, TN 37132-0001, USA
* Authors to whom correspondence should be addressed.
Big Data Cogn. Comput. 2022, 6(4), 134; https://doi.org/10.3390/bdcc6040134
Submission received: 14 October 2022 / Revised: 1 November 2022 / Accepted: 2 November 2022 / Published: 10 November 2022
(This article belongs to the Topic Machine and Deep Learning)

Abstract

The Lee–Carter model is one of the most important stochastic mortality prediction models. With the recent developments of machine learning and deep learning, many studies have applied deep learning approaches to time series mortality rate prediction, but most of them focus only on a comparison between Long Short-Term Memory and the traditional models. In this study, three different recurrent neural networks, Long Short-Term Memory, Bidirectional Long Short-Term Memory, and Gated Recurrent Unit, are proposed for the task of mortality rate prediction. Departing from the standard country-level mortality rate comparison, this study compares the three deep learning models and the classic Lee–Carter model on the yearly mortality data of the nine U.S. census divisions, by gender, from 1966 to 2015. In out-of-sample testing, we found that the Gated Recurrent Unit model showed better average MAE and RMSE values than the Lee–Carter model on 72.2% (13/18) and 66.7% (12/18) of the datasets, respectively, while the same measures for the Long Short-Term Memory and Bidirectional Long Short-Term Memory models were 50%/38.9% (MAE/RMSE) and 61.1%/61.1% (MAE/RMSE), respectively. Considering forecasting accuracy, computing expense, and interpretability together, the Lee–Carter model with ARIMA exhibits the best overall performance, but the recurrent neural networks could also be good candidates for mortality forecasting for divisions in the United States.

1. Literature Review

Modeling and forecasting future mortality rates are some of the most significant problems for life insurance, demography, and other social sciences. Many countries have experienced a rapid increase in life expectancy in recent decades (following the Second World War), and this increase has augmented the difficulties in modeling and predicting future mortality. Several stochastic mortality models have been proposed, for example, the famous Lee–Carter model (LC) by Lee and Carter [1]. Because of its simplicity, interpretability and, of course, convenience, the LC model has become the most frequently used stochastic model. Many improvements to the LC model have been proposed, including the Poisson extension of the LC model by Brouhns et al. [2], a functional data method using penalized regression by Hyndman and Ullah [3], a static PCA extension of the LC model by Shang [4], and a Two-Factor model for stochastic mortality with parameter uncertainty by Cairns et al. [5], also known as the Cairns–Blake–Dowd model (CBD model). Other studies related to the improvement of the Lee–Carter model can be found in the field of new techniques application, such as a cohort-based extension to the Lee–Carter model by Renshaw and Haberman [6] and the application of the random forest algorithm to improve the Lee–Carter mortality forecasting by Deprez et al. [7] and Levantesi and Pizzorusso [8].
Recently, with the development of machine learning, deep learning, and big data, new opportunities and challenges have been introduced into the actuarial field [9,10]. At present, recurrent neural networks stand prominently on the stage of mortality forecasting. Recurrent neural networks are useful for recognizing unidentifiable patterns in a large dataset with many features. However, the application of neural networks in the mortality rate forecasting field has not offered as much insight as expected. The biggest “problem” of neural networks is outcome uncertainty and the lack of demographic meaning. Moreover, instead of being explained by a specific hypothesis, neural networks are driven by data, which limits connections to existing work. Nevertheless, many researchers still seek to apply neural networks to mortality forecasting. Hainaut [11] proposed a neural network to predict and simulate the log-mortality rate. This study showed that the neural network could capture more information from known mortality data and then replicate the nonlinear trend in the prediction. Perla et al. [12] proposed a comparison study between the Lee–Carter model and deep learning models with data from the Human Mortality Database (HMD) [13]. Some studies focus on applying deep learning models to predict the time index in the Lee–Carter model. Nigri et al. [14,15] applied recurrent networks with the Long Short-Term Memory (LSTM) architecture to predict the time index of the Lee–Carter model and found that neural network forecasting could provide a more accurate trend than the ARIMA models in several countries and for both genders. Marino and Levantesi [16] extended the neural network approach of Nigri et al. [14] and derived the related confidence interval representing the Long Short-Term Memory model’s parameter uncertainty.
Richman and Wuthrich [17] compared the forecasting performance of a neural network extension of the Lee–Carter model and several different Lee–Carter approaches on all the countries in the HMD [13]. Other relevant applications of machine learning and deep learning studies in the mortality field can be found in Castellani et al. [18], Hong et al. [19], Richman and Wuthrich [20], and Gabrielli and Wuthrich [21].

2. Introduction

From the previous works, we notice that most studies of neural network applications examine Long Short-Term Memory, although the Bidirectional Long Short-Term Memory and the Gated Recurrent Unit can also be applied to time series forecasting tasks. Most studies propose recurrent neural network models as an approach to estimating the time index in the LC model. This paper focuses, instead, on a direct comparison of the mortality rate predictions of the Lee–Carter model and deep learning models. Petneházi and Gáll [22] proposed a comparison study of mortality prediction with the LSTM model and the Lee–Carter model on countries all around the world. We expand this comparison to three recurrent neural networks (LSTM, Bi-LSTM, and GRU) and the Lee–Carter model. We also noticed that most studies explore the mortality rates of different countries around the world, so we chose to run our experiments on the nine census divisions of the United States, for which the LC model with ARIMA shows extraordinary prediction results.
One problem we identified concerns the selection of parameters for the recurrent neural networks: some studies chose the same parameters for different models to make a direct comparison of their forecasting abilities. In this study, however, we chose the parameters with the best forecasting performance for each model and compared the resulting forecasts.
Therefore, the key contribution of this paper is its novel comparison study of the nine census divisions’ mortality rate predictions in the US using the Lee–Carter model, the Long Short-Term Memory model, the Bi-directional Long Short-Term Memory model, and the Gated Recurrent Unit. We measure the forecasting results by Mean Absolute Error (MAE) and Root-Mean-Square Error (RMSE) in an out-of-sample test.
The paper is organized as follows: the three different recurrent neural networks are introduced in Section 3; the Lee–Carter model and the Singular Value Decomposition (SVD) method are presented in Section 4; data features and data preprocessing are described in Section 5; Section 6 illustrates the numerical process of the experiments; and Section 7 offers the conclusion.

3. Recurrent Neural Networks

3.1. Long Short-Term Memory

The Long Short-Term Memory is an improved version of the recurrent neural network (RNN); it uses special units in addition to standard units. RNNs store information from the past by using the output of the previous unit as input, which causes problems when facing long-term dependencies. As a result, Hochreiter and Schmidhuber [23] introduced a special RNN that can store long-term memory as well as discard useless memory. This special structure makes the Long Short-Term Memory model well suited to classification problems, regression problems, and especially time-series tasks. In what follows, we describe some of the mathematical functions in the LSTM.
The sigmoid ($\sigma$) and the hyperbolic tangent (tanh) are the most frequently used nonlinear activation functions in neural networks, as shown in Equations (1) and (2).
$$\text{Sigmoid:} \quad \sigma(x) = \frac{1}{1 + e^{-x}} \quad (1)$$

$$\text{Hyperbolic tangent:} \quad \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \quad (2)$$
The sigmoid ($\sigma$) ranges between 0 and 1, and it is used to decide whether the information received should be retained or discarded. Because any number multiplied by 0 equals 0, values may disappear or be considered “forgotten”; any number multiplied by 1 remains the same, so it is “kept” or retained.
The hyperbolic tangent (tanh) activation function ranges between −1 and 1, and it is used to regulate the values flowing through the neural network.
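As a concrete illustration of Equations (1) and (2), both activation functions are one-liners in NumPy (an illustrative sketch, not code from the study):

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, Equation (1); output lies in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent, Equation (2); output lies in (-1, 1)."""
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

# Large negative inputs map toward 0 ("forget"), large positive
# inputs toward 1 ("keep"); tanh is instead centered at 0.
print(sigmoid(0.0))  # 0.5
print(tanh(0.0))     # 0.0
```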
A common LSTM unit has a 3-gate mechanism as will be discussed in the following section.
Forget gate
The forget gate controls the degree of information loss from the previous cell state; in other words, it decides what information is dropped or kept. The previous information passes through the sigmoid function, so the numbers coming out of the forget gate lie between 0 and 1. Values closer to 0 are more likely to be forgotten, and values closer to 1 are more likely to be kept. We denote the forget gate as $f_t$, the weight matrices for the forget gate as $W_f$ and $U_f$, and the hidden units as $h_t$. The input variable $x = (x_1, x_2, \ldots, x_T)$ is a time series sequence at time $t = 1, 2, \ldots, T$, from which we obtain Equation (3):
$$f_t = \sigma(W_f x_t + U_f h_{t-1}) \quad (3)$$
Input gate
The input gate controls the degree of new information stored in the current cell. It uses a sigmoid layer to decide what information will be updated in the current cell state and then uses a tanh layer to create a new candidate vector for the cell state. We denote the input gate as $i_t$, the weight matrices for the input gate as $W_i$ and $U_i$, and the hidden units as $h_t$. Thus, we have Equation (4):
$$i_t = \sigma(W_i x_t + U_i h_{t-1}) \quad (4)$$
Cell State
After the previous steps, the old cell state is updated with the information collected from the gates. This step multiplies the old cell state with the weights generated by the forget gate, filtering the original information into kept and dropped parts. Then, the input gate's result is multiplied with the candidate cell state to obtain the new information, which is added to the cell state. We use $C_t$ to represent the current cell state, $\tilde{C}_t$ the candidate cell state, $W_c$ and $U_c$ the weight matrices for the cell state, and $h_t$ the hidden units. The symbol $\circ$ denotes the Hadamard product. Here, we also provide a quick introduction to the Hadamard product: for two matrices A and B with the same dimensions, the Hadamard product is given by Equation (5):
$$(A \circ B)_{ij} = A_{ij} B_{ij} \quad (5)$$
In the example of 3 × 3 matrices A and B, we have Equation (6):
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \circ \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} & a_{12}b_{12} & a_{13}b_{13} \\ a_{21}b_{21} & a_{22}b_{22} & a_{23}b_{23} \\ a_{31}b_{31} & a_{32}b_{32} & a_{33}b_{33} \end{pmatrix} \quad (6)$$
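In NumPy, the Hadamard product above is simply the element-wise `*` operator (a minimal illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])

# Element-wise (Hadamard) product: each entry is A_ij * B_ij.
H = A * B
print(H)  # [[ 10  40]
          #  [ 90 160]]
```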
Hence, we have Equations (7) and (8) for cell state:
$$\tilde{C}_t = \tanh(W_c x_t + U_c h_{t-1}) \quad (7)$$
$$C_t = f_t \circ C_{t-1} + i_t \circ \tilde{C}_t \quad (8)$$
Output gate
The output gate calculates the new output value of the current cell. The sigmoid layer is used again to generate weights that decide which part of the cell state will be output; these weights are then multiplied with the transformed cell state to produce the output. We denote the output gate as $o_t$ and the weight matrices for the output gate as $W_o$ and $U_o$, giving Equations (9) and (10):
$$o_t = \sigma(W_o x_t + U_o h_{t-1}) \quad (9)$$
$$h_t = o_t \circ \tanh(C_t) \quad (10)$$
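Putting the gate equations together, one LSTM time step can be sketched in plain NumPy. This is an illustrative sketch only, written with every gate in the same $W x_t + U h_{t-1}$ form and with biases omitted, as in the equations above; the dimensions, toy input sequence, and random weights are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U):
    """One LSTM time step following Equations (3)-(10).
    Each W has shape (input_dim, output_dim) and acts on x_t;
    each U has shape (output_dim, output_dim) and acts on h_{t-1}."""
    f = sigmoid(x_t @ W['f'] + h_prev @ U['f'])        # forget gate
    i = sigmoid(x_t @ W['i'] + h_prev @ U['i'])        # input gate
    c_tilde = np.tanh(x_t @ W['c'] + h_prev @ U['c'])  # candidate cell state
    c = f * c_prev + i * c_tilde                       # new cell state (Hadamard products)
    o = sigmoid(x_t @ W['o'] + h_prev @ U['o'])        # output gate
    h = o * np.tanh(c)                                 # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_out = 1, 4  # hypothetical sizes: one rate per step, 4 hidden units
W = {k: rng.standard_normal((n_in, n_out)) for k in 'fico'}
U = {k: rng.standard_normal((n_out, n_out)) for k in 'fico'}
h, c = np.zeros(n_out), np.zeros(n_out)
for x_t in np.array([[0.010], [0.012], [0.011]]):  # a toy mortality sequence
    h, c = lstm_step(x_t, h, c, W, U)
print(h.shape)  # (4,)
```

In practice, of course, the weight matrices are learned by gradient descent rather than drawn at random.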
The Us and the Ws are the weight matrices that allow the model to handle input data of different lengths. A key point about the weight matrices is that they do not change over time: the same weight matrices are used at every time step. The weight matrices $W_f, W_i, W_o, W_c$ share the dimension (input dimension × output dimension), and the weight matrices $U_f, U_i, U_o, U_c$ share the dimension (output dimension × output dimension). These matrices are learned using a variant of the gradient descent algorithm. Moreover, we can calculate the number of network parameters for a single layer with Equation (11):
$$\text{parameters} = 4 \times \text{output dimension} \times (\text{output dimension} + \text{input dimension} + 1) \quad (11)$$
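Equation (11) can be verified numerically; the “+1” term accounts for the bias vector of each of the four gate/cell blocks (the biases are omitted from the gate equations above). For example, a single input feature and 32 hidden units give 4 × 32 × (32 + 1 + 1) = 4352 parameters, which is what a framework such as TensorFlow would report for one LSTM layer of that size:

```python
def lstm_parameters(input_dim, output_dim):
    # Four blocks (forget, input, cell, output), each with
    # W: input_dim x output_dim, U: output_dim x output_dim,
    # and one bias vector of length output_dim (the "+ 1").
    return 4 * output_dim * (output_dim + input_dim + 1)

print(lstm_parameters(1, 32))  # 4352
```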
In the end, a single LSTM unit structure is shown in Figure 1.

3.2. Bi-directional Long Short-Term Memory

A new Long Short-Term Memory model, the Bi-directional Long Short-Term Memory (Bi-LSTM), was proposed by Schuster and Paliwal [24]. It is an extension of the LSTM; the main difference between the Bi-LSTM and the LSTM is that, instead of one forward-direction hidden layer, the Bi-LSTM model uses two similar hidden layers with opposite directions. In the forward direction, the Bi-LSTM learns in increasing order of the sequence input and, in the backward direction, it learns the information in decreasing order of the sequence input. This means that both past and future information is utilized. However, compared to the LSTM, the Bi-LSTM model requires more time to finish training, so it presents a considerable challenge in practice.
Bi-LSTM performs well in natural language processing problems, such as sentence classification and translation. It can also be applied to handwriting recognition, sequence problems, and similar fields.
A Bi-LSTM unit is the same as the LSTM unit, but the architecture is different. To show the difference, the architectures of the LSTM and Bi-LSTM are shown in Figure 2. We can see that both past and future information from the dataset is used.

3.3. Gated Recurrent Unit

The last type of recurrent neural network considered is the Gated Recurrent Unit (GRU), introduced by Cho et al. [25]. It is similar to the LSTM, but it has fewer parameters, gates, and equations. Generally speaking, it is difficult to tell in advance which network is the best model for a given case; comparison studies between LSTM and GRU can be found in Chung et al. [26]. The GRU merges the forget gate and input gate of the LSTM into a single update gate. A Gated Recurrent Unit works according to Equations (12)–(15):
$$z_t = \sigma(W_z x_t + U_z h_{t-1}) \quad (12)$$

$$r_t = \sigma(W_r x_t + U_r h_{t-1}) \quad (13)$$

$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \circ h_{t-1})) \quad (14)$$

$$h_t = z_t \circ \tilde{h}_t + (1 - z_t) \circ h_{t-1} \quad (15)$$
Here, $z_t$ denotes the update gate and $r_t$ the reset gate; the Ws and Us are the weight matrices; $h_t$ is the output information passed to the next unit; $\tilde{h}_t$ is the candidate state; $x_t$ denotes the input vector; and $\circ$ is the Hadamard product. The structure of a single Gated Recurrent Unit is shown in Figure 3.
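Analogously to the LSTM, Equations (12)–(15) for one GRU time step can be sketched in NumPy (again an illustrative sketch with hypothetical dimensions and random weights, biases omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U):
    """One GRU time step following Equations (12)-(15)."""
    z = sigmoid(x_t @ W['z'] + h_prev @ U['z'])              # update gate, Eq. (12)
    r = sigmoid(x_t @ W['r'] + h_prev @ U['r'])              # reset gate, Eq. (13)
    h_tilde = np.tanh(x_t @ W['h'] + (r * h_prev) @ U['h'])  # candidate state, Eq. (14)
    return z * h_tilde + (1.0 - z) * h_prev                  # Eq. (15)

rng = np.random.default_rng(1)
n_in, n_out = 1, 4  # hypothetical sizes
W = {k: rng.standard_normal((n_in, n_out)) for k in 'zrh'}
U = {k: rng.standard_normal((n_out, n_out)) for k in 'zrh'}
h = np.zeros(n_out)
for x_t in np.array([[0.010], [0.012], [0.011]]):  # a toy mortality sequence
    h = gru_step(x_t, h, W, U)
print(h.shape)  # (4,)
```

Note that the GRU needs only three weight pairs instead of the LSTM's four, which is why it trains with fewer parameters.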

4. Lee–Carter Model

In this section, we discuss some important concepts regarding the Lee–Carter model [1]. The Lee–Carter model is a demographic model that is widely used in mortality prediction and life expectancy forecasting for different countries. The model relates the central mortality rate $m_{x,t}$ of age group $x$ in year $t$ log-linearly to its parameters. Equation (16) describes the model:
$$m_{x,t} = \exp(\alpha_x + \beta_x \kappa_t) \quad (16)$$
It can be rewritten as Equation (17):
$$\ln m_{x,t} = \alpha_x + \beta_x \kappa_t \quad (17)$$
where $\alpha_x$ represents the age-specific average log-mortality rate, $\beta_x$ is the deviation in mortality by age profile due to variations in $\kappa_t$, and $\kappa_t$ is the time index for year $t$. Another point is that the Lee–Carter model is subject to constraints on the parameters, given in (18):
$$\sum_{x=x_1}^{x_p} \hat{\beta}_x = 1 \quad \text{and} \quad \sum_{t=t_1}^{t_n} \hat{\kappa}_t = 0 \quad (18)$$
In practice, Singular Value Decomposition (SVD), Maximum Likelihood Estimation (MLE), and Least Squares (LS) are the three classical methods for estimating the parameters of the Lee–Carter model. In this paper, we applied the Singular Value Decomposition (SVD) approach to the Lee–Carter model and used an ARIMA process to forecast the time index $\kappa_t$.
The first step calculates the parameter $\alpha_x$, which is simply the average of the observed $\ln m_{x,t}$ over time, shown in Equation (19):
$$\hat{\alpha}_x = \frac{1}{n} \sum_{t=t_1}^{t_n} \ln m_{x,t} \quad (19)$$
The estimates of $\beta_x$ and $\kappa_t$ are obtained from the singular value decomposition of the matrix $\ln m_{x,t} - \hat{\alpha}_x$. Here, we present a quick introduction to the singular value decomposition. First, denote the matrix $A = \ln m_{x,t} - \hat{\alpha}_x$. Supposing that A is an $n \times m$ matrix, A can be decomposed as Equation (20):
$$A_{n \times m} = U_{n \times n} D_{n \times m} V'_{m \times m} \quad (20)$$
where U and V are orthogonal, V′ denotes the transpose of V, and D has the same dimensions as A with the singular values on its diagonal. We calculate $\hat{\beta}_x$ and $\hat{\kappa}_t$ according to Equations (21) and (22):
$$\hat{\beta}_x = \frac{U_{x,1}}{\sum_{x=x_1}^{x_p} U_{x,1}} \quad (21)$$
$$\hat{\kappa}_t = V_{t,1} D_{1,1} \sum_{x=x_1}^{x_p} U_{x,1} \quad (22)$$
The whole SVD process can be carried out in R with the function svd. For forecasting the time index $\kappa_t$, we chose the traditional ARIMA time series model with drift, which is discussed in an upcoming section.
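Although the paper performs the fit in R, the estimation in Equations (19)–(22) can be sketched in Python with NumPy. The synthetic data below are hypothetical and constructed so the model holds exactly; on such data the procedure recovers the true parameters, and the constraints in (18) hold by construction:

```python
import numpy as np

def fit_lee_carter(log_m):
    """Estimate alpha_x, beta_x, kappa_t from an (ages x years) matrix of
    log central mortality rates via the first singular triplet,
    following Equations (19)-(22)."""
    alpha = log_m.mean(axis=1)                  # Eq. (19): average over years
    A = log_m - alpha[:, None]                  # centred matrix ln(m) - alpha
    U, D, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U[:, 0] / U[:, 0].sum()              # Eq. (21): normalised to sum to 1
    kappa = Vt[0, :] * D[0] * U[:, 0].sum()     # Eq. (22)
    return alpha, beta, kappa

# Synthetic rank-one example: 5 age groups, 10 years.
rng = np.random.default_rng(2)
true_alpha = rng.normal(-4.0, 1.0, 5)
true_beta = np.full(5, 0.2)                     # sums to 1
true_kappa = np.linspace(3.0, -3.0, 10)         # declining trend, sums to 0
log_m = true_alpha[:, None] + np.outer(true_beta, true_kappa)

alpha, beta, kappa = fit_lee_carter(log_m)
print(np.allclose(beta, true_beta), np.allclose(kappa, true_kappa))  # True True
```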

5. Data

This study focused on the mortality rates of the nine census divisions of the US: New England, Middle Atlantic, East North Central, West North Central, South Atlantic, East South Central, West South Central, Mountain, and Pacific. The data for the numerical experiment were collected from usa.mortality.org (the United States Mortality Database), from which we obtained the life tables for the nine census divisions. This database provides the central mortality rates for 24 age groups (from 0 to 110+) by gender. The datasets were split into training and test sets using an 80% training/20% test rule. Because the data are time series, we could not split them randomly; instead, we used the historical data as the training set and predicted future mortality rates from it. The total years and the corresponding test set years of the data are shown in Table 1, and the average mortality rates by age group and gender are shown in Table 2.
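The chronological 80%/20% split described above can be sketched as follows (an illustration for a division observed over 1966–2015; the most recent 20% of years always forms the test set):

```python
import numpy as np

years = np.arange(1966, 2016)          # 50 yearly observations
split = int(len(years) * 0.8)          # first 80% for training
train_years, test_years = years[:split], years[split:]

# Time series data must not be shuffled: we train on the past
# and evaluate on the most recent 10 years.
print(train_years[-1], test_years[0], len(test_years))  # 2005 2006 10
```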

6. Numerical Process

After estimating the parameters $\alpha_x$, $\beta_x$, and $\hat{\kappa}_t$ in the Lee–Carter model with the Singular Value Decomposition (SVD) method, we used the AutoRegressive Integrated Moving Average (ARIMA) model to predict the future $\hat{\kappa}_t$. The process of finding the best ARIMA model for a univariate time series was carried out in R version 4.2.1 with auto.arima in the forecast package. This technique is based on the Hyndman–Khandakar algorithm by Hyndman and Khandakar [27]. The idea is to use a unit root test to check the stationarity of the time series and choose the degree of differencing d, and then select the best autoregressive order p and moving average order q by two criteria, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The best ARIMA(p,d,q) with drift for males and females are shown in Table 3 and Table 4, respectively.
We tried to find the parameters with the best performance for the neural networks (at least within some ranges); however, we did not use a systematic process to search for the optimal parameters. In the end, a simple neural network architecture was used, containing one hidden layer and a dense layer with a single unit. For model compilation, we used the Adam optimizer and the mean square error (MSE) loss function. The recurrent neural network predictions were run in Python with the TensorFlow package, and the neurons, batch size, epochs, and dropout percentages for females and males by division are shown in Table 5 and Table 6, respectively.
To measure prediction performance, we selected two error criteria for the out-of-sample test: mean absolute error (MAE) and root-mean-square error (RMSE), presented as Equations (23) and (24). The number of observations in the test data is denoted by n, $\hat{m}_{x,t}$ represents the predicted mortality rate, and $m_{x,t}$ is the actual mortality rate.
$$MAE = \frac{1}{n} \sum_{t=1}^{n} \left| \hat{m}_{x,t} - m_{x,t} \right| \quad (23)$$

$$RMSE = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( \hat{m}_{x,t} - m_{x,t} \right)^2} \quad (24)$$
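Equations (23) and (24) translate directly into NumPy (a minimal sketch with hypothetical predicted and actual rates):

```python
import numpy as np

def mae(m_hat, m):
    """Mean absolute error, Equation (23)."""
    return np.mean(np.abs(m_hat - m))

def rmse(m_hat, m):
    """Root-mean-square error, Equation (24)."""
    return np.sqrt(np.mean((m_hat - m) ** 2))

# Toy illustration: each prediction is off by 0.001,
# so both error measures equal 0.001.
actual = np.array([0.010, 0.012, 0.015])
pred = np.array([0.011, 0.011, 0.016])
print(round(mae(pred, actual), 4))   # 0.001
print(round(rmse(pred, actual), 4))  # 0.001
```

RMSE penalizes large deviations more heavily than MAE, which is why the two criteria can rank models differently, as seen in the results below.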
Ten consecutive results were collected by each recurrent neural network and the average MAE and RMSE are shown in Table 7 by gender and divisions.
Considering the average MAE and RMSE by gender, every recurrent neural network approach offers performance comparable to the LC model. Among them, the GRU model showed the best performance on both genders, with better MAE and RMSE values than the LC model on 72.2% (13/18) and 66.7% (12/18) of the datasets, respectively, while the LSTM and Bi-LSTM provided 50%/38.9% (MAE/RMSE) and 61.1%/61.1% (MAE/RMSE), respectively. It is surprising that the LSTM did not deliver the good performance we expected before the experiment.
We also compared the averaged performance by genders between the models. A summary of the averaged MAE and RMSE values is shown in Table 8.
We can see that the deep learning models performed better on the female dataset. At the same time, the MAE and RMSE analysis showed that the LSTM and Bi-LSTM models are not effective for male case prediction. Considering the average MAE and RMSE measurements, the GRU offered the best prediction performance with 0.003946/0.008871 (MAE/RMSE). Examples of predicted values for both genders are shown in Figure 4, Figure 5, Figure 6 and Figure 7; the Mountain division confirms this gender difference in prediction performance. The first 40 years were the training set and the last 10 years were the test set. Here, we picked the age groups 40–44 and 90–94, which represent the middle-aged and elderly groups, respectively.
The results show that the deep learning models are capable of displaying more details of a dataset with a nonlinear trend. The LC model sometimes underestimates or overestimates the future mortality rate (in most cases, it underpredicts; see Bergeron-Boucher et al. [28] and Booth et al. [29]). When we consider data with rapid changes (see the example of the 90–94 male age group), the mortality rates remained stable for a period (years 20–40), and no model successfully predicted the sudden decrease in the following 10 years. The uncertainty of the future remains a challenge for time series tasks.
We also considered the prediction in a single year; this is shown with the example of the New England division case. We chose the year 2015 with the predictions of mortality rate and log-mortality rate, as shown in Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, by genders. Figure 8 and Figure 9 compare all the models in terms of mortality rate and log-mortality rate in one plot, and the remaining figures compare the log-mortality (y-axis) of LC, LSTM, Bi-LSTM, and GRU to the real data for all the age groups (x-axis), respectively.
We noticed an excessively parabolic trend at the bottom of the log-mortality bathtub curve for the deep learning models, especially for the male population, as in the LSTM (male) and GRU (male) cases.

7. Conclusions and Discussion

In the present paper, we proposed three popular RNN models, the LSTM, Bi-LSTM, and GRU, for forecasting mortality rates. The experiment was performed on the nine census divisions of the US by gender. The results of the proposed comparative study show that the GRU model obtained the best overall prediction values among the models.
By examining the comparison between the neural networks and the LC model, can we say that the neural networks perform better than the LC model?
On the one hand, the Bi-LSTM and GRU models had better performance in terms of MAE and RMSE. In trend prediction, the neural networks displayed better prediction curves than the linear model; due to their unique architectures, more details can be captured, memorized, and replicated in the trend prediction.
On the other hand, even the GRU model could not achieve a high accuracy level in mortality rate trend prediction. In other words, the deep learning models do not significantly improve the accuracy of mortality rate prediction compared to the LC model. Regarding the algorithms themselves, neural networks do not have the simplicity and interpretability of the LC model. The deep learning models are driven by data, and their random outcomes lack demographic meaning.
Moreover, according to the experiment, we noticed that the neural networks have better prediction performance on the female population than on the male population in the United States.
This experiment could serve as a reference for other works, with the following potential improvements worthy of consideration. Firstly, some existing studies showed that the LC model performs worse for long-term mortality rate prediction than for short-term prediction. This is a problem of fitting-period selection; many studies prefer to choose a shorter period to avoid data volatility. For example, Hyndman and Booth [30] used 1950 as the starting year to avoid the difficulties of the war years and the 1918 Spanish influenza pandemic. Other related studies are by Tuljapurkar et al. [31] and Lee and Miller [32]. The selected time period in our study was 1966–2015, or 50 years. This annual mortality rate training set would not be considered long term demographically, and it avoids the excess mortality of the two World Wars and the COVID-19 pandemic; that is one of the reasons the LC model showed such remarkable forecasting performance. Second, some existing research implies that predicting log-mortality rates (the objective of the LC model) might yield better performance than predicting the mortality rate itself. Third, according to the structure of the LC model, the parameters $\alpha_x$ and $\beta_x$ are determined by the data; they are constant coefficients in the LC model, so the comparison of the predictions is more of an indirect comparison between the best ARIMA models and the neural networks.
Specifically, the uncertainty in recurrent neural networks could be considered the biggest challenge in applications: recurrent neural networks provide point predictions without any indication of variability. Some researchers aim to solve this problem through the construction of confidence intervals; for example, Keren et al. [33] proposed empirical calibration and temperature scaling for acquiring calibrated prediction intervals for neural network regressors, and Khosravi et al. [34] wrote a comprehensive review of prediction intervals. Several techniques are mentioned, such as bootstrap and Bayesian methods, but they have high computing expenses; these studies can be found in the works of Efron and Tibshirani [35], Dietterich [36], Heskes [37], and Petneházi [38]. According to these studies, no fully reliable method for this problem on time series tasks exists to date. However, we can consider the results of the recurrent neural networks as good candidates for predicting future mortality trends.
Regarding future work, we believe that mortality rate trend prediction can be improved by combining other stochastic mortality models with more deep learning models or by testing the neural networks on different data. One popular approach replaces the ARIMA model with deep learning models and builds an LC-RNN model. As we mentioned, most studies focus on the LSTM, but other deep learning models should also be applied to the field of mortality prediction.

Author Contributions

Conceptualization, Y.C. and A.Q.M.K.; methodology, A.Q.M.K.; software, Y.C.; validation, A.Q.M.K. and Y.C.; formal analysis, Y.C.; investigation, Y.C.; data curation, A.Q.M.K.; writing—original draft preparation, Y.C.; writing—review and editing, A.Q.M.K.; visualization, Y.C.; supervision, A.Q.M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Human Mortality Database. University of California, Berkeley (USA) and Max Planck Institute for Demographic Research (Germany). Available online: http://www.mortality.org (accessed on 15 September 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, R.; Carter, L. Modeling and Forecasting U.S. Mortality. J. Am. Stat. Assoc. 1992, 87, 659–671. [Google Scholar] [CrossRef]
  2. Brouhns, N.; Denuit, M.; Vermunt, J. A Poisson log-bilinear regression approach to the construction of projected lifetables. Insur. Math. Econ. 2002, 31, 373–393. [Google Scholar] [CrossRef] [Green Version]
  3. Hyndman, R.J.; Ullah, M.S. Robust forecasting of mortality and fertility rates: A functional data approach. Comput. Stat. Data Anal. 2007, 51, 4942–4956. [Google Scholar] [CrossRef] [Green Version]
  4. Shang, H.L. Dynamic principal component regression: Application to age-specific mortality forecasting. ASTIN Bull. 2019, 49, 619–645. [Google Scholar] [CrossRef] [Green Version]
  5. Cairns, A.; Blake, D.; Dowd, K. A Two-Factor Model for Stochastic Mortality with Parameter Uncertainty: Theory and Calibration. J. Risk Insur. 2006, 73, 687–718.
  6. Renshaw, A.; Haberman, S. A cohort-based extension to the Lee–Carter model for mortality reduction factors. Insur. Math. Econ. 2006, 38, 556–570.
  7. Deprez, P.; Shevchenko, P.; Wüthrich, M. Machine learning techniques for mortality modeling. Eur. Actuar. J. 2017, 7, 337–352.
  8. Levantesi, S.; Pizzorusso, V. Application of Machine Learning to Mortality Modeling and Forecasting. Risks 2019, 7, 26.
  9. Hassani, H.; Unger, S.; Beneki, C. Big Data and Actuarial Science. Big Data Cogn. Comput. 2020, 4, 40.
  10. Richman, R. AI in Actuarial Science. 24 July 2018. Available online: https://ssrn.com/abstract=3218082 (accessed on 10 October 2022).
  11. Hainaut, D. A neural-network analyzer for mortality forecast. ASTIN Bull. 2018, 48, 481–508.
  12. Perla, F.; Richman, R.; Scognamiglio, S.; Wüthrich, M.V. Time-series forecasting of mortality rates using deep learning. Scand. Actuar. J. 2021, 2021, 572–598.
  13. Human Mortality Database. University of California, Berkeley (USA) and Max Planck Institute for Demographic Research (Germany). Available online: http://www.mortality.org (accessed on 10 October 2022).
  14. Nigri, A.; Levantesi, S.; Marino, M.; Scognamiglio, S.; Perla, F. A Deep Learning Integrated Lee–Carter Model. Risks 2019, 7, 33.
  15. Nigri, A.; Levantesi, S.; Marino, M. Life expectancy and lifespan disparity forecasting: A long short-term memory approach. Scand. Actuar. J. 2020, 2021, 110–133.
  16. Marino, M.; Levantesi, S. Measuring Longevity Risk through a Neural Network Lee–Carter Model. 15 March 2020. Available online: https://ssrn.com/abstract=3599821 (accessed on 10 October 2022).
  17. Richman, R.; Wüthrich, M.V. Lee and Carter Go Machine Learning: Recurrent Neural Networks. 22 August 2019. Available online: https://ssrn.com/abstract=3441030 (accessed on 10 October 2022).
  18. Castellani, G.; Fiore, U.; Marino, Z.; Passalacqua, L.; Perla, F.; Scognamiglio, S.; Zanetti, P. An Investigation of Machine Learning Approaches in the Solvency II Valuation Framework. 2018. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3303296 (accessed on 10 October 2022).
  19. Hong, W.H.; Yap, J.H.; Selvachandran, G.; Thong, P.H.; Son, L.H. Forecasting mortality rates using hybrid Lee–Carter model, artificial neural network and random forest. Complex Intell. Syst. 2021, 7, 163–189.
  20. Richman, R.; Wüthrich, M.V. A Neural Network Extension of the Lee–Carter Model to Multiple Populations. 22 October 2018. Available online: https://ssrn.com/abstract=3270877 (accessed on 10 October 2022).
  21. Gabrielli, A.; Wüthrich, M.V. An Individual Claims History Simulation Machine. Risks 2018, 6, 29.
  22. Petneházi, G.; Gáll, J. Mortality rate forecasting: Can recurrent neural networks beat the Lee–Carter model? arXiv 2022.
  23. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  24. Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681.
  25. Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder–decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
  26. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555.
  27. Hyndman, R.J.; Khandakar, Y. Automatic Time Series Forecasting: The forecast Package for R. J. Stat. Softw. 2008, 27, 1–22.
  28. Bergeron-Boucher, M.-P.; Canudas-Romo, V.; Oeppen, J.E.; Vaupel, J. Coherent forecasts of mortality with compositional data analysis. Demogr. Res. 2017, 37, 527–566.
  29. Booth, H.; Hyndman, R.J.; Tickle, L.; De Jong, P. Lee–Carter mortality forecasting: A multi-country comparison of variants and extensions. Demogr. Res. 2006, 15, 289–310.
  30. Hyndman, R.J.; Booth, H. Stochastic population forecasts using functional data models for mortality, fertility and migration. Int. J. Forecast. 2008, 24, 323–342.
  31. Tuljapurkar, S.; Li, N.; Boe, C. A universal pattern of mortality decline in the G7 countries. Nature 2000, 405, 789–792.
  32. Lee, R.D.; Miller, T. Evaluating the Performance of the Lee–Carter Method for Forecasting Mortality. Demography 2001, 38, 537–549.
  33. Keren, G.; Cummins, N.; Schuller, B. Calibrated Prediction Intervals for Neural Network Regressors. IEEE Access 2018, 6, 54033–54041.
  34. Khosravi, A.; Nahavandi, S.; Creighton, D.; Atiya, A.F. Comprehensive Review of Neural Network-Based Prediction Intervals and New Advances. IEEE Trans. Neural Netw. 2011, 22, 1341–1356.
  35. Efron, B.; Tibshirani, R. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1986, 1, 54–75.
  36. Dietterich, T. Ensemble learning. In The Handbook of Brain Theory and Neural Networks; MIT Press: Cambridge, MA, USA, 2002.
  37. Heskes, T. Practical confidence and prediction intervals. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1997; pp. 176–182.
  38. Petneházi, G. Recurrent Neural Networks for Time Series Forecasting. arXiv 2018, arXiv:1901.00069.
Figure 1. LSTM unit structure.
Figure 2. LSTM architecture vs. Bi-LSTM architecture.
Figure 3. GRU unit structure.
Figure 4. The life expectancy of models in the age group 40–44, Mountain female population.
Figure 5. The life expectancy of models in the age group 40–44, Mountain male population.
Figure 6. The life expectancy of models in the age group 90–94, Mountain female population.
Figure 7. The life expectancy of models in the age group 90–94, Mountain male population.
Figure 8. The predictions of the mortality rate for New England female (left) and male (right), 2015.
Figure 9. The predictions of the log-mortality rate for New England female (left) and male (right), 2015.
Figure 10. The predictions of the log-mortality rate for New England female (left) and male (right), 2015: LC (red) vs. actual (blue).
Figure 11. The predictions of the log-mortality rate for New England female (left) and male (right), 2015: LSTM (red) vs. actual (blue).
Figure 12. The predictions of the log-mortality rate for New England female (left) and male (right), 2015: Bi-LSTM (red) vs. actual (blue).
Figure 13. The predictions of the log-mortality rate for New England female (left) and male (right), 2015: GRU (red) vs. actual (blue).
Table 1. Total and testing set years by regions.

Census Division | Total Years | Testing Set Years
New England | 1966–2015 | 2006–2015
Middle Atlantic | 1966–2015 | 2006–2015
East North Central | 1966–2015 | 2006–2015
West North Central | 1966–2015 | 2006–2015
South Atlantic | 1966–2015 | 2006–2015
East South Central | 1966–2015 | 2006–2015
West South Central | 1966–2015 | 2006–2015
Mountain | 1966–2015 | 2006–2015
Pacific | 1966–2015 | 2006–2015
Table 2. Average mortality rates by age groups and gender.

Age Group | Male | Female
0 | 0.0103184 | 0.008126
1–4 | 0.0003978 | 0.0003192
5–9 | 0.000224 | 0.0001586
10–14 | 0.00024 | 0.0001582
15–19 | 0.0008736 | 0.0003356
20–24 | 0.0012676 | 0.0004144
25–29 | 0.0012794 | 0.0004938
30–34 | 0.0014682 | 0.0006756
35–39 | 0.0019484 | 0.0010024
40–44 | 0.0028846 | 0.0015864
45–49 | 0.004514 | 0.0025628
50–54 | 0.0071644 | 0.0039904
55–59 | 0.011396 | 0.0061842
60–64 | 0.0179772 | 0.0097272
65–69 | 0.0277216 | 0.0151632
70–74 | 0.0425512 | 0.0242644
75–79 | 0.065019 | 0.0394208
80–84 | 0.100258 | 0.065534
85–89 | 0.1565726 | 0.111995
90–94 | 0.2401684 | 0.1854848
95–99 | 0.3470992 | 0.289526
100–104 | 0.4723096 | 0.4209162
105–109 | 0.601285 | 0.5649978
110+ | 0.7013268 | 0.680411
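The life-expectancy curves in Figures 4–7 are derived from age-group mortality rates like those in Table 2. As an illustration only, a textbook abridged life table can turn such a schedule of central death rates into a period life expectancy at birth. The sketch below uses standard simplifications (deaths spread evenly within each group, the conversion q = n·m / (1 + n·m/2), and an open-ended final group); it is not the paper's exact procedure. The male rates are copied from Table 2.

```python
# Simplified abridged life-table life expectancy from age-group central
# death rates. Illustrative sketch only: the q-conversion and the
# open-ended-group treatment are textbook simplifications, not the
# paper's method.

def life_expectancy(rates):
    """rates: list of (group width in years, central death rate m),
    ordered by age; the last entry is treated as open-ended."""
    survivors = 1.0        # radix l0 = 1
    person_years = 0.0
    for i, (n, m) in enumerate(rates):
        if i == len(rates) - 1:              # open-ended group
            person_years += survivors / m    # L = l / m for the last group
            break
        q = n * m / (1.0 + 0.5 * n * m)      # prob. of dying in the group
        deaths = survivors * q
        person_years += n * (survivors - 0.5 * deaths)
        survivors -= deaths
    return person_years                      # e0 = T0 / l0

# Male rates from Table 2 (widths: 1 year, 4 years, then 5-year groups)
male = [(1, 0.0103184), (4, 0.0003978), (5, 0.000224), (5, 0.00024),
        (5, 0.0008736), (5, 0.0012676), (5, 0.0012794), (5, 0.0014682),
        (5, 0.0019484), (5, 0.0028846), (5, 0.004514), (5, 0.0071644),
        (5, 0.011396), (5, 0.0179772), (5, 0.0277216), (5, 0.0425512),
        (5, 0.065019), (5, 0.100258), (5, 0.1565726), (5, 0.2401684),
        (5, 0.3470992), (5, 0.4723096), (5, 0.601285), (5, 0.7013268)]

print(round(life_expectancy(male), 1))
```

On this 1966–2015 period-average male schedule the sketch yields a life expectancy in the low-to-mid 70s, which is the right order of magnitude for the U.S. over that window.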
Table 3. Best ARIMA(p,d,q) for females.

Census Division | Best ARIMA(p,d,q)
New England | ARIMA(1,1,1)
Middle Atlantic | ARIMA(2,1,0)
East North Central | ARIMA(0,1,0)
West North Central | ARIMA(1,1,2)
South Atlantic | ARIMA(0,1,0)
East South Central | ARIMA(2,2,1)
West South Central | ARIMA(0,1,1)
Mountain | ARIMA(1,2,1)
Pacific | ARIMA(0,1,0)
Table 4. Best ARIMA(p,d,q) for males.

Census Division | Best ARIMA(p,d,q)
New England | ARIMA(0,1,0)
Middle Atlantic | ARIMA(0,1,0)
East North Central | ARIMA(0,1,0)
West North Central | ARIMA(1,1,0)
South Atlantic | ARIMA(0,1,0)
East South Central | ARIMA(0,1,0)
West South Central | ARIMA(0,1,0)
Mountain | ARIMA(0,1,0)
Pacific | ARIMA(0,1,0)
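ARIMA(0,1,0), the specification selected for almost every male division in Table 4, reduces (when fitted with a drift term, as is usual for the Lee–Carter period index k_t) to a random walk with drift: each forecast step adds the average historical first difference. A minimal sketch, with a made-up k_t path for illustration:

```python
# Random walk with drift = ARIMA(0,1,0) with a constant. The drift
# estimate (last - first) / (n - 1) equals the mean first difference.

def rw_drift_forecast(series, horizon):
    """Forecast `horizon` steps ahead with a random walk with drift."""
    drift = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + drift * h for h in range(1, horizon + 1)]

kt = [10.0, 8.9, 8.1, 6.8, 6.2, 5.1, 4.0, 3.2]  # hypothetical k_t values
print(rw_drift_forecast(kt, 3))                  # declines by the drift each year
```

With higher p, d, or q (as in several female divisions in Table 3) the forecasts instead depend on fitted AR and MA coefficients, which is why an automatic selection routine such as the one in the R forecast package [27] is typically used.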
Table 5. The number of neurons, batch size, epochs, and dropout percent for the female database by divisions.

Model | Neurons | Batch Size | Epochs | Dropout
New England
LSTM | 128 | 32 | 50 | 20%
Bi-LSTM | 128 | 32 | 50 | 20%
GRU | 128 | 32 | 50 | 20%
Middle Atlantic
LSTM | 64 | 32 | 100 | 30%
Bi-LSTM | 64 | 32 | 100 | 30%
GRU | 64 | 64 | 300 | 30%
East North Central
LSTM | 64 | 16 | 150 | 10%
Bi-LSTM | 64 | 16 | 150 | 10%
GRU | 64 | 16 | 150 | 10%
West North Central
LSTM | 128 | 32 | 50 | 20%
Bi-LSTM | 128 | 32 | 50 | 20%
GRU | 128 | 32 | 50 | 20%
South Atlantic
LSTM | 128 | 16 | 150 | 10%
Bi-LSTM | 128 | 16 | 150 | 10%
GRU | 128 | 16 | 300 | 20%
East South Central
LSTM | 128 | 32 | 300 | 10%
Bi-LSTM | 128 | 32 | 300 | 30%
GRU | 128 | 32 | 300 | 10%
West South Central
LSTM | 128 | 64 | 300 | 10%
Bi-LSTM | 64 | 64 | 300 | 10%
GRU | 128 | 16 | 300 | 10%
Mountain
LSTM | 128 | 32 | 50 | 20%
Bi-LSTM | 128 | 32 | 50 | 20%
GRU | 128 | 32 | 50 | 20%
Pacific
LSTM | 128 | 32 | 100 | 20%
Bi-LSTM | 128 | 32 | 100 | 20%
GRU | 128 | 32 | 100 | 20%
Table 6. The number of neurons, batch size, epochs, and dropout percent for the male database by divisions.

Model | Neurons | Batch Size | Epochs | Dropout
New England
LSTM | 32 | 16 | 300 | 10%
Bi-LSTM | 32 | 16 | 300 | 10%
GRU | 128 | 32 | 150 | 20%
Middle Atlantic
LSTM | 128 | 32 | 60 | 30%
Bi-LSTM | 64 | 32 | 60 | 30%
GRU | 64 | 16 | 100 | 30%
East North Central
LSTM | 128 | 64 | 50 | 10%
Bi-LSTM | 128 | 64 | 50 | 10%
GRU | 128 | 64 | 50 | 10%
West North Central
LSTM | 64 | 16 | 150 | 30%
Bi-LSTM | 128 | 32 | 150 | 30%
GRU | 64 | 32 | 150 | 30%
South Atlantic
LSTM | 64 | 32 | 300 | 10%
Bi-LSTM | 64 | 32 | 300 | 10%
GRU | 64 | 32 | 300 | 10%
East South Central
LSTM | 128 | 16 | 50 | 30%
Bi-LSTM | 128 | 32 | 50 | 30%
GRU | 64 | 32 | 300 | 30%
West South Central
LSTM | 64 | 16 | 30 | 10%
Bi-LSTM | 32 | 16 | 30 | 10%
GRU | 64 | 16 | 100 | 30%
Mountain
LSTM | 128 | 32 | 100 | 20%
Bi-LSTM | 128 | 32 | 100 | 20%
GRU | 128 | 32 | 100 | 20%
Pacific
LSTM | 32 | 32 | 300 | 30%
Bi-LSTM | 32 | 16 | 50 | 30%
GRU | 64 | 32 | 300 | 30%
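Before a recurrent network can be trained with the batch sizes and epoch counts in Tables 5 and 6, the yearly mortality series has to be cut into supervised (input window, next value) pairs. The sketch below shows the standard sliding-window construction; the window length `lookback` and the toy rate values are illustrative choices, not taken from the paper.

```python
# Sliding-window preparation of a time series for RNN-style training:
# each sample is `lookback` consecutive values, and its target is the
# value immediately following the window.

def make_windows(series, lookback):
    """Return (inputs, targets) lists for one-step-ahead forecasting."""
    inputs, targets = [], []
    for i in range(len(series) - lookback):
        inputs.append(series[i:i + lookback])
        targets.append(series[i + lookback])
    return inputs, targets

rates = [0.0042, 0.0041, 0.0039, 0.0038, 0.0036, 0.0035]  # toy series
X, y = make_windows(rates, lookback=3)
print(len(X), X[0], y[0])  # 3 samples; the first target is rates[3]
```

With 50 years of data per division, a short lookback leaves only a few dozen training samples, which is one reason the tuned networks above stay small (32–128 units) and rely on dropout for regularization.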
Table 7. MAE and RMSE for the LC, LSTM, Bi-LSTM, and GRU by gender and divisions.

Model | Female MAE | Female RMSE | Male MAE | Male RMSE
New England
LC | 0.003580 | 0.0085774 | 0.0038145 | 0.007061
LSTM | 0.003333 | 0.0077581 | 0.003602 | 0.007446
Bi-LSTM | 0.003559 | 0.0084523 | 0.004280 | 0.008178
GRU | 0.003222 | 0.007591 | 0.004250 | 0.009505
Middle Atlantic
LC | 0.002296 | 0.0055494 | 0.003419 | 0.0064182
LSTM | 0.005479 | 0.012104 | 0.0036423 | 0.0070882
Bi-LSTM | 0.004609 | 0.0107834 | 0.0045392 | 0.0093438
GRU | 0.004957 | 0.0115375 | 0.0024576 | 0.0048186
East North Central
LC | 0.004458 | 0.0106013 | 0.0042796 | 0.0081338
LSTM | 0.002742 | 0.0054024 | 0.0050855 | 0.0117587
Bi-LSTM | 0.002667 | 0.0056034 | 0.0056478 | 0.0103892
GRU | 0.003531 | 0.0080238 | 0.0045146 | 0.0104677
West North Central
LC | 0.006313 | 0.0147076 | 0.0058709 | 0.0123197
LSTM | 0.004541 | 0.0104502 | 0.0050320 | 0.0095187
Bi-LSTM | 0.004225 | 0.0100399 | 0.0038613 | 0.0073576
GRU | 0.004378 | 0.0104372 | 0.0029962 | 0.0055895
South Atlantic
LC | 0.004249 | 0.0100673 | 0.0043421 | 0.007902
LSTM | 0.004162 | 0.0096163 | 0.0065645 | 0.0129754
Bi-LSTM | 0.003537 | 0.0079331 | 0.0041443 | 0.0077775
GRU | 0.004525 | 0.0103644 | 0.0042279 | 0.0087472
East South Central
LC | 0.005919 | 0.0137948 | 0.006056 | 0.0121139
LSTM | 0.006389 | 0.0154277 | 0.0062494 | 0.0135819
Bi-LSTM | 0.006630 | 0.0161339 | 0.0043764 | 0.0091593
GRU | 0.006237 | 0.0150568 | 0.003344 | 0.0074549
West South Central
LC | 0.003881 | 0.0094994 | 0.004401 | 0.008112
LSTM | 0.002977 | 0.0067081 | 0.008326 | 0.0186121
Bi-LSTM | 0.002770 | 0.0061035 | 0.0088042 | 0.0187601
GRU | 0.003814 | 0.0089701 | 0.0031701 | 0.0062397
Mountain
LC | 0.005875 | 0.0136075 | 0.0058631 | 0.0116347
LSTM | 0.005474 | 0.0129507 | 0.0055829 | 0.0130083
Bi-LSTM | 0.005256 | 0.0124847 | 0.0037339 | 0.0076257
GRU | 0.005158 | 0.0123561 | 0.0048700 | 0.0112312
Pacific
LC | 0.00303 | 0.0063291 | 0.0038562 | 0.0073431
LSTM | 0.00337 | 0.0069403 | 0.0045788 | 0.0090681
Bi-LSTM | 0.002647 | 0.0054352 | 0.0055415 | 0.0105694
GRU | 0.002453 | 0.0056640 | 0.0029244 | 0.0056143
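The two error measures reported in Tables 7 and 8 are the mean absolute error and the root mean squared error between the forecast and observed mortality rates over the 2006–2015 test window. As a self-contained illustration (the rate values below are toy numbers, not from the paper):

```python
import math

# MAE = (1/n) * sum |y_i - yhat_i|
# RMSE = sqrt((1/n) * sum (y_i - yhat_i)^2)

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

actual = [0.0040, 0.0062, 0.0150]     # toy observed rates
predicted = [0.0043, 0.0058, 0.0161]  # toy forecasts
print(mae(actual, predicted), rmse(actual, predicted))
```

Because RMSE squares the residuals, it penalizes the occasional large miss (common at the oldest ages, where rates are largest) more heavily than MAE, which is why a model's MAE and RMSE rankings in Table 7 occasionally disagree.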
Table 8. The averaged MAE and RMSE values for the models by genders.

Model | MAE Female | MAE Male | RMSE Female | RMSE Male | Averaged MAE | Averaged RMSE
LC | 0.004400 | 0.004656 | 0.010304 | 0.009004 | 0.004528 | 0.009654
LSTM | 0.004274 | 0.005407 | 0.009706 | 0.011451 | 0.004841 | 0.010579
Bi-LSTM | 0.003989 | 0.004992 | 0.009219 | 0.009907 | 0.004490 | 0.009563
GRU | 0.004253 | 0.003639 | 0.010000 | 0.007741 | 0.003946 | 0.008871
Share and Cite

Chen, Y.; Khaliq, A.Q.M. Comparative Study of Mortality Rate Prediction Using Data-Driven Recurrent Neural Networks and the Lee–Carter Model. Big Data Cogn. Comput. 2022, 6, 134. https://doi.org/10.3390/bdcc6040134