# DeepTriangle: A Deep Learning Approach to Loss Reserving

## Abstract


## 1. Introduction

## 2. Neural Network Preliminaries

## 3. Data and Model Architecture

### 3.1. Data Source

### 3.2. Training/Testing Setup

### 3.3. Response and Predictor Variables

### 3.4. Model Architecture

#### 3.4.1. Multi-Task Learning

#### 3.4.2. Sequential Input Processing

#### 3.4.3. Company Code Embeddings

#### 3.4.4. Fully Connected Layers and Outputs

### 3.5. Deployment Considerations

## 4. Experiments

### 4.1. Evaluation Metrics

### 4.2. Implementation and Training

### 4.3. Results and Discussion

## 5. Conclusions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Abadi, Martín, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, and et al. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. arXiv:1603.04467.
- Avanzi, Benjamin, Greg Taylor, Phuong Anh Vu, and Bernard Wong. 2016. Stochastic loss reserving with dependence: A flexible multivariate Tweedie approach. Insurance: Mathematics and Economics 71: 63–78.
- Caruana, Rich. 1997. Multitask learning. Machine Learning 28: 41–75.
- Cheng, Heng-Tze, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah, Levent Koc, Jeremiah Harmsen, and et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems (DLRS 2016), Boston, MA, USA, September 15.
- Chollet, François, and Joseph J. Allaire. 2018. Deep Learning with R. Shelter Island: Manning Publications.
- Chollet, François, and Joseph J. Allaire. 2017. R Interface to Keras. Available online: https://github.com/rstudio/keras (accessed on 7 September 2019).
- Chukhrova, Nataliya, and Arne Johannssen. 2017. State space models and the Kalman filter in stochastic claims reserving: Forecasting, filtering and smoothing. Risks 5: 30.
- Chung, Junyoung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555.
- England, Peter D., and Richard J. Verrall. 2002. Stochastic claims reserving in general insurance. British Actuarial Journal 8: 443–518.
- Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2001. The Elements of Statistical Learning. New York: Springer.
- Gabrielli, Andrea. 2019. A Neural Network Boosted Double Over-Dispersed Poisson Claims Reserving Model. Available online: https://ssrn.com/abstract=3365517 (accessed on 15 September 2019).
- Gabrielli, Andrea, Ronald Richman, and Mario V. Wüthrich. 2018. Neural Network Embedding of the Over-Dispersed Poisson Reserving Model. Available online: https://ssrn.com/abstract=3288454 (accessed on 15 September 2019).
- Gabrielli, Andrea, and Mario V. Wüthrich. 2018. An individual claims history simulation machine. Risks 6: 29.
- Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. Cambridge: MIT Press.
- Guo, Cheng, and Felix Berkhahn. 2016. Entity embeddings of categorical variables. arXiv:1604.06737.
- Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, December 4–9.
- LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521: 436.
- Mack, Thomas. 1993. Distribution-free calculation of the standard error of chain ladder reserve estimates. ASTIN Bulletin 23: 213–25.
- Martinek, László. 2019. Analysis of stochastic reserving models by means of NAIC claims data. Risks 7: 62.
- Meyers, Glenn. 2015. Stochastic Loss Reserving Using Bayesian MCMC Models. Arlington: Casualty Actuarial Society.
- Meyers, Glenn, and Peng Shi. 2011. Loss Reserving Data Pulled from NAIC Schedule P. Available online: http://www.casact.org/research/index.cfm?fa=loss_reserves_data (accessed on 7 September 2019).
- Miranda, María Dolores Martínez, Jens Perch Nielsen, and Richard Verrall. 2012. Double chain ladder. ASTIN Bulletin: The Journal of the IAA 42: 59–76.
- Nair, Vinod, and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, June 21–24.
- Peremans, Kris, Stefan Van Aelst, and Tim Verdonck. 2018. A robust general multivariate chain ladder method. Risks 6: 108.
- Quarg, Gerhard, and Thomas Mack. 2004. Munich chain ladder. Blätter der DGVFM 26: 597–630.
- Reddi, Sashank J., Satyen Kale, and Sanjiv Kumar. 2018. On the convergence of Adam and beyond. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, April 30–May 3.
- Richman, Ronald, and Mario V. Wüthrich. 2018. A Neural Network Extension of the Lee-Carter Model to Multiple Populations. Available online: https://ssrn.com/abstract=3270877 (accessed on 7 September 2019).
- Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15: 1929–58.
- Srivastava, Nitish, Elman Mansimov, and Ruslan Salakhutdinov. 2015. Unsupervised learning of video representations using LSTMs. arXiv:1502.04681.
- Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the Advances in Neural Information Processing Systems 27, Montreal, QC, Canada, December 8–13.
- The H2O.ai team. 2018. h2o: R Interface for H2O. R Package Version 3.20.0.8. Mountain View: H2O.ai.
- Wüthrich, Mario V. 2018a. Machine learning in individual claims reserving. Scandinavian Actuarial Journal, 1–16.
- Wüthrich, Mario V. 2018b. Neural networks applied to chain-ladder reserving. European Actuarial Journal 8: 407–36.
- Wüthrich, Mario V., and Christoph Buser. 2019. Data analytics for non-life insurance pricing. Swiss Finance Institute Research Paper.

1. A portmanteau of deep learning and loss development triangle.

2. Note the use of angle brackets to index position in a sequence rather than layers in a feedforward neural network as in Section 2.

**Figure 2.** DeepTriangle architecture. Embed denotes an embedding layer, GRU a gated recurrent unit, and FC a fully connected layer.
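To make the GRU building block named in the figure concrete, the following is a minimal NumPy sketch of the standard gated recurrent unit update (Chung et al. 2014), not the paper's actual implementation: bias terms are omitted, and all dimensions, parameter names, and the nine-step sequence length are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated recurrent unit step (biases omitted for brevity)."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_cand         # blend old state and candidate

rng = np.random.default_rng(0)
d_in, d_h = 4, 8                              # hypothetical feature/state sizes
Wz, Wr, Wh = (rng.normal(scale=0.5, size=(d_h, d_in)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(scale=0.5, size=(d_h, d_h)) for _ in range(3))

h = np.zeros(d_h)                             # initial hidden state
for x in rng.normal(size=(9, d_in)):          # e.g., one step per development lag
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```

Because the new state is a convex combination of the previous state and a tanh candidate, each component of `h` stays in (-1, 1), which keeps the recurrence numerically stable over long development histories.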

**Table 1.** Performance comparison of various models. DeepTriangle and AutoML are abbreviated to DT and ML, respectively. The best metric for each line of business is in bold.

| Line of Business | Mack | ODP | CIT | LIT | ML | DT |
|---|---|---|---|---|---|---|
| **MAPE** | | | | | | |
| Commercial Auto | 0.060 | 0.217 | 0.052 | 0.052 | 0.068 | **0.043** |
| Other Liability | 0.134 | 0.223 | 0.165 | 0.152 | 0.142 | **0.109** |
| Private Passenger Auto | 0.038 | 0.039 | 0.038 | 0.040 | 0.036 | **0.025** |
| Workers’ Compensation | 0.053 | 0.105 | 0.054 | 0.054 | 0.067 | **0.046** |
| **RMSPE** | | | | | | |
| Commercial Auto | 0.080 | 0.822 | 0.076 | 0.074 | 0.096 | **0.057** |
| Other Liability | 0.202 | 0.477 | 0.220 | 0.209 | 0.181 | **0.150** |
| Private Passenger Auto | 0.061 | 0.063 | 0.057 | 0.060 | 0.059 | **0.039** |
| Workers’ Compensation | 0.079 | 0.368 | 0.080 | 0.080 | 0.099 | **0.067** |
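The two metrics in Table 1 follow their standard definitions. As a sketch (the toy figures below are hypothetical and not from the paper's data, and the paper's exact aggregation across companies may differ):

```python
def mape(actual, predicted):
    """Mean absolute percentage error: mean of |error| / |actual|."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def rmspe(actual, predicted):
    """Root mean square percentage error: root of the mean squared relative error."""
    return (sum(((a - p) / a) ** 2
                for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

# Toy ultimate-loss figures for three companies (illustrative only)
actual = [100.0, 250.0, 80.0]
predicted = [95.0, 260.0, 86.0]
print(f"MAPE:  {mape(actual, predicted):.3f}")   # ≈ 0.055
print(f"RMSPE: {rmspe(actual, predicted):.3f}")  # ≈ 0.057
```

RMSPE squares the relative errors before averaging, so it penalizes large percentage misses more heavily than MAPE; this is why the two metrics can rank models differently.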

© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kuo, K.
DeepTriangle: A Deep Learning Approach to Loss Reserving. *Risks* **2019**, *7*, 97.
https://doi.org/10.3390/risks7030097
