From Generalized Linear Models to Neural Networks, and Back

This paper has been integrated into SSRN Manuscript 3822407

Posted: 9 Dec 2019; last revised: 24 Nov 2021

Date Written: December 11, 2019

Abstract

We show how to enhance classical generalized linear models with neural network features. Along the way, we highlight the traps and pitfalls that need to be avoided to obtain good statistical models. These include the non-uniqueness of sufficiently good regression models, the balance property, and representation learning, which brings us back to the concept of the good old generalized linear models.
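To make the balance property mentioned above concrete, the following is a minimal, self-contained sketch (not taken from the paper) of a Poisson claims frequency GLM under the canonical log link, fitted by Fisher scoring on synthetic data. With the canonical link and an intercept in the design matrix, the fitted expected frequencies sum to the observed claim counts. All data and variable names below are assumptions made purely for illustration.

```python
import numpy as np

# Sketch: Poisson claims frequency GLM with canonical log link, fitted by
# Fisher scoring (IRLS) on synthetic data. Under the canonical link with an
# intercept, the fitted model satisfies the balance property: the total
# fitted frequency equals the total observed claim count.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # design matrix incl. intercept
beta_true = np.array([-2.0, 0.3, -0.2, 0.1])                # illustrative "true" coefficients
y = rng.poisson(np.exp(X @ beta_true))                      # synthetic claim counts

beta = np.zeros(X.shape[1])
for _ in range(25):                                          # Fisher scoring iterations
    mu = np.exp(X @ beta)                                    # fitted means
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

mu_hat = np.exp(X @ beta)
print("observed total:", y.sum())
print("fitted total  :", round(mu_hat.sum(), 6))             # balance property: the two totals agree
```

Fisher scoring is used here only because it is the standard fitting algorithm for GLMs; the point of the sketch is the identity between observed and fitted totals, which is one of the properties the paper revisits when neural network features are added to the regression model.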

Keywords: generalized linear model, GLM, neural network, regression modeling, exponential dispersion family, deviance loss, balance property, canonical link, representation learning, regularization, LASSO, claims frequency modeling

JEL Classification: G22, C45, C52, C53

Suggested Citation

Wuthrich, Mario V., From Generalized Linear Models to Neural Networks, and Back (December 11, 2019). This paper has been integrated into SSRN Manuscript 3822407, Available at SSRN: https://ssrn.com/abstract=3491790 or http://dx.doi.org/10.2139/ssrn.3491790

Mario V. Wuthrich (Contact Author)

RiskLab, ETH Zurich

Department of Mathematics
Ramistrasse 101
Zurich, 8092
Switzerland

Paper statistics

Abstract Views: 4,619