Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis
Michele Loi and Markus Christen, “Choosing How to Discriminate: Navigating Ethical Trade-Offs in Fair Algorithmic Design for the Insurance Sector,” Philosophy & Technology, March 13, 2021, https://doi.org/10.1007/s13347-021-00444-9.
Posted: 28 Aug 2019 Last revised: 5 May 2021
Date Written: August 17, 2019
Abstract
Here we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts and business managers. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting differ from those of using predictive algorithms in other sectors (e.g. medical diagnosis, sentencing). Moreover, the computer science literature has demonstrated a trade-off in the extent to which one can pursue non-discrimination versus predictive accuracy. Here too, the moral assessment of this trade-off depends on the context of application.
Keywords: insurance, discrimination, big data, fairness in machine learning, ethics