Eliminating unintended bias in personalized policies using Bias Eliminating Adapted Trees (BEAT)

41 Pages · Posted: 21 Aug 2021 · Last revised: 12 Jan 2022

Eva Ascarza

Harvard Business School

Ayelet Israeli

Harvard Business School - Marketing Unit

Date Written: December 4, 2021

Abstract

An inherent risk of algorithmic personalization is the disproportionate targeting of individuals from certain groups (or with certain demographic characteristics, such as gender or race), even when the decision maker does not intend to discriminate based on those "protected" attributes. This unintended discrimination is often caused by underlying correlations in the data between protected attributes and other observed characteristics used by the algorithm (or machine learning (ML) tool) to create predictions and target individuals optimally. Because these correlations are hidden in high-dimensional data, removing protected attributes from the database does not solve the discrimination problem; instead, removing those attributes often exacerbates the problem by making it undetectable and, in some cases, even increasing the bias generated by the algorithm.

We propose BEAT (Bias-Eliminating Adapted Trees) to address these issues. This approach allows decision makers to target individuals based on differences in their predicted behavior (hence capturing value from personalization) while ensuring a balanced allocation of resources across individuals, achieving both group and individual fairness. Essentially, the method extracts only the heterogeneity in the data that is unrelated to protected attributes. To do so, we build on the Generalized Random Forest (GRF) framework (Wager and Athey 2018; Athey et al. 2019) and develop a targeting allocation that is "balanced" with respect to protected attributes. We validate BEAT using simulations and an online experiment with N=3,146 participants. This approach can be applied to any type of allocation decision that is based on prediction algorithms, such as medical treatments, hiring decisions, product recommendations, or dynamic pricing.
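The intuition behind extracting only heterogeneity that is unrelated to protected attributes can be illustrated with a simplified residualization sketch. This is not the authors' GRF-based implementation, and all variable names and the linear residualization step are illustrative assumptions; it only shows why a model fit on attribute-purged features produces targeting scores that are balanced across groups:

```python
# Conceptual sketch (not BEAT itself): strip from each feature the
# component explained by a protected attribute, then fit a targeting
# model on the residualized ("balanced") features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n).astype(float)  # hypothetical binary group label

# One feature correlated with the protected attribute, one independent of it
x_corr = 2.0 * protected + rng.normal(size=n)
x_indep = rng.normal(size=n)
X = np.column_stack([x_corr, x_indep])

# Outcome depends on both features (so a naive model would track `protected`)
y = 1.5 * x_corr + 1.0 * x_indep + rng.normal(size=n)

# Residualize each feature on the protected attribute
resid_model = LinearRegression().fit(protected.reshape(-1, 1), X)
X_balanced = X - resid_model.predict(protected.reshape(-1, 1))

# Fit a simple tree-based targeting model on the balanced features
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_balanced, y)
scores = tree.predict(X_balanced)

# The resulting targeting scores are (nearly) uncorrelated with the
# protected attribute, while still exploiting heterogeneity in x_indep
corr = np.corrcoef(scores, protected)[0, 1]
```

In this toy setup the correlation between `scores` and `protected` is close to zero, whereas a tree fit on the raw `X` would inherit the correlation through `x_corr`. BEAT achieves an analogous balance inside the GRF splitting criterion rather than by linear preprocessing.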


Keywords: Algorithmic bias, personalization, targeting, generalized random forests (GRF), fairness, discrimination

JEL Classification: C53, C54, C55, M31

Suggested Citation

Ascarza, Eva and Israeli, Ayelet, Eliminating unintended bias in personalized policies using Bias Eliminating Adapted Trees (BEAT) (December 4, 2021). Harvard Business School Marketing Unit Working Paper No. 22-010, Available at SSRN: https://ssrn.com/abstract=3908088 or http://dx.doi.org/10.2139/ssrn.3908088

Eva Ascarza (Contact Author)

Harvard Business School (email)

Soldiers Field
Boston, MA 02163
United States

HOME PAGE: http://evaascarza.com

Ayelet Israeli

Harvard Business School - Marketing Unit (email)

Soldiers Field
Boston, MA 02163
United States
