Explaining Denials: Adverse Action Codes and Machine Learning in Credit Decisioning
46 Pages Posted: 17 Jun 2022
Date Written: June 10, 2022
Abstract
As ML/AI usage expands into high-stakes decisioning, there is an increasing need for transparent explanations. In credit decisioning, regulation has long required lenders to provide "Adverse Action Codes" (AACs) explaining a denial of credit, yet a wide range of acceptable methodologies could plausibly be used to generate them. This paper compares AACs derived from four common methods - an axiomatically grounded Shapley-based approach, a Most Points Lost approach, a Difference from Mean approach, and a Univariate binning approach - applied to two XGBoost risk models predicting credit card and mortgage risk, respectively. We find that (i) the Univariate approach deviates the most from the Shapley-based approach; (ii) all derived differences are "significant" within our novel placebo-testing framework; (iii) differences are more pronounced for lower-risk customers near the reject boundary; and (iv) the Most Points Lost and Difference from Mean approaches are less robust to data perturbations than the Shapley-based approach.
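To make the contrast between attribution methods concrete, the toy sketch below applies two of the simpler methods named above (Most Points Lost and Difference from Mean) to a hypothetical additive scorecard. All feature names, weights, and applicant values are invented for illustration and are not from the paper; the paper's actual comparison uses XGBoost models, not a linear scorecard.

```python
# Illustrative sketch only: contrasting two AAC ranking methods on a
# hypothetical additive scorecard (higher score = lower risk).
# Weights, population statistics, and the applicant are all invented.
WEIGHTS = {"utilization": -100.0, "inquiries": -20.0, "age_of_file": 1.0}
POP_MEAN = {"utilization": 0.3, "inquiries": 1.0, "age_of_file": 60.0}   # average applicant
BEST_VALUE = {"utilization": 0.0, "inquiries": 0.0, "age_of_file": 200.0}  # ideal applicant

def contributions_vs_mean(x):
    """'Difference from Mean': points lost relative to the average applicant."""
    return {f: WEIGHTS[f] * (x[f] - POP_MEAN[f]) for f in WEIGHTS}

def contributions_most_points_lost(x):
    """'Most Points Lost': points lost relative to the best attainable value."""
    return {f: WEIGHTS[f] * (x[f] - BEST_VALUE[f]) for f in WEIGHTS}

def top_adverse_action_codes(contribs, k=2):
    """Rank features by points lost (most negative first) and return the top k."""
    ranked = sorted(contribs.items(), key=lambda kv: kv[1])
    return [f for f, c in ranked[:k] if c < 0]

applicant = {"utilization": 0.9, "inquiries": 2.0, "age_of_file": 60.0}

# The applicant sits exactly at the mean on age_of_file but far from the
# ideal value, so the two methods name different top reasons for denial.
print(top_adverse_action_codes(contributions_vs_mean(applicant), k=1))           # ['utilization']
print(top_adverse_action_codes(contributions_most_points_lost(applicant), k=1))  # ['age_of_file']
```

For an additive scorecard like this, the Shapley-based attribution relative to a mean baseline coincides with the Difference from Mean contributions; the methods diverge more sharply on non-additive models such as the gradient-boosted trees studied in the paper.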
Keywords: Machine Learning Explainability, Credit Risk Modeling, Shapley, Adverse Action Codes
JEL Classification: D81, C4, G5