Shapley-Lorenz Decompositions in eXplainable Artificial Intelligence

15 Pages Posted: 13 Mar 2020

Date Written: February 29, 2020

Abstract

Explainability of artificial intelligence models has become a crucial issue, especially in the most regulated fields, such as health and finance. In this paper, we provide a global explainable AI model based on Lorenz decompositions, thus extending previous contributions based on variance decompositions. This makes the resulting Shapley-Lorenz decomposition more generally applicable, and provides a unifying variable importance criterion that combines predictive accuracy with explainability, expressed as a normalised and easy-to-interpret metric. The proposed decomposition is illustrated within the context of a real financial problem: the prediction of bitcoin prices.
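The abstract describes pairing Shapley-style coalition weights with Lorenz zonoid (Gini-type) gains in place of variance gains. A minimal sketch of that idea follows; it is an illustration under stated assumptions, not the authors' implementation. The `fit_predict(X, cols, y)` callback is hypothetical: it stands for refitting the model on a feature subset and returning the fitted values, and the univariate Lorenz zonoid value is computed here as the Gini coefficient of the (assumed non-negative) fitted values.

```python
from itertools import combinations
from math import factorial
import numpy as np

def lorenz_zonoid(y):
    """Univariate Lorenz zonoid value: the Gini concentration of y
    (twice the area between the Lorenz curve and the 45-degree line).
    Assumes y is non-negative."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    ranks = np.arange(1, n + 1)
    # Rank-covariance form of the Gini coefficient
    return 2.0 * np.sum(ranks * y) / (n * np.sum(y)) - (n + 1.0) / n

def shapley_lorenz(fit_predict, X, y, features):
    """Shapley-Lorenz share of each feature: the Shapley-weighted average
    gain in the Lorenz zonoid of the fitted values when the feature
    joins each coalition of the remaining features."""
    p = len(features)
    shares = {}
    for k in features:
        others = [f for f in features if f != k]
        total = 0.0
        for size in range(p):
            # Standard Shapley weight |S|! (p - |S| - 1)! / p!
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            for S in combinations(others, size):
                lz_with = lorenz_zonoid(fit_predict(X, list(S) + [k], y))
                # A constant prediction has zonoid value 0, so the
                # empty coalition contributes a zero baseline.
                lz_without = lorenz_zonoid(fit_predict(X, list(S), y)) if S else 0.0
                total += w * (lz_with - lz_without)
        shares[k] = total
    return shares
```

Because the zonoid value is normalised, the resulting shares are comparable across models of different types, which is the unifying property the abstract highlights.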

Keywords: Shapley values, Lorenz Zonoids, Predictive accuracy

JEL Classification: C45, C53, C58, G32

Suggested Citation

Giudici, Paolo and Raffinetti, Emanuela, Shapley-Lorenz Decompositions in eXplainable Artificial Intelligence (February 29, 2020). Available at SSRN: https://ssrn.com/abstract=3546773 or http://dx.doi.org/10.2139/ssrn.3546773

Paolo Giudici (Contact Author)

University of Pavia ( email )

Via San Felice 7
Pavia, 27100
Italy

Emanuela Raffinetti

University of Pavia ( email )

Via San Felice 5
Pavia, 27100
Italy


Paper statistics: 230 downloads, 812 abstract views, rank 241,336