
Shapley explainability on the data manifold

About

Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model's predictions to its input features in a mathematically principled and model-agnostic way. However, general implementations of Shapley explainability make an untenable assumption: that the model's features are uncorrelated. In this work, we demonstrate unambiguous drawbacks of this assumption and develop two solutions to Shapley explainability that respect the data manifold. One solution, based on generative modelling, provides flexible access to data imputations; the other directly learns the Shapley value-function, providing performance and stability at the cost of flexibility. While "off-manifold" Shapley values can (i) give rise to incorrect explanations, (ii) hide implicit model dependence on sensitive attributes, and (iii) lead to unintelligible explanations in higher-dimensional data, on-manifold explainability overcomes these problems.
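The contrast between off-manifold and on-manifold Shapley values comes down to the value function used for coalitions of features. Below is a minimal, illustrative sketch (not the paper's implementation): it computes exact Shapley values for a tiny model with two correlated features, once with a marginal ("off-manifold") value function and once with a crude kernel-weighted approximation to the conditional expectation ("on-manifold"). All names and the bandwidth choice are assumptions made for the toy example.

```python
# Toy contrast of off-manifold vs on-manifold Shapley value functions.
# Illustrative only; the paper uses generative modelling or a learned
# value function rather than the kernel-weighted imputation sketched here.
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)

# Two strongly correlated features; the model reads only feature 0.
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2])

def model(X):
    return X[:, 0]

def v_off_manifold(S, x, X_ref, f):
    """Marginal value function: out-of-coalition features are drawn from
    their unconditional distribution, ignoring correlations."""
    Z = X_ref.copy()
    Z[:, list(S)] = x[list(S)]
    return f(Z).mean()

def v_on_manifold(S, x, X_ref, f, bandwidth=0.2):
    """Conditional value function: out-of-coalition features are imputed from
    reference points whose in-coalition features are close to x_S
    (a rough approximation to E[f(X) | X_S = x_S])."""
    if len(S) == 0:
        return f(X_ref).mean()
    dist = np.linalg.norm(X_ref[:, list(S)] - x[list(S)], axis=1)
    w = np.exp(-0.5 * (dist / bandwidth) ** 2)
    Z = X_ref.copy()
    Z[:, list(S)] = x[list(S)]
    return np.average(f(Z), weights=w + 1e-12)

def shapley_values(x, X_ref, f, value_fn):
    """Exact Shapley values by summing over all coalitions (fine for small d)."""
    d = x.shape[0]
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[i] += w * (value_fn(set(S) | {i}, x, X_ref, f)
                               - value_fn(set(S), x, X_ref, f))
    return phi

x = np.array([1.0, 1.0])
print("off-manifold:", shapley_values(x, X, model, v_off_manifold))
print("on-manifold: ", shapley_values(x, X, model, v_on_manifold))
```

On this toy example the off-manifold attribution credits feature 0 alone, while the on-manifold attribution splits credit across the correlated pair, reflecting the dependence structure of the data.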

Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, Ilya Feige • 2020

Related benchmarks

Task | Dataset | Result | Rank
Conditional Shapley value estimation | Abalone cont (M=7) | MSE 1.244 | 44
Conditional Shapley value estimation | Wine (M=11) | MSEv 0.17 | 44
Conditional Shapley value estimation | Diabetes (M=10) | MSEv 0.154 | 43
Conditional Shapley value estimation | Abalone (M=8, all) | MSEv 1.32 | 39
Conditional Shapley value estimation | Adult (M=14) | MSEv 0.085 | 27
