
L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data

About

We study instancewise feature importance scoring as a method for model interpretation. Any such method yields, for each predicted instance, a vector of importance scores associated with the feature vector. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions of this kind, but incur exponential complexity in the number of features. This combinatorial explosion arises from the definition of the Shapley value and prevents these methods from scaling to large data sets and complex models. We focus on settings in which the data have a graph structure, and the contribution of features to the target variable is well-approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring. We establish the relationship of our methods to the Shapley value and another closely related concept known as the Myerson value from cooperative game theory. We demonstrate on both language and image data that our algorithms compare favorably with other methods for model interpretation.
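The key idea behind the linear-complexity algorithms is to restrict the Shapley sum to subsets of a feature's local graph neighborhood rather than all 2^(n-1) subsets. A minimal sketch of this idea, assuming a hypothetical `value_fn` (the model's value on a feature subset) and `neighborhood` (the k-hop neighbors of a feature in the graph); the exact weighting used in the paper may differ:

```python
from itertools import combinations
from math import factorial

def local_shapley(value_fn, n_features, neighborhood, k=1):
    """Sketch of neighborhood-restricted Shapley scoring.

    For each feature i, marginal contributions are averaged only over
    subsets of its k-hop neighborhood, so the cost per feature is
    2^{|neighborhood|} instead of 2^{n-1}. `value_fn` and `neighborhood`
    are assumed interfaces, not the paper's implementation.
    """
    scores = []
    for i in range(n_features):
        nbrs = [j for j in neighborhood(i, k) if j != i]
        m = len(nbrs) + 1  # players in the local game, including i
        phi = 0.0
        for r in range(len(nbrs) + 1):
            for S in combinations(nbrs, r):
                # Standard Shapley weight, applied to the local game
                w = factorial(r) * factorial(m - r - 1) / factorial(m)
                phi += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
        scores.append(phi)
    return scores
```

On a chain graph (e.g. a sentence), each feature has at most 2k neighbors, so the total cost is O(n · 4^k): linear in the number of features for fixed k.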

Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan • 2018

Related benchmarks

Task                     Dataset  Rate of Label Changes  Rank
Faithfulness Evaluation  AG-News  20                     12
Faithfulness Evaluation  SST-2    30                     12
Faithfulness Evaluation  IMDB     36                     12
