
Representer Point Selection for Explaining Deep Neural Networks

About

We propose to explain the predictions of a deep neural network by pointing to the set of what we call representer points in the training set for a given test point prediction. Specifically, we show that we can decompose the pre-activation prediction of a neural network into a linear combination of activations of training points, with the weights corresponding to what we call representer values, which thus capture the importance of each training point on the learned parameters of the network. This decomposition provides a deeper understanding of the network than raw training-point influence: positive representer values correspond to excitatory training points and negative values to inhibitory points, which, as we show, yields considerably more insight. Our method is also much more scalable, allowing for real-time feedback in a manner not feasible with influence functions.
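The decomposition above can be illustrated with a minimal sketch. The construction assumes an L2-regularized model trained to stationarity: at a stationary point, the learned weights equal a weighted sum of training features, with weights (representer values) alpha_i = -1/(2*lambda*n) times the per-sample loss gradient with respect to the pre-activation. The toy data, the logistic loss, and all variable names below are illustrative assumptions, not artifacts from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (illustrative, not from the paper).
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n))

lam = 0.01  # L2 regularization strength (lambda)

# Train an L2-regularized logistic model to (near) stationarity
# with plain gradient descent.
w = np.zeros(d)
for _ in range(20000):
    z = X @ w                           # pre-activations on training points
    dL_dz = -y / (1.0 + np.exp(y * z))  # per-sample logistic loss gradient w.r.t. z
    grad = X.T @ dL_dz / n + 2 * lam * w
    w -= 0.5 * grad

# Representer values: alpha_i = -1/(2*lambda*n) * dL/dz_i.
z = X @ w
alpha = -1.0 / (2 * lam * n) * (-y / (1.0 + np.exp(y * z)))

# Decomposition check: the pre-activation of a test point equals the
# alpha-weighted sum of its feature dot products with training points.
x_test = rng.normal(size=d)
direct = w @ x_test
recomposed = np.sum(alpha * (X @ x_test))
print(abs(direct - recomposed))  # near zero at a stationary point
```

Training points with large positive alpha_i and high feature similarity to the test point are the excitatory representer points for that prediction; large negative values mark inhibitory ones.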

Chih-Kuan Yeh, Joon Sik Kim, Ian E.H. Yen, Pradeep Ravikumar • 2018

Related benchmarks

Task                       | Dataset                                        | Metric                        | Result | Rank
Case Deletion Diagnostics  | MNIST binary subsample (test)                  | AUC-DEL Score                 |  2.51  | 11
High-value data removal    | CIFAR10 binarized (test)                       | AUC (Data Elimination Impact) |  1.65  | 11
Case Deletion Diagnostics  | Toxicity binary subsample (test)               | AUC-DEL                       |  0.37  | 10
Case Deletion Diagnostics  | AGnews binary subsample (test)                 | AUC-DEL                       |  0.86  | 10
News Classification        | AG News subset targeted BERT-small (test)      | AUC-DEL+                      | -0.016 |  7
Text Classification        | Toxicity BERT-small targeted Kaggle 2018 (test)| AUC-DEL+                      | -0.008 |  7
Text Classification        | Toxicity Nooverlap BERT-small                  | AUC-DEL+                      | -0.008 |  7
Text Classification        | Toxicity Kaggle targeted 2018 RoBERTa (test)   | AUC-DEL+                      | -0.004 |  7
