
Understanding Black-box Predictions via Influence Functions

About

How can we explain the predictions of a black-box model? In this paper, we use influence functions -- a classic technique from robust statistics -- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.
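The paper's key quantity is the influence of a training point z on the loss at a test point: I(z, z_test) = -∇L(z_test)ᵀ H⁻¹ ∇L(z), computed using only gradients and Hessian-vector products. As a hedged illustration (not the authors' code), the sketch below applies this formula to a tiny L2-regularized logistic regression, solving H·s = ∇L(z_test) with conjugate gradient so that the Hessian is never formed explicitly; all function names here are illustrative.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_loss(theta, x, y, lam):
    # Gradient of log loss + L2 penalty for a single example (y in {0, 1}).
    p = sigmoid(x @ theta)
    return (p - y) * x + lam * theta

def hvp(theta, X, Y, lam, v):
    # Hessian-vector product of the mean training loss: H v,
    # using the logistic-regression Hessian X^T diag(p(1-p)) X / n + lam I.
    p = sigmoid(X @ theta)
    d = p * (1.0 - p)
    return X.T @ (d * (X @ v)) / len(Y) + lam * v

def conjugate_grad(apply_H, b, iters=200, tol=1e-12):
    # Solve H s = b given only the map v -> H v (oracle access, as in the paper).
    s = np.zeros_like(b)
    r = b - apply_H(s)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        s += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return s

def influence(theta, X, Y, lam, x_test, y_test, x_train, y_train):
    # I(z, z_test) = -grad_test^T H^{-1} grad_train.
    g_test = grad_loss(theta, x_test, y_test, lam)
    s_test = conjugate_grad(lambda v: hvp(theta, X, Y, lam, v), g_test)
    return -s_test @ grad_loss(theta, x_train, y_train, lam)
```

A large positive influence value means that upweighting the training point would increase the test loss (a "harmful" point for that prediction); a negative value marks a helpful point. At scale, the paper replaces explicit solves like this with stochastic Hessian-vector-product approximations.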

Pang Wei Koh, Percy Liang • 2017

Related benchmarks

Task | Dataset | Result | Rank
Graph Classification | MUTAG | Accuracy 88.1 | 697
Image Classification | ImageNet (test) | -- | 235
Graph Classification | ogbg-molpcba (test) | AP 27.2 | 206
Image Classification | SUN397 (test) | Top-1 Accuracy 37.82 | 136
Graph Classification | OGBG-MOLHIV v1 (test) | ROC-AUC 0.774 | 88
Image Classification | Flowers (test) | Accuracy 51.7 | 87
Graph Classification | DHFR | Accuracy 75.4 | 80
Graph Classification | OGBG-MOLPCBA v1 (test) | AP 26.6 | 77
Influence Estimation | Benchmarks Budgets k=1, 5, 10, 25 (Aggregated) | AUC (SR, dB) -7.46 | 66
Image Classification | Cars (test) | Accuracy 55.64 | 57

Showing 10 of 44 rows
