
RelatIF: Identifying Explanatory Training Examples via Relative Influence

About

In this work, we focus on the use of influence functions to identify relevant training examples that one might hope "explain" the predictions of a machine learning model. One shortcoming of influence functions is that the training examples deemed most "influential" are often outliers or mislabelled, making them poor choices for explanation. In order to address this shortcoming, we separate the role of global versus local influence. We introduce RelatIF, a new class of criteria for choosing relevant training examples by way of an optimization objective that places a constraint on global influence. RelatIF considers the local influence that an explanatory example has on a prediction relative to its global effects on the model. In empirical evaluations, we find that the examples returned by RelatIF are more intuitive when compared to those found using influence functions.
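The idea above can be sketched numerically. In the following toy illustration (our own sketch, not the authors' code; all variable names and the specific normalization are assumptions), the classical influence of a training example z on a test point is the inner product of the test gradient with H⁻¹ applied to the training gradient, and a RelatIF-style score rescales that local influence by a norm measuring the example's global effect on the parameters, so that high-norm outliers no longer dominate the ranking:

```python
import numpy as np

# Toy sketch of influence vs. RelatIF-style relative influence.
# Assumes per-example loss gradients and the loss Hessian H are given.
rng = np.random.default_rng(0)
d, n = 5, 8
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
H = A @ A.T                                 # SPD stand-in for the Hessian
train_grads = rng.standard_normal((n, d))   # grad of loss at each training point
train_grads[0] *= 50.0                      # an outlier with a huge gradient
test_grad = rng.standard_normal(d)          # grad of the test-point loss

H_inv = np.linalg.inv(H)

# Classical influence: I(z, z_test) = grad L(z_test)^T  H^{-1}  grad L(z)
influence = train_grads @ H_inv @ test_grad

# RelatIF-style score: local influence divided by a measure of the
# example's global effect on the parameters (here the norm of H^{-1} grad L(z);
# the exact normalizer is an illustrative choice).
global_effect = np.linalg.norm(train_grads @ H_inv, axis=1)
relatif = influence / global_effect

top_influence = int(np.argmax(np.abs(influence)))
top_relatif = int(np.argmax(np.abs(relatif)))
```

Because the outlier's gradient norm inflates both its local influence and its global effect, the division cancels much of that inflation, which is why the two rankings can disagree on the most "explanatory" example.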

Elnaz Barshan, Marc-Etienne Brunet, Gintare Karolina Dziugaite • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Influence Estimation | Benchmarks Budgets k=1, 5, 10, 25 (Aggregated) | AUC (SR, dB) | 38.16 | 66
Contributor Attribution | Fashion Product | Diversity | 8.51 | 48
Contributor Attribution | ArtBench Post-Impressionism | Aesthetic Score | 9.57 | 36
Contributor Attribution | CIFAR-20 | Inception Score | 23.88 | 32
End model evaluation | YouTube | Test Loss | 0.24 | 22
End model evaluation | DN clipart | Test Loss | 0.704 | 22
End model evaluation | PW | Test Loss | 0.276 | 22
End model evaluation | Spambase | Test Loss | 0.284 | 22
End model evaluation | Census | Test Loss | 0.372 | 22
End model evaluation | IMDB | Test Loss | 0.501 | 22

(Showing 10 of 20 rows)
