Sanity Checks for Saliency Maps

About

Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance solely on visual assessment can be misleading. Through extensive experiments, we show that some existing saliency methods are independent both of the model and of the data-generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor a model. Theory for the case of a linear model and a single-layer convolutional neural network supports our experimental findings.

Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim • 2018
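The paper's central sanity checks are a model parameter randomization test (compare saliency maps from the trained model against maps from the same architecture with randomly re-initialized weights) and a data randomization test (retrain on randomly permuted labels and compare again). The sketch below illustrates the first test only. It is a minimal illustration, not the authors' implementation: it assumes a PyTorch image classifier on a single CPU input batch, uses plain gradient saliency as the method under test, and uses Spearman rank correlation as a stand-in similarity measure; the helper names `gradient_saliency` and `model_randomization_test` are hypothetical.

```python
import copy

from scipy.stats import spearmanr


def gradient_saliency(model, x, target_class):
    """Plain gradient saliency: |d logit_target / d input|.

    Assumes x is a CPU tensor with batch size 1. (Hypothetical helper,
    standing in for whatever saliency method is under test.)
    """
    x = x.clone().requires_grad_(True)
    model.zero_grad()
    logits = model(x)
    logits[0, target_class].backward()
    return x.grad.detach().abs()


def model_randomization_test(model, x, target_class):
    """Model parameter randomization test (sketch).

    Compares the saliency map from the trained model against the map
    produced by the same architecture with freshly re-initialized
    weights, and returns their Spearman rank correlation.
    """
    trained_map = gradient_saliency(model, x, target_class)

    randomized = copy.deepcopy(model)
    for module in randomized.modules():
        # reset_parameters re-initializes weights on layers that define it
        # (e.g. nn.Linear, nn.Conv2d).
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()
    random_map = gradient_saliency(randomized, x, target_class)

    rho, _ = spearmanr(trained_map.flatten().numpy(),
                       random_map.flatten().numpy())
    return rho
```

A saliency method that is sensitive to the learned parameters should produce a low correlation here; a map that barely changes when the weights are randomized cannot be explaining what the model learned, which is the failure mode the paper's tests are designed to expose.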

Related benchmarks

Task                       | Dataset                     | Metric               | Result | Rank
---------------------------|-----------------------------|----------------------|--------|-----
Domain Generalization      | VLCS                        | Accuracy             | 74     | 238
Domain Generalization      | PACS                        | --                   | --     | 221
Domain Generalization      | OfficeHome                  | Accuracy             | 81     | 182
Classification             | CivilComments (test)        | Worst-case Accuracy  | 56.5   | 47
Domain Generalization      | VLCS DomainBed (test)       | Average OOD Accuracy | 77.4   | 27
Explainable AI Evaluation  | Photobombing                | Area Coverage        | 60.25  | 26
XAI Evaluation             | ECSSD                       | Area                 | 0.6741 | 16
Domain Generalization      | OfficeHome DomainBed (OOD)  | Avg OOD Accuracy     | 67.5   | 16
Domain Generalization      | PACS OOD (test)             | Average Accuracy     | 87.9   | 13
Causal Explanations        | ECSSD                       | Area                 | 0.0836 | 9

Showing 10 of 34 rows.
