
SmoothGrad: removing noise by adding noise

About

Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.
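SmoothGrad averages gradient-based sensitivity maps over several noisy copies of the input, so pixel-level noise in any single gradient map tends to cancel out. The sketch below is not the authors' released code; the function and parameter names (`grad_fn`, `noise_scale`, `n_samples`) are illustrative, and a toy analytic gradient stands in for a real image classifier.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, noise_scale=0.15, seed=0):
    """Average sensitivity maps over noisy copies of the input x.

    grad_fn: callable returning the gradient of the class score w.r.t. its input.
    noise_scale: Gaussian noise std as a fraction of the input's value range.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_scale * (x.max() - x.min())
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        total += grad_fn(noisy)  # one gradient map per noisy sample
    return total / n_samples

# Toy stand-in for a class score f(x) = sum(x**2), whose gradient is 2x.
x = np.linspace(-1.0, 1.0, 5)
sensitivity = smoothgrad(lambda z: 2 * z, x, n_samples=200)
```

With a real network, `grad_fn` would backpropagate the chosen class score to the input pixels; for this linear-gradient toy, the smoothed map converges to the plain gradient as `n_samples` grows.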

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg · 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | MNIST (test) | Accuracy | 90.68 | 882 |
| Image Classification | SVHN (test) | Accuracy | 62.35 | 362 |
| Explainability | ImageNet (val) | Insertion | 44.5 | 104 |
| Localization | ImageNet-1k (val) | -- | -- | 79 |
| Feature Relevance Evaluation | ImageNet (test) | R (Feature Relevance) | 0.38 | 60 |
| Attribution Fidelity | ImageNet 1,000 images (val) | µFidelity | 0.23 | 48 |
| Deletion | ImageNet 2,000 images (val) | Deletion Score | 0.13 | 48 |
| Feature Attribution Evaluation | ImageNet standard (val) | AUC | 78.1 | 39 |
| Explainable AI Evaluation | Photobombing | Area Coverage | 36.47 | 26 |
| Explanation Faithfulness | SST-2 | Delta AF | -0.675 | 24 |

Showing 10 of 72 rows.
