SmoothGrad: removing noise by adding noise
About
Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.
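The core idea — average the gradient over several noisy copies of the input — can be sketched in a few lines. This is a minimal illustration, not the authors' released code: `grad_fn` is an assumed callable returning the gradient of the class score with respect to the input, and the noise level follows the paper's convention of expressing the standard deviation as a fraction of the input's value range.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, noise_level=0.15, seed=0):
    """Average sensitivity maps over Gaussian-perturbed copies of x.

    grad_fn:     assumed callable mapping an input array to the gradient
                 of the class score w.r.t. that input.
    noise_level: sigma as a fraction of the input's dynamic range.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_level * (x.max() - x.min())
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)  # perturbed copy
        total += grad_fn(noisy)                           # accumulate gradients
    return total / n_samples                              # averaged map

# Toy check with an analytic score S(x) = sum(x**2), whose gradient is 2x:
x = np.array([1.0, -2.0, 3.0])
mask = smoothgrad(lambda v: 2 * v, x, n_samples=200)
```

With a linear gradient like this, the averaged map stays close to the plain gradient `2x`; the smoothing only changes the picture for networks whose gradients fluctuate sharply at small scales.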
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg • 2017
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | MNIST (test) | Accuracy | 90.68 | 882 |
| Image Classification | SVHN (test) | Accuracy | 62.35 | 362 |
| Explainability | ImageNet (val) | Insertion | 44.5 | 104 |
| Localization | ImageNet-1k (val) | -- | -- | 79 |
| Feature Relevance Evaluation | ImageNet (test) | R (Feature Relevance) | 0.38 | 60 |
| Attribution Fidelity | ImageNet 1,000 images (val) | µFidelity | 0.23 | 48 |
| Deletion | ImageNet 2,000 images (val) | Deletion Score | 0.13 | 48 |
| Feature Attribution Evaluation | ImageNet standard (val) | AUC | 78.1 | 39 |
| Explainable AI Evaluation | Photobombing | Area Coverage | 36.47 | 26 |
| Explanation Faithfulness | SST-2 | Delta AF | -0.675 | 24 |
Showing 10 of 72 rows