
Interpretable Explanations of Black Boxes by Meaningful Perturbation

About

As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks "look" in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.
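The core idea of the paper, deleting the smallest image region whose removal most damages the classifier's score, can be illustrated with a toy sketch. Everything below is hypothetical: `black_box_score` is a stand-in scorer (not a real network) whose evidence lives in the top-left quadrant, and a single greedy 0/1 pass replaces the paper's gradient-descent optimisation of a continuous mask.

```python
import numpy as np

# Hypothetical stand-in for a black-box classifier: the "class score" is the
# mean intensity of the top-left quadrant, so that region is the evidence.
def black_box_score(img):
    h, w = img.shape
    return img[: h // 2, : w // 2].mean()

def perturb(img, mask, baseline=0.0):
    # Explicit, interpretable perturbation: keep a pixel where mask = 1,
    # replace it with a reference value where mask = 0 (the paper also
    # considers blur and noise as references).
    return mask * img + (1.0 - mask) * baseline

def explain(img, lam=0.05):
    # Minimise  lam * ||1 - m||_1 + f(perturb(x; m))  over the mask m,
    # i.e. find the smallest deletion that most damages the score.
    # (The paper optimises a continuous mask by gradient descent; one
    # greedy 0/1 pass is enough for this toy example.)
    mask = np.ones_like(img, dtype=float)
    objective = lambda m: lam * (1.0 - m).sum() + black_box_score(perturb(img, m))
    best = objective(mask)
    for idx in np.ndindex(img.shape):
        mask[idx] = 0.0          # try deleting this pixel
        trial = objective(mask)
        if trial < best:
            best = trial         # deletion pays off: keep it
        else:
            mask[idx] = 1.0      # otherwise revert
    return mask

img = np.ones((8, 8))
saliency = 1.0 - explain(img)    # 1 where deletion mattered
```

With `lam=0.05`, deleting a quadrant pixel drops the score by 1/16 = 0.0625, so the sparsity penalty is outweighed only there, and the recovered saliency map highlights exactly the top-left quadrant.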

Ruth Fong, Andrea Vedaldi • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Feature Relevance Evaluation | ImageNet (test) | R (Feature Relevance) | 0.4 | 60 |
| Saliency Map Localization | ILSVRC 2012 (val) | Proportion | 56.1 | 8 |
| Object Recognition Faithfulness | ImageNet ILSVRC-2012 (val) | Avg Drop | 63.5 | 5 |
| XAI Faithfulness Evaluation | MIMII bandsaw 1.0 (test) | Spearman Correlation | 0.69 | 4 |
| XAI Faithfulness Evaluation | MIMII bearing 1.0 (test) | Spearman Correlation | 0.889 | 4 |
| XAI Faithfulness Evaluation | MIMII fan 1.0 (test) | Spearman Correlation | 0.976 | 4 |
| XAI Faithfulness Evaluation | MIMII gearbox 1.0 (test) | Spearman Correlation | 0.595 | 4 |
| XAI Faithfulness Evaluation | MIMII shaker 1.0 (test) | Spearman Correlation | 0.974 | 4 |
| XAI Faithfulness Evaluation | MIMII slider 1.0 (test) | Spearman Correlation | 0.943 | 4 |
| XAI Faithfulness Evaluation | ToyADMOS ToyTank 1.0 (test) | Spearman Correlation | 0.921 | 4 |

Showing 10 of 18 rows.
