
RISE: Randomized Input Sampling for Explanation of Black-box Models

About

Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/
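The masking-and-probing procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `model` is assumed to be a black-box callable returning a 1-D probability vector, the grid size and keep probability are illustrative defaults, and the paper's bilinearly upsampled, randomly shifted masks are simplified to nearest-neighbour upsampling.

```python
import numpy as np

def rise_saliency(model, image, target=None, n_masks=1000, grid=7, p_keep=0.5, rng=None):
    """Estimate a RISE-style saliency map for a black-box model.

    `model(img)` is assumed to return a 1-D array of class probabilities;
    `image` is an (H, W, C) float array. Returns (class index, saliency map).
    """
    rng = np.random.default_rng(rng)
    H, W = image.shape[:2]
    # Low-resolution binary grids upsampled to image size (nearest-neighbour
    # here; the paper uses bilinear upsampling with random shifts).
    cell_h, cell_w = -(-H // grid), -(-W // grid)  # ceiling division
    masks = np.empty((n_masks, H, W))
    for i in range(n_masks):
        g = (rng.random((grid, grid)) < p_keep).astype(float)
        up = np.kron(g, np.ones((cell_h, cell_w)))
        masks[i] = up[:H, :W]
    # Probe the black-box model with randomly masked versions of the input.
    probs = np.stack([model(image * m[..., None]) for m in masks])  # (N, n_classes)
    cls = target if target is not None else int(probs.mean(0).argmax())
    # Importance of a pixel = expected score over masks that keep that pixel.
    sal = np.tensordot(probs[:, cls], masks, axes=1) / (n_masks * p_keep)
    return cls, sal
```

Because only forward passes through `model` are needed, the same routine works for any classifier, with no access to gradients or internal activations.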

Vitali Petsiuk, Abir Das, Kate Saenko · 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Explainability | ImageNet (val) | Insertion | 72.67 | 104 |
| Attribution Fidelity | ImageNet 1,000 images (val) | µFidelity | 0.182 | 48 |
| Deletion | ImageNet 2,000 images (val) | Deletion Score | 0.127 | 48 |
| Pointing localization | VOC 2007 (test) | Mean Accuracy (All) | 86.9 | 44 |
| Pointing game | MSCOCO 2014 (val) | Mean Accuracy (All) | 54.7 | 42 |
| Feature Attribution | Image data 224 x 224 | Avg Execution Time (s) | 2.82 | 28 |
| Explainable AI Evaluation | Photobombing | Area Coverage | 28.03 | 26 |
| XAI Evaluation | ECSSD | Area | 0.3687 | 16 |
| Feature Attribution | MS-CXR text (test) | Conf. Drop (%) | 1.16 | 13 |
| Causal Explanations | ECSSD | Area | 0.1165 | 9 |

Showing 10 of 27 rows.
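The deletion metric used in several of the rows above removes pixels in order of decreasing saliency and tracks the model's confidence in the target class; a lower area under this curve indicates a more faithful explanation (insertion is the mirror image, adding pixels to a blank canvas). A minimal sketch follows, assuming `model` returns a 1-D probability vector; the step size and zero baseline are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def deletion_auc(model, image, saliency, target, step=100, baseline=0.0):
    """Deletion metric sketch: zero out pixels most-salient-first and
    integrate the target-class probability (trapezoidal rule)."""
    H, W = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    img = image.copy()
    scores = [float(model(img)[target])]
    for start in range(0, H * W, step):
        ys, xs = np.unravel_index(order[start:start + step], (H, W))
        img[ys, xs] = baseline
        scores.append(float(model(img)[target]))
    # Trapezoidal AUC over the fraction of pixels removed (x spans [0, 1]).
    s = np.asarray(scores)
    return float((s[0] / 2 + s[1:-1].sum() + s[-1] / 2) / (len(s) - 1))
```

A good saliency map drives the score down quickly, so its deletion AUC should be lower than that of, say, the same map reversed.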
