Learning to Deceive with Attention-Based Explanations

About

Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders. We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks. Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions. Across multiple models and tasks, our approach manipulates attention weights while paying surprisingly little cost in accuracy. Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender. Consequently, our results cast doubt on attention's reliability as a tool for auditing algorithms in the context of fairness and accountability.
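To make the idea above concrete, here is a minimal sketch of how such a training objective can be set up; it is an illustration of the general technique, not the authors' released code, and the names (deceptive_attention_loss, lam, impermissible_mask) are hypothetical. The task loss is augmented with a penalty on the total attention mass placed on the designated impermissible tokens, so the visible attention shifts away from them even if the model still uses them.

```python
import torch

def deceptive_attention_loss(task_loss, attention, impermissible_mask, lam=0.1):
    """Illustrative sketch (assumed interface, not the paper's exact code).

    task_loss:          scalar tensor, the ordinary prediction loss
    attention:          (batch, seq_len) attention weights summing to 1 per example
    impermissible_mask: (batch, seq_len) 1.0 for impermissible tokens, 0.0 otherwise
    lam:                weight of the manipulation penalty
    """
    # Total attention weight assigned to impermissible tokens, per example.
    impermissible_mass = (attention * impermissible_mask).sum(dim=-1)
    # Penalize that mass; minimizing this term pushes the displayed attention
    # toward zero on the designated tokens.
    penalty = -torch.log(1.0 - impermissible_mass + 1e-12).mean()
    return task_loss + lam * penalty
```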

Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton • 2019

Related benchmarks

Task                      Dataset                  Result        Rank
Faithfulness Measurement  MHC                      BLEU 68.5     18
Faithfulness Measurement  Alpaca                   BLEU 0.547    12
Faithfulness Measurement  tldr_news                BLEU 76.7     12
Faithfulness Evaluation   Alpaca (800 samples)     BLEU 54.6     5
Faithfulness Evaluation   tldr_news (800 samples)  BLEU 76.1     5
