
XAI for Transformers: Better Explanations through Conservative Propagation

About

Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction. We identify Attention Heads and LayerNorm as main reasons for such unreliable explanations and propose a more stable way for propagation through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method to Transformers, is shown both theoretically and empirically to overcome the deficiency of a simple gradient-based approach, and achieves state-of-the-art explanation performance on a broad range of Transformer models and datasets.
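The core idea, treating the non-linear factors in LayerNorm (and attention) as constants during relevance propagation so that the layer behaves like a linear map, can be sketched in plain Python. This is a minimal illustration, not the authors' implementation; the function names and the simplified LayerNorm (no learnable scale/shift, no epsilon) are assumptions for exposition. With the normalizer detached, LayerNorm becomes linear in its input, and the generic LRP rule conserves total relevance exactly:

```python
# Hedged sketch of conservative LRP propagation through LayerNorm.
# Treating the standard deviation sigma as a constant ("detached") makes
# y = A x linear with A_ij = (delta_ij - 1/N) / sigma, so the LRP rule
# R_i = sum_j x_i * A_ji * R_j / y_j preserves the total relevance.
# All names here are illustrative, not taken from the paper's code.
from statistics import mean

def layernorm(x):
    """Simplified LayerNorm: center and scale to unit std (no gamma/beta/eps)."""
    m = mean(x)
    sigma = mean([(v - m) ** 2 for v in x]) ** 0.5
    return [(v - m) / sigma for v in x], sigma

def lrp_layernorm(x, relevance_out):
    """Propagate relevance through LayerNorm with sigma held constant."""
    n = len(x)
    y, sigma = layernorm(x)
    relevance_in = []
    for i in range(n):
        r = 0.0
        for j in range(n):
            # Jacobian of the centering/scaling map, with sigma detached.
            a_ji = ((1.0 if i == j else 0.0) - 1.0 / n) / sigma
            r += x[i] * a_ji * relevance_out[j] / y[j]
        relevance_in.append(r)
    return relevance_in

x = [1.0, 2.0, 4.0, 7.0]
r_out = [0.3, -0.1, 0.5, 0.3]
r_in = lrp_layernorm(x, r_out)
# Conservation check: total input relevance equals total output relevance.
print(abs(sum(r_in) - sum(r_out)) < 1e-9)  # True
```

A plain gradient-based attribution would instead differentiate through sigma, which breaks this conservation property and is one source of the instability the paper identifies.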

Ameen Ali, Thomas Schnake, Oliver Eberle, Grégoire Montavon, Klaus-Robert Müller, Lior Wolf • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Explanation Faithfulness | Med-BIOS | Delta AF | 5.305 | 24 |
| Explanation Faithfulness | Emotion | Delta AF Score | 5.199 | 24 |
| Explanation Faithfulness | SNLI | Delta AF | 0.731 | 24 |
| Explanation Faithfulness | SST-2 | Delta AF | 0.82 | 24 |
| Perturbation Test | ImageNet (val) | Neg Score | 33.24 | 18 |
| Perturbation Test | ImageNet (test) | AOPC | 0.614 | 18 |
| Activation Task | IMDB | AUAC | 93.9 | 9 |
| Activation Task | SST-2 | AUAC | 90.8 | 9 |
| Pruning | IMDB (test) | AU-MSE | 0.65 | 9 |
| Pruning | SST-2 (test) | AU-MSE | 1.56 | 9 |
Showing 10 of 28 rows

Other info

Code
