
AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers

About

Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process. However, achieving faithful attributions for the entirety of a black-box transformer model while maintaining computational efficiency is an unsolved challenge. By extending the Layer-wise Relevance Propagation attribution method to handle attention layers, we address these challenges effectively. While partial solutions exist, our method is the first to faithfully and holistically attribute not only input but also latent representations of transformer models, with computational efficiency similar to that of a single backward pass. Through extensive evaluations against existing methods on LLaMa 2, Mixtral 8x7b, Flan-T5 and vision transformer architectures, we demonstrate that our proposed approach surpasses alternative methods in terms of faithfulness and enables the understanding of latent representations, opening up the door for concept-based explanations. We provide an LRP library at https://github.com/rachtibat/LRP-eXplains-Transformers.
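To make the "single backward pass" efficiency concrete, the sketch below shows how a classic LRP rule can be realized in PyTorch via the standard gradient-times-input trick. This is a generic illustration of the epsilon-rule for a linear layer, not the paper's attention-specific rules or the API of the linked library; the function name, epsilon value, and overall structure are assumptions chosen for readability.

```python
import torch

def lrp_epsilon_linear(layer: torch.nn.Linear, x: torch.Tensor,
                       relevance_out: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Redistribute output relevance R_j of a linear layer to its inputs
    with the classic LRP epsilon-rule
        R_i = x_i * sum_j w_ij * R_j / (z_j + eps * sign(z_j)),
    implemented with the gradient-times-input trick, so the cost is that
    of one extra backward pass through the layer (illustrative sketch)."""
    x = x.detach().requires_grad_(True)
    z = layer(x)                                  # layer outputs z_j
    z_stab = z + eps * z.sign()                   # stabilised denominator
    # d/dx_i of sum_j z_j * c_j equals sum_j w_ij * c_j, so backpropagating
    # c_j = R_j / (z_j + eps) and multiplying by x_i yields the rule above.
    (z * (relevance_out / z_stab).detach()).sum().backward()
    return x * x.grad
```

AttnLRP's contribution lies in deriving analogous propagation rules for the non-linear parts of attention layers, which go beyond the simple linear-layer rule sketched here; the released library implements those rules for the transformer architectures evaluated in the paper.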

Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek · 2024

Related benchmarks

Task                 | Dataset            | Result     | Rank
Localization         | ImageNet-1k (val)  | -          | 79
Extractive QA        | SQuAD v2           | TGS 51.7   | 10
Image Classification | CIFAR-10           | ABPC 146   | 4
Text Classification  | IMDB               | ABPC 3.73  | 4
Image Classification | ImageNette 320     | ABPC 1.69  | 4
Causal LM            | Wikipedia          | ABPC 3.16  | 4
