
Attribution-Guided Decoding

About

The capacity of Large Language Models (LLMs) to follow complex instructions and generate factually accurate text is critical for their real-world application. However, standard decoding methods often fail to robustly satisfy these requirements, while existing control techniques frequently degrade general output quality. In this work, we introduce Attribution-Guided Decoding (AGD), an interpretability-based decoding strategy. Instead of directly manipulating model activations, AGD considers a set of high-probability output token candidates and selects the one that exhibits the highest attribution to a user-defined Region of Interest (ROI). This ROI can be flexibly defined over different parts of the model's input or internal components, allowing AGD to steer generation towards various desirable behaviors. We demonstrate AGD's efficacy across three challenging domains. For instruction following, we show that AGD significantly boosts adherence (e.g., improving the overall success rate on Llama 3.1 from 66.0% to 79.1%). For knowledge-intensive tasks, we show that guiding generation towards usage of internal knowledge components or contextual sources can reduce hallucinations and improve factual accuracy in both closed-book and open-book settings. Furthermore, we propose an adaptive, entropy-based variant of AGD that mitigates quality degradation and reduces computational overhead by applying guidance only when the model is uncertain. Our work presents a versatile, more interpretable, and effective method for enhancing the reliability of modern LLMs.
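The selection rule described above can be sketched in a few lines: among the top-k high-probability candidates, pick the token with the highest attribution to the ROI, and (in the adaptive variant) only apply this guidance when the next-token entropy indicates the model is uncertain. This is a minimal toy sketch, not the paper's implementation; the `roi_attribution` callable, the entropy threshold, and the specific scores are hypothetical stand-ins for the attribution method and hyperparameters used in the paper.

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def agd_select(logits, roi_attribution, k=5, entropy_threshold=1.0):
    """Attribution-Guided Decoding step (toy sketch).

    logits: next-token scores from the model.
    roi_attribution: hypothetical callable mapping a candidate token id
        to its attribution score w.r.t. the Region of Interest; in the
        paper this comes from an attribution method over the input or
        internal components.
    """
    probs = softmax(logits)
    # Adaptive variant: when the model is confident (low entropy),
    # skip guidance and decode greedily, saving attribution calls.
    if entropy(probs) < entropy_threshold:
        return max(range(len(logits)), key=lambda i: probs[i])
    # Restrict to the top-k high-probability candidates ...
    topk = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    # ... and select the one with the highest ROI attribution.
    return max(topk, key=roi_attribution)
```

Restricting the choice to high-probability candidates is what keeps general output quality intact: guidance only reranks tokens the model already considers plausible, rather than manipulating activations directly.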

Piotr Komorowski, Elena Golimblevskaia, Reduan Achtibat, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | HotpotQA | Recall | 89.5 | 42 |
| Question Answering | TriviaQA | Recall (%) | 92.3 | 36 |
| Question Answering | NQ | Recall (%) | 90.6 | 36 |
| Instruction Following | IHEval | PLA | 86.7 | 21 |
| Multi-turn Instruction Following | SysBench | CSR | 74.3 | 21 |
| Question Answering | TriviaQA Open-book | Recall | 91.4 | 5 |
| Question Answering | Natural Questions (NQ) Open-book | Recall | 87.9 | 5 |
| Question Answering | HotPotQA (HPQA) distractor Open-book | Recall | 87.9 | 5 |
| Closed-book Question Answering | HotpotQA | Recall | 39.6 | 4 |
| Question Answering | TriviaQA Closed-book | Recall | 82.4 | 4 |

Showing 10 of 11 rows.
