
Hessian-Enhanced Token Attribution (HETA): Interpreting Autoregressive LLMs

About

Attribution methods seek to explain language model predictions by quantifying the contribution of input tokens to generated outputs. However, most existing techniques are designed for encoder-based architectures and rely on linear approximations that fail to capture the causal and semantic complexities of autoregressive generation in decoder-only models. To address these limitations, we propose Hessian-Enhanced Token Attribution (HETA), a novel attribution framework tailored for decoder-only language models. HETA combines three complementary components: a semantic transition vector that captures token-to-token influence across layers, Hessian-based sensitivity scores that model second-order effects, and KL divergence to measure information loss when tokens are masked. This unified design produces context-aware, causally faithful, and semantically grounded attributions. Additionally, we introduce a curated benchmark dataset for systematically evaluating attribution quality in generative settings. Empirical evaluations across multiple models and datasets demonstrate that HETA consistently outperforms existing methods in attribution faithfulness and alignment with human annotations, establishing a new standard for interpretability in autoregressive language models.
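Two of the components named in the abstract, KL divergence under token masking and Hessian-based (second-order) sensitivity, can be illustrated on a toy model. The sketch below is not the paper's implementation: it uses a hypothetical two-dimensional "decoder" whose next-token logits are a linear function of summed context embeddings, stand-in vocabulary and weights, and finite differences in place of an exact Hessian.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL(p || q); assumes q has full support, as softmax outputs do
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical stand-in "decoder": next-token logits are a linear map of the
# summed context-token embeddings (not the paper's model).
EMB = {"the": [1.0, 0.0], "cat": [0.2, 1.0], "sat": [0.5, 0.5]}
W = [[1.0, 0.3], [0.1, 1.2], [0.4, 0.4]]  # vocab_size x emb_dim

def logits(context, scale=None):
    # scale: optional per-position multiplier on each token's embedding,
    # used below as the perturbation variable for sensitivity probes
    h = [0.0, 0.0]
    for i, tok in enumerate(context):
        s = 1.0 if scale is None else scale[i]
        for d in range(2):
            h[d] += s * EMB[tok][d]
    return [sum(w[d] * h[d] for d in range(2)) for w in W]

def kl_attribution(context):
    """Score each token by the KL divergence between the full-context and
    token-masked next-token distributions (information lost when masking)."""
    p_full = softmax(logits(context))
    scores = []
    for i in range(len(context)):
        masked = context[:i] + context[i + 1:]
        scores.append(kl(p_full, softmax(logits(masked))))
    return scores

def hessian_sensitivity(context, target=0, eps=1e-3):
    """Diagonal second-order sensitivity of the target token's log-probability
    to scaling each context token, via a central finite difference."""
    def logp(scale):
        return math.log(softmax(logits(context, scale))[target])
    base = [1.0] * len(context)
    f0 = logp(base)
    out = []
    for i in range(len(context)):
        up = base[:]; up[i] += eps
        dn = base[:]; dn[i] -= eps
        out.append((logp(up) - 2 * f0 + logp(dn)) / eps ** 2)
    return out

ctx = ["the", "cat", "sat"]
print([round(s, 4) for s in kl_attribution(ctx)])
print([round(s, 4) for s in hessian_sensitivity(ctx)])
```

In a real decoder-only LM the masked distribution would come from a second forward pass and the second-order terms from Hessian-vector products rather than finite differences; the sketch only shows how the two signals are defined and combined per token.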

Vishal Pramanik, Maisha Maliha, Nathaniel D. Bastian, Sumit Kumar Jha • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Faithfulness Evaluation | TellMeWhy | AUC π-Soft-NS | 2.25 | 67 |
| Faithfulness Evaluation | WikiBio | AUC π-Soft-NS | 2.3 | 67 |
| Attribution Alignment | Curated Attribution Dataset (NarrativeQA + SciQ) | DSA (Dependent Sentence Attribution) | 5.1 | 40 |
| Attribution Faithfulness | LongRA | Soft-NC Score | 10.8 | 40 |
| Causal Attribution | Causal and Downstream Robustness Ablation Suite (averaged over LLaMA-3.1 70B, Phi-3 14B, GPT-J 6B, Qwen2.5 3B) | Causal Pass@5 | 86 | 14 |
| Decoding Stability | Causal and Downstream Robustness Ablation Suite (averaged over 4 models) | Decoding Δ% | 0.8 | 14 |
| Fact Checking | Causal and Downstream Robustness Ablation Suite (averaged over 4 models) | Fact EMΔ | 3.7 | 14 |
| Span Extraction | Causal and Downstream Robustness Ablation Suite | Span F1 | 81 | 14 |
| Tool Use | Causal and Downstream Robustness Ablation Suite (averaged over 4 models) | Tool Hit@1Δ | 4.1 | 14 |
| Attribution Faithfulness Evaluation | LongRA | MoRF | 44 | 6 |
