
Learning to Explain: Supervised Token Attribution from Transformer Attention Patterns

About

Explainable AI (XAI) has become critical as transformer-based models are deployed in high-stakes applications including healthcare, legal systems, and financial services, where opacity hinders trust and accountability. Transformers' self-attention mechanisms have proven valuable for model interpretability, with attention weights successfully used to understand model focus and behavior (Xu et al., 2015; Wiegreffe and Pinter, 2019). However, existing attention-based explanation methods rely on manually defined aggregation strategies and fixed attribution rules (Abnar and Zuidema, 2020a; Chefer et al., 2021), while model-agnostic approaches (LIME, SHAP) treat the model as a black box and incur significant computational costs through input perturbation. We introduce Explanation Network (ExpNet), a lightweight neural network that learns an explicit mapping from transformer attention patterns to token-level importance scores. Unlike prior methods, ExpNet discovers optimal attention feature combinations automatically rather than relying on predetermined rules. We evaluate ExpNet in a challenging cross-task setting and benchmark it against a broad spectrum of model-agnostic methods and attention-based techniques spanning four methodological families.
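The abstract describes ExpNet only at a high level: a lightweight network that maps attention patterns to per-token importance scores. The sketch below is an illustration of that idea, not the paper's actual architecture; the feature choices (attention received per token, attention paid by the first token), the MLP shape, and all names (`attention_features`, `ExpNetSketch`) are assumptions made for this example.

```python
import numpy as np

def attention_features(attn):
    """Build per-token features from a stack of attention maps.

    attn: array of shape (layers, heads, seq, seq), rows summing to 1.
    Returns an array of shape (seq, 2 * layers * heads): for each token,
    the average attention it receives and the attention the first
    ([CLS]-like) token pays to it, per layer and head.
    """
    L, H, S, _ = attn.shape
    received = attn.mean(axis=2)               # (L, H, S) attention received
    from_first = attn[:, :, 0, :]              # (L, H, S) attention from token 0
    feats = np.concatenate([received, from_first], axis=1)  # (L, 2H, S)
    return feats.reshape(L * 2 * H, S).T       # (S, 2*L*H)

class ExpNetSketch:
    """Tiny two-layer MLP scoring each token's importance (forward pass only)."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, feats):
        h = np.maximum(feats @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return (h @ self.W2 + self.b2).squeeze(-1)       # (seq,) scores

# Toy example: 12 layers, 12 heads, an 8-token sequence.
rng = np.random.default_rng(1)
attn = rng.random((12, 12, 8, 8))
attn /= attn.sum(axis=-1, keepdims=True)  # normalize rows like softmax output
feats = attention_features(attn)           # (8, 288)
scores = ExpNetSketch(feats.shape[1])(feats)  # one importance score per token
```

In a trained version, the MLP weights would be fit against supervision such as human rationales, which is what distinguishes a learned mapping from the fixed aggregation rules (e.g. attention rollout) the abstract contrasts against.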

George Mihaila • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Grammatical Acceptability | CoLA (held-out) | F1 Score | 46.8 | 14
Hate Speech Detection | HateXplain (held-out) | F1 Score | 47.3 | 14
Sentiment Analysis | SST-2 (held-out) | F1 Score | 39.8 | 14
Rationale Generation | SST-2, CoLA, and HateXplain (test) | Throughput (ex/s) | 13.889 | 13
