Learning Self-Interpretation from Interpretability Artifacts: Training Lightweight Adapters on Vector-Label Pairs

About

Self-interpretation methods prompt language models to describe their own internal states, but they remain unreliable due to hyperparameter sensitivity. We show that training lightweight adapters on interpretability artifacts, while keeping the LM entirely frozen, yields reliable self-interpretation across tasks and model families. A scalar affine adapter with just $d_\text{model}+1$ parameters suffices: trained adapters generate sparse autoencoder feature labels that outperform the training labels themselves (71% vs. 63% generation scoring at 70B scale), identify topics with 94% recall@1 versus 1% for untrained baselines, and decode bridge entities in multi-hop reasoning that appear in neither prompt nor response, surfacing implicit reasoning without chain-of-thought. The learned bias vector alone accounts for 85% of the improvement, and simpler adapters generalize better than more expressive alternatives. Controlling for model knowledge via prompted descriptions, we find that self-interpretation gains outpace capability gains from 7B to 72B parameters. Our results demonstrate that self-interpretation improves with scale, without modifying the model being interpreted.
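
For concreteness, the sketch below shows what a scalar affine adapter with $d_\text{model}+1$ trainable parameters (one scale plus a bias vector) could look like. It assumes a PyTorch implementation; the class name, variable names, and the way the adapted vector is consumed are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class ScalarAffineAdapter(nn.Module):
    """Scalar affine adapter: one learned scale plus a learned bias vector,
    i.e. d_model + 1 trainable parameters in total. The language model being
    interpreted stays entirely frozen; only this adapter would be trained on
    (activation vector, label) pairs."""

    def __init__(self, d_model: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(()))        # 1 parameter
        self.bias = nn.Parameter(torch.zeros(d_model))   # d_model parameters

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: activation / feature vector(s) of shape (..., d_model),
        # e.g. an SAE feature direction to be described by the frozen LM.
        return self.scale * h + self.bias
```

In such a setup, only `scale` and `bias` receive gradients: the adapted vector would be fed to the frozen LM and the adapter optimized against the target label's token loss. The exact injection point and loss are assumptions here, not details taken from the paper; note that the bias vector alone carries most of the reported gain.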

Keenan Pepper, Alex McKenzie, Florin Pop, Stijn Servaes, Martin Leitgab, Mike Vaiana, Judd Rosenblatt, Michael S. A. Graziano, Diogo de Lucena • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| SAE latent interpretation | Llama Scope SAEs (test) | Hit Rate | 0.501 | 9 |
| SAE latent interpretation | Goodfire SAEs (test) | Hit Rate | 62.8 | 9 |
| Generation scoring | Gemma Scope SAE latents | Hit Rate | 0.425 | 9 |
| Embedding retrieval | Wikipedia Topics (test) | R@1 | 9.37e+3 | 7 |
| SAE label evaluation | Goodfire SAE latents (held-out) | Hit Rate | 71.4 | 5 |
| Topic Retrieval | Wikipedia | R@1 | 52.6 | 4 |
