
Control Reinforcement Learning: Interpretable Token-Level Steering of LLMs via Sparse Autoencoder Features

About

Sparse autoencoders (SAEs) decompose language model activations into interpretable features, but existing methods reveal only which features activate, not which change model outputs when amplified. We introduce Control Reinforcement Learning (CRL), which trains a policy to select SAE features for steering at each token, producing interpretable intervention logs: the learned policy identifies features that change model outputs when amplified. Adaptive Feature Masking encourages diverse feature discovery while preserving single-feature interpretability. The framework yields new analysis capabilities: branch point tracking locates tokens where feature choice determines output correctness; critic trajectory analysis separates policy limitations from value estimation errors; layer-wise comparison reveals syntactic features in early layers and semantic features in later layers. On Gemma 2 2B across MMLU, BBQ, GSM8K, HarmBench, and XSTest, CRL achieves improvements while providing per-token intervention logs. These results establish learned feature steering as a mechanistic interpretability tool that complements static feature analysis with dynamic intervention probes.
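The core mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the ReLU SAE, the random weights, the amplification scale `alpha`, and the greedy stand-in policy (reusing raw SAE activations as logits) are all assumptions made for the sketch. It shows the shape of the idea: at each token, a policy picks one SAE feature, its decoder direction is amplified into the residual activation, and the chosen feature index is recorded as a per-token intervention log.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32

# Hypothetical SAE weights (random stand-ins for trained parameters).
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))

def sae_encode(h):
    """ReLU SAE encoder: residual activation -> sparse feature vector."""
    return np.maximum(h @ W_enc, 0.0)

def steer_token(h, policy_logits, alpha=4.0):
    """Amplify the policy-selected SAE feature by adding its decoder
    direction to the residual activation h. Returns the steered
    activation and the chosen feature index (the intervention log entry)."""
    idx = int(np.argmax(policy_logits))   # greedy policy choice
    steered = h + alpha * W_dec[idx]      # amplify chosen feature direction
    return steered, idx

# Per-token loop: each token gets its own feature choice,
# so the run yields one interpretable log entry per token.
tokens = [rng.normal(size=d_model) for _ in range(3)]
log = []
for h in tokens:
    policy_logits = sae_encode(h)         # stand-in policy: raw SAE activations
    h_steered, chosen = steer_token(h, policy_logits)
    log.append(chosen)
```

In the paper's setting the policy is trained with RL rather than being this greedy heuristic, but the intervention log has the same structure: one selected feature per token, which is what makes branch point tracking and layer-wise comparison possible.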

Seonglae Cho, Zekun Wu, Adriano Koshiyama • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multiple-choice Question Answering | MMLU | Accuracy | 55.37 | 148 |
| Multiple-choice Question Answering | MMLU-Pro | MMLU-Pro Overall Accuracy | 30.49 | 116 |
| Over-refusal | XSTest | -- | -- | 42 |
| Bias QA | BBQ Ambig | Accuracy | 85.04 | 4 |
| Adversarial safety | HarmBench | Accuracy | 49.12 | 2 |
| Bias QA | BBQ Disambig | Accuracy | 84.85 | 2 |
| Math Reasoning | GSM8K | Accuracy | 55.65 | 2 |
| Short-form QA | SimpleQA | Accuracy | 4 | 2 |
| Bias Evaluation | BBQ Disambiguated | -- | -- | 1 |
| Multiple-choice Question Answering | MMLU | -- | -- | 1 |
Showing 10 of 13 rows
