Control Reinforcement Learning: Interpretable Token-Level Steering of LLMs via Sparse Autoencoder Features
About
Sparse autoencoders (SAEs) decompose language model activations into interpretable features, but existing methods reveal only which features activate, not which change model outputs when amplified. We introduce Control Reinforcement Learning (CRL), which trains a policy to select SAE features for steering at each token, producing interpretable intervention logs: the learned policy identifies features that change model outputs when amplified. Adaptive Feature Masking encourages diverse feature discovery while preserving single-feature interpretability. The framework yields new analysis capabilities: branch-point tracking locates tokens where feature choice determines output correctness; critic trajectory analysis separates policy limitations from value-estimation errors; and layer-wise comparison reveals syntactic features in early layers and semantic features in later layers. On Gemma 2 2B across MMLU, BBQ, GSM8K, HarmBench, and XSTest, CRL achieves improvements while providing per-token intervention logs. These results establish learned feature steering as a mechanistic interpretability tool that complements static feature analysis with dynamic intervention probes.
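The core intervention described above, a policy that picks one SAE feature per token and amplifies its decoder direction in the residual stream, can be sketched as follows. This is a minimal illustration with toy random data, not the paper's implementation: the SAE weights, the `policy` (here a simple argmax stand-in for the RL-trained policy), and the amplification strength `alpha` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features, n_tokens = 8, 16, 5

# Hypothetical stand-ins for SAE decoder directions and per-token activations.
W_dec = rng.normal(size=(n_features, d_model))
acts = rng.normal(size=(n_tokens, d_model))

def encode(h, W_dec):
    """Toy SAE encoder: ReLU of the projection onto decoder directions."""
    return np.maximum(h @ W_dec.T, 0.0)

def policy(features):
    """Placeholder policy: pick the most active feature at this token.
    CRL trains this selection with RL; argmax is only an illustrative stand-in."""
    return int(np.argmax(features))

alpha = 4.0  # amplification strength (hypothetical hyperparameter)
log = []     # per-token intervention log: (token index, chosen feature)
steered = acts.copy()
for t in range(n_tokens):
    f = policy(encode(acts[t], W_dec))
    steered[t] = acts[t] + alpha * W_dec[f]  # add the amplified feature direction
    log.append((t, f))

print(log)
```

The `log` list is what makes the intervention interpretable: each entry records which single feature was amplified at which token, so a later pass (e.g. branch-point tracking) can ask where a different feature choice would have changed the output.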
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multiple-choice Question Answering | MMLU | Accuracy | 55.37 | 148 |
| Multiple-choice Question Answering | MMLU-Pro | MMLU-Pro Overall Accuracy | 30.49 | 116 |
| Over-refusal | XSTest | -- | -- | 42 |
| Bias QA | BBQ Ambig | Accuracy | 85.04 | 4 |
| Adversarial safety | HarmBench | Accuracy | 49.12 | 2 |
| Bias QA | BBQ Disambig | Accuracy | 84.85 | 2 |
| Math Reasoning | GSM8K | Accuracy | 55.65 | 2 |
| Short-form QA | SimpleQA | Accuracy | 4 | 2 |
| Bias Evaluation | BBQ Disambiguated | -- | -- | 1 |
| Multiple-choice Question Answering | MMLU | -- | -- | 1 |