
Learning a Generative Meta-Model of LLM Activations

About

Existing approaches for analyzing neural network activations, such as PCA and sparse autoencoders, rely on strong structural assumptions. Generative models offer an alternative: they can uncover structure without such assumptions and act as priors that improve intervention fidelity. We explore this direction by training diffusion models on one billion residual stream activations, creating "meta-models" that learn the distribution of a network's internal states. We find that diffusion loss decreases smoothly with compute and reliably predicts downstream utility. In particular, applying the meta-model's learned prior to steering interventions improves fluency, with larger gains as loss decreases. Moreover, the meta-model's neurons increasingly isolate concepts into individual units, with sparse probing scores that scale as loss decreases. These results suggest generative meta-models offer a scalable path toward interpretability without restrictive structural assumptions. Project page: https://generative-latent-prior.github.io.
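To make the core idea concrete, here is a minimal sketch of training a diffusion model on residual stream activation vectors. Everything here is an assumption for illustration: the denoiser architecture (a small MLP named ActivationDenoiser), the DDPM-style linear noise schedule, and the dimensions are all hypothetical stand-ins, not the paper's actual setup.

```python
# Hypothetical sketch: a diffusion "meta-model" over activation vectors.
# Architecture, schedule, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

d_model = 768          # hypothetical residual stream width
T = 1000               # number of diffusion timesteps

# DDPM-style linear beta schedule and cumulative signal fractions.
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class ActivationDenoiser(nn.Module):
    """Tiny MLP that predicts the noise added to an activation vector."""
    def __init__(self, d_model: int, hidden: int = 2048):
        super().__init__()
        self.time_embed = nn.Embedding(T, hidden)
        self.net = nn.Sequential(
            nn.Linear(d_model + hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, d_model),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, self.time_embed(t)], dim=-1))

model = ActivationDenoiser(d_model)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

def training_step(x0: torch.Tensor) -> torch.Tensor:
    """One DDPM step: corrupt clean activations x0, regress the noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].unsqueeze(-1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward (noising) process
    loss = nn.functional.mse_loss(model(x_t, t), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss

# x0 would be batches of residual stream activations harvested from an LLM,
# e.g. shape (batch, d_model), typically standardized before training.
```

Once trained, such a model defines a prior over plausible internal states; a steered activation could, for example, be lightly noised and then denoised back toward the learned distribution, which is one way a generative prior might improve intervention fidelity.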

Grace Luo, Jiahai Feng, Trevor Darrell, Alec Radford, Jacob Steinhardt • 2026

Related benchmarks

Task          Dataset                                        Result         Rank
1-D Probing   113 binary classification tasks 2025 (test)    Probe AUC 87   8
Probing       113 binary tasks (val)                         Probe AUC 94   8
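For readers unfamiliar with the "1-D Probing" metric in the table above, the sketch below shows one common way such a score is computed: treat each individual neuron's activation as a one-feature classifier for a binary concept and report the best AUC. This is a hedged illustration of the evaluation style, not the benchmark's actual code; the data loading and function names are hypothetical.

```python
# Illustrative 1-D (single-neuron) probing: score every neuron as a
# one-feature binary classifier and keep the best AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def best_single_neuron_auc(acts: np.ndarray, labels: np.ndarray) -> tuple[int, float]:
    """acts: (n_examples, n_neurons); labels: (n_examples,) in {0, 1}.

    Uses each neuron's raw activation as the classification score; takes
    the max over sign flips so anti-correlated neurons are credited too.
    """
    best_idx, best_auc = -1, 0.0
    for j in range(acts.shape[1]):
        auc = roc_auc_score(labels, acts[:, j])
        auc = max(auc, 1.0 - auc)  # orientation-invariant
        if auc > best_auc:
            best_idx, best_auc = j, auc
    return best_idx, best_auc

# Synthetic example standing in for meta-model neuron activations:
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 512))
labels = (acts[:, 7] + 0.5 * rng.normal(size=1000) > 0).astype(int)
print(best_single_neuron_auc(acts, labels))  # neuron 7 should score highest
```

Under this kind of evaluation, higher AUC on held-out examples indicates that single neurons increasingly isolate individual concepts, consistent with the abstract's sparse probing claim.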
