
BatchTopK Sparse Autoencoders

About

Sparse autoencoders (SAEs) have emerged as a powerful tool for interpreting language model activations by decomposing them into sparse, interpretable features. A popular approach is the TopK SAE, which uses a fixed number of the most active latents per sample to reconstruct the model activations. We introduce BatchTopK SAEs, a training method that improves upon TopK SAEs by relaxing the top-k constraint to the batch level, allowing a variable number of latents to be active per sample. As a result, BatchTopK adaptively allocates more or fewer latents depending on the sample, improving reconstruction without sacrificing average sparsity. We show that BatchTopK SAEs consistently outperform TopK SAEs in reconstructing activations from GPT-2 Small and Gemma 2 2B, and achieve comparable performance to state-of-the-art JumpReLU SAEs. However, an advantage of BatchTopK is that the average number of active latents can be directly specified, rather than approximately tuned through a costly hyperparameter sweep. We provide code for training and evaluating BatchTopK SAEs at https://github.com/bartbussmann/BatchTopK
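The core difference between the two selection rules can be sketched as follows. This is a minimal NumPy illustration of the idea only, not the authors' implementation (the linked repository contains the actual training code): per-sample TopK keeps exactly k latents in every row, while BatchTopK keeps the batch_size × k largest activations across the whole batch, so individual samples may use more or fewer latents while the batch average stays at k.

```python
import numpy as np

def topk_mask(acts, k):
    """Per-sample TopK: keep exactly the k largest activations in each row."""
    idx = np.argpartition(acts, -k, axis=1)[:, -k:]
    mask = np.zeros_like(acts, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=1)
    return acts * mask

def batchtopk_mask(acts, k):
    """BatchTopK: keep the batch_size * k largest activations across the
    whole flattened batch, so the number of active latents per sample
    can vary while the batch-average sparsity stays at k."""
    n_keep = acts.shape[0] * k
    flat = acts.ravel()
    keep_idx = np.argpartition(flat, -n_keep)[-n_keep:]
    mask = np.zeros_like(flat, dtype=bool)
    mask[keep_idx] = True
    return (flat * mask).reshape(acts.shape)
```

For example, with a batch of 4 samples and k = 3, `topk_mask` activates exactly 3 latents per row, whereas `batchtopk_mask` activates 12 latents in total, distributed unevenly across rows depending on activation magnitudes.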

Bart Bussmann, Patrick Leask, Neel Nanda • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 | Accuracy | 27.7 | 302 |
| Sparse Autoencoder Concept Alignment | CUB | Sparsity | 0.988 | 18 |
| Activation Reconstruction | Pythia model activations | Pearson Correlation Coefficient | 0.7037 | 18 |
| Concept Component Analysis | Concept Component Analysis Evaluation Set (test) | Pearson Correlation (MPC) | 0.6926 | 18 |
| Image Classification | ImageNet | Accuracy | 17.8 | 6 |
