
Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse Autoencoders

About

Sparse autoencoders (SAEs) are a promising unsupervised approach for identifying causally relevant and interpretable linear features in a language model's (LM) activations. To be useful for downstream tasks, SAEs need to decompose LM activations faithfully; yet to be interpretable the decomposition must be sparse -- two objectives that are in tension. In this paper, we introduce JumpReLU SAEs, which achieve state-of-the-art reconstruction fidelity at a given sparsity level on Gemma 2 9B activations, compared to other recent advances such as Gated and TopK SAEs. We also show that this improvement does not come at the cost of interpretability through manual and automated interpretability studies. JumpReLU SAEs are a simple modification of vanilla (ReLU) SAEs -- where we replace the ReLU with a discontinuous JumpReLU activation function -- and are similarly efficient to train and run. By utilising straight-through-estimators (STEs) in a principled manner, we show how it is possible to train JumpReLU SAEs effectively despite the discontinuous JumpReLU function introduced in the SAE's forward pass. Similarly, we use STEs to directly train L0 to be sparse, instead of training on proxies such as L1, avoiding problems like shrinkage.

Senthooran Rajamanoharan, Tom Lieberum, Nicolas Sonnerat, Arthur Conmy, Vikrant Varma, János Kramár, Neel Nanda • 2024
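
To make the training recipe in the abstract concrete, here is a minimal PyTorch sketch of a JumpReLU SAE with kernel-based straight-through estimators. This is an illustrative reconstruction, not the authors' code: the class names, the rectangle kernel, the bandwidth value `eps`, and the initialisation scheme are all assumptions (the paper also parameterises the threshold via its logarithm to keep it positive, which is omitted here for brevity).

```python
import torch

def rectangle(u: torch.Tensor) -> torch.Tensor:
    # Rectangle kernel K(u) = 1 if |u| < 1/2, else 0 (one of several valid choices).
    return (u.abs() < 0.5).to(u.dtype)

class JumpReLU(torch.autograd.Function):
    """JumpReLU_theta(z) = z * H(z - theta), with H the Heaviside step.
    The forward pass is discontinuous in theta, so the backward pass
    substitutes the pseudo-derivative -(theta/eps) * K((z - theta)/eps)."""

    @staticmethod
    def forward(ctx, z, theta, eps):
        ctx.save_for_backward(z, theta)
        ctx.eps = eps
        return z * (z > theta).to(z.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        z, theta = ctx.saved_tensors
        grad_z = grad_out * (z > theta).to(z.dtype)  # usual gradient where active
        grad_theta = grad_out * (-theta / ctx.eps) * rectangle((z - theta) / ctx.eps)
        return grad_z, grad_theta.sum(dim=0), None   # theta is per-latent

class Step(torch.autograd.Function):
    """H(z - theta): lets the L0 penalty sum_i H(pi_i - theta_i) pass the
    pseudo-gradient -(1/eps) * K((z - theta)/eps) to the thresholds, so
    sparsity is trained on L0 directly rather than an L1 proxy."""

    @staticmethod
    def forward(ctx, z, theta, eps):
        ctx.save_for_backward(z, theta)
        ctx.eps = eps
        return (z > theta).to(z.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        z, theta = ctx.saved_tensors
        grad_theta = grad_out * (-1.0 / ctx.eps) * rectangle((z - theta) / ctx.eps)
        return None, grad_theta.sum(dim=0), None  # the step is flat in z

class JumpReLUSAE(torch.nn.Module):
    """Encoder -> JumpReLU -> decoder; hypothetical module for illustration."""

    def __init__(self, d_model: int, d_sae: int, eps: float = 1e-3):
        super().__init__()
        self.W_enc = torch.nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = torch.nn.Parameter(torch.zeros(d_sae))
        self.W_dec = torch.nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = torch.nn.Parameter(torch.zeros(d_model))
        self.theta = torch.nn.Parameter(torch.full((d_sae,), 1e-3))  # thresholds
        self.eps = eps  # STE bandwidth (assumed hyperparameter value)

    def forward(self, x: torch.Tensor):
        pre = x @ self.W_enc + self.b_enc                    # pre-activations pi(x)
        feats = JumpReLU.apply(pre, self.theta, self.eps)
        x_hat = feats @ self.W_dec + self.b_dec
        l0 = Step.apply(pre, self.theta, self.eps).sum(-1)   # differentiable L0
        return x_hat, l0

def jumprelu_loss(x, x_hat, l0, lam: float = 1e-3):
    # Squared-error reconstruction plus a direct, STE-trained L0 sparsity term.
    return ((x - x_hat) ** 2).sum(-1).mean() + lam * l0.mean()
```

Note that the sparsity term only produces gradients for the thresholds, and only from pre-activations falling within the kernel's support around theta, so the bandwidth eps trades off the bias and variance of the gradient estimate per batch.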

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Sparse Autoencoder Concept Alignment | CUB | Sparsity | 0.98 | 18 |
| Concept Extraction Consistency | IMDB | MPPC | 99.6 | 14 |
| Concept Extraction Consistency | CoNLL | MPPC | 0.536 | 14 |
| Concept Extraction Consistency | WikiArt | MPPC | 44 | 14 |
| Concept Extraction Consistency | ImageNet | MPPC | 0.341 | 14 |
| Downstream Utility Evaluation | LLM Activations | Sparse Probing Accuracy | 87.9 | 8 |
| Hierarchical Feature Alignment | LLM Activations | Absorption | 98.8 | 8 |
| Feature Interpretability | LLM Activations | AutoInterp Score | 86.1 | 8 |
| Sparse Reconstruction | LLM Activations | L0 | 49.4 | 8 |
| Concept Extraction Consistency | AudioSet | MPPC | 44.9 | 7 |

Showing 10 of 14 rows.
