
Improving Dictionary Learning with Gated Sparse Autoencoders

About

Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised discovery of interpretable features in language models' (LMs) activations, by finding sparse, linear reconstructions of LM activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to encourage sparsity introduces many undesirable biases, such as shrinkage -- systematic underestimation of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable reconstruction fidelity.

Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, Neel Nanda • 2024
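
To make the abstract's key idea concrete, here is a minimal PyTorch sketch of a gated SAE forward pass and training loss. It is an illustration, not the authors' released code: the names (GatedSAE, gated_sae_loss, d_model, d_dict, l1_coeff) are invented for this example, while the gate/magnitude split, the weight tying via a per-feature rescaling r_mag, the L1 penalty applied only to the gate path, and the frozen-decoder auxiliary loss follow the mechanism the paper describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedSAE(nn.Module):
    """Minimal gated sparse autoencoder sketch (names are illustrative).

    Two encoder paths share directions: a gate path decides WHICH features
    fire, and a magnitude path estimates HOW STRONGLY they fire. The L1
    sparsity penalty touches only the gate path, so magnitude estimates
    are not shrunk toward zero.
    """

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.W_gate = nn.Parameter(torch.randn(d_model, d_dict) * 0.01)
        self.b_gate = nn.Parameter(torch.zeros(d_dict))
        # The magnitude path reuses the gate directions up to a per-feature
        # rescaling, so both paths pick out the same dictionary elements.
        self.r_mag = nn.Parameter(torch.zeros(d_dict))
        self.b_mag = nn.Parameter(torch.zeros(d_dict))
        self.W_dec = nn.Parameter(torch.randn(d_dict, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        x_centered = x - self.b_dec
        pi_gate = x_centered @ self.W_gate + self.b_gate  # gate pre-activations
        f_gate = (pi_gate > 0).float()                    # binary: which features fire
        W_mag = self.W_gate * torch.exp(self.r_mag)       # tied directions, rescaled
        f_mag = F.relu(x_centered @ W_mag + self.b_mag)   # unpenalized magnitudes
        f = f_gate * f_mag                                # gated feature activations
        x_hat = f @ self.W_dec + self.b_dec
        return x_hat, f, pi_gate


def gated_sae_loss(model: GatedSAE, x: torch.Tensor, l1_coeff: float) -> torch.Tensor:
    x_hat, _, pi_gate = model(x)
    recon = (x - x_hat).pow(2).sum(-1).mean()
    # Sparsity penalty on the rectified gate pre-activations only.
    sparsity = F.relu(pi_gate).sum(-1).mean()
    # Auxiliary reconstruction through a frozen copy of the decoder, so the
    # gate path still receives a gradient despite the non-differentiable step.
    x_hat_aux = F.relu(pi_gate) @ model.W_dec.detach() + model.b_dec.detach()
    aux = (x - x_hat_aux).pow(2).sum(-1).mean()
    return recon + l1_coeff * sparsity + aux
```

Because the binary gate blocks gradients, the auxiliary term is what trains the gate path; and because the L1 penalty never touches f_mag, the systematic underestimation of activation magnitudes (shrinkage) that the abstract describes is avoided by construction.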

Related benchmarks

Task                      Dataset         Result            Rank
Concept Identifiability   Bias-in-Bios    MCC 0.7773        12
Concept Identifiability   LANG (1, 1)     MCC 0.8219        12
Concept Identifiability   BINARY (2, 2)   MCC 0.8015        12
Concept Identifiability   TruthfulQA      MCC 0.7128        12
Concept Identifiability   CORR (2, 1)     MCC 54.9          12
Concept Recovery          Sycophancy      Mean MCC 0.4511   6
Concept Recovery          Refusal         Mean MCC 0.4381   6
Concept Identifiability   Refusal         MCC 0.5699        6
Concept Identifiability   GENDER (1, 1)   MCC 81.83         6
Concept Identifiability   Sycophancy      MCC 0.5177        6

(Showing 10 of 11 rows)
