Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2

About

Sparse autoencoders (SAEs) are an unsupervised method for learning a sparse decomposition of a neural network's latent representations into seemingly interpretable features. Despite recent excitement about their potential, research applications outside of industry are limited by the high cost of training a comprehensive suite of SAEs. In this work, we introduce Gemma Scope, an open suite of JumpReLU SAEs trained on all layers and sub-layers of the Gemma 2 2B and 9B base models and select layers of the Gemma 2 27B base model. We primarily train SAEs on the Gemma 2 pre-trained models, but additionally release SAEs trained on instruction-tuned Gemma 2 9B for comparison. We evaluate the quality of each SAE on standard metrics and release these results. We hope that by releasing these SAE weights, we can help make more ambitious safety and interpretability research easier for the community. Weights and a tutorial can be found at https://huggingface.co/google/gemma-scope, and an interactive demo at https://www.neuronpedia.org/gemma-scope.
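
The JumpReLU activation mentioned above differs from a plain ReLU in that each SAE feature has a learned threshold: pre-activations at or below that threshold are zeroed outright rather than merely clipped at zero. Below is a minimal Python sketch of loading one of the released SAEs and running its encode/decode pass. The repository and file names are illustrative, and the parameter keys (W_enc, b_enc, W_dec, b_dec, threshold) are assumptions based on the format used in the released tutorial; consult the Hugging Face page above for the authoritative layout.

import numpy as np
from huggingface_hub import hf_hub_download

# Illustrative path: a residual-stream SAE for Gemma 2 2B at layer 20.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_20/width_16k/average_l0_71/params.npz",
)
params = np.load(path)

def jumprelu_sae(x):
    """Encode a model activation vector into sparse features, then decode.

    JumpReLU: a pre-activation is kept only if it exceeds its learned
    per-feature threshold; anything at or below the threshold is zeroed.
    """
    pre_acts = x @ params["W_enc"] + params["b_enc"]
    feats = pre_acts * (pre_acts > params["threshold"])  # JumpReLU gate
    recon = feats @ params["W_dec"] + params["b_dec"]
    return feats, recon

Standard SAE evaluations, including those reported in the paper, trade off the reconstruction fidelity of recon against the sparsity (L0) of feats.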

Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca Dragan, Rohin Shah, Neel Nanda • 2024

Related benchmarks

Task                     Dataset          Metric  Result  Rank
Concept Identifiability  LANG (1, 1)      MCC     0.8614  12
Concept Identifiability  BINARY (2, 2)    MCC     0.8387  12
Concept Identifiability  TruthfulQA       MCC     0.737   12
Concept Identifiability  CORR (2, 1)      MCC     57.6    12
Concept Identifiability  Bias-in-Bios     MCC     0.7385  12
Concept Identifiability  GENDER (1, 1)    MCC     85.42   6
Concept Identifiability  Sycophancy       MCC     0.6344  6
Concept Identifiability  Refusal          MCC     0.6119  6
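
The MCC results above are presumably Matthews correlation coefficients (the CORR and GENDER rows appear to be reported on a 0-100 scale rather than 0-1, as listed on the source page). As a minimal sketch, the coefficient for a binary task can be computed from confusion-matrix counts as follows; the function name and example counts are hypothetical.

import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts: 85 TP, 90 TN, 10 FP, 15 FN -> MCC of about 0.751.
print(matthews_corrcoef(85, 90, 10, 15))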
