
Steering Llama 2 via Contrastive Activation Addition

About

We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying their activations during forward passes. CAA computes "steering vectors" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior, such as factual versus hallucinatory responses. During inference, these steering vectors are added at all token positions after the user's prompt with either a positive or negative coefficient, allowing precise control over the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2 Chat using multiple-choice behavioral question datasets and open-ended generation tasks. We demonstrate that CAA significantly alters model behavior, is effective over and on top of traditional methods like finetuning and system prompt design, and minimally reduces capabilities. Moreover, we gain deeper insights into CAA's mechanisms by employing various activation space interpretation methods. CAA accurately steers model outputs and sheds light on how high-level concepts are represented in Large Language Models (LLMs).
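The steering-vector construction described above can be sketched in a few lines. This is a minimal illustration, assuming residual-stream activations have already been extracted as arrays; the function names `compute_steering_vector` and `apply_caa` and the toy shapes are ours, not from the paper's codebase.

```python
import numpy as np

def compute_steering_vector(pos_acts, neg_acts):
    """Average the per-pair difference in residual-stream activations
    between positive and negative examples of a behavior.

    pos_acts, neg_acts: arrays of shape (n_pairs, d_model), where row i
    of each holds activations for the contrastive pair i.
    """
    return np.mean(np.asarray(pos_acts) - np.asarray(neg_acts), axis=0)

def apply_caa(resid, steering_vector, multiplier, prompt_len):
    """Add the scaled steering vector at every token position after the
    user's prompt, leaving the prompt positions unchanged.

    resid: residual-stream activations, shape (seq_len, d_model).
    multiplier: positive to amplify the behavior, negative to suppress it.
    """
    steered = resid.copy()
    steered[prompt_len:] += multiplier * steering_vector
    return steered
```

In a real setup the addition would happen inside the model's forward pass (e.g. via a hook on a chosen layer's residual stream) rather than on a detached array; the arithmetic, however, is exactly this.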

Nina Panickssery, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, Alexander Matt Turner · 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | - | 842 |
| Mathematical Reasoning | GSM8K (test) | Accuracy 90.06 | 797 |
| Language Understanding | MMLU | Accuracy 65.7 | 756 |
| Language Modeling | WikiText | PPL 11.12 | 479 |
| Code Generation | HumanEval (test) | Pass@1 69.51 | 444 |
| Mathematical Reasoning | MATH500 (test) | Accuracy 52.56 | 381 |
| Code Generation | MBPP (test) | Pass@1 55.2 | 276 |
| Question Answering | BoolQ | Accuracy 74.98 | 240 |
| Multitask Language Understanding | MMLU | Accuracy 73.05 | 206 |
| Language Understanding | MMLU (test) | MMLU Average Accuracy 71.88 | 136 |

(Showing 10 of 58 rows)
