
Programming Refusal with Conditional Activation Steering

About

LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-specific assistants. In this paper, we propose Conditional Activation Steering (CAST), which analyzes LLM activation patterns during inference to selectively apply or withhold activation steering based on the input context. Our method is based on the observation that different categories of prompts activate distinct patterns in the model's hidden states. Using CAST, one can systematically control LLM behavior with rules like "if input is about hate speech or adult content, then refuse" or "if input is not about legal advice, then refuse." This allows for selective modification of responses to specific content while maintaining normal responses to other content, all without requiring weight optimization. We release an open-source implementation of our framework at github.com/IBM/activation-steering.

Bruce W. Lee, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Erik Miehling, Pierre Dognin, Manish Nagireddy, Amit Dhurandhar• 2024
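The abstract's core idea, applying a steering vector only when the hidden state matches a condition, can be sketched in a few lines. This is a minimal illustration, not the IBM library's actual API; the function name, the cosine-similarity gate, and the fixed threshold are all assumptions for exposition.

```python
import numpy as np

def conditional_steer(hidden, condition_vec, behavior_vec, threshold=0.5):
    """Illustrative sketch of conditional activation steering (hypothetical API).

    Adds the behavior (e.g. refusal) vector to the hidden state only when the
    hidden state aligns with the condition direction strongly enough.
    """
    # Gate: cosine similarity between the hidden state and the condition vector.
    sim = hidden @ condition_vec / (
        np.linalg.norm(hidden) * np.linalg.norm(condition_vec)
    )
    if sim >= threshold:
        # Condition matched: apply steering to induce the target behavior.
        return hidden + behavior_vec
    # Condition not matched: leave the activation (and thus the response) untouched.
    return hidden
```

The gate is what distinguishes this from unconditional steering: prompts whose activations do not project onto the condition direction pass through unchanged, so normal responses are preserved.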

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 71.5 | 1891 |
| Language Modeling | WikiText | PPL | 11.12 | 732 |
| Multitask Language Understanding | MMLU | Accuracy | 74.27 | 413 |
| Massive Multitask Language Understanding | MMLU | Accuracy | 61.9 | 117 |
| General Capability | MMLU | MMLU Accuracy | 74 | 73 |
| Safety Evaluation | MM-Safety | ASR | 18.81 | 57 |
| Safety Refusal | AdvBench | Refusal Rate | 95.7 | 46 |
| Safety Performance | JBB | Refusal Score (CR) | 53 | 35 |
| False Refusal Evaluation | ORB-H | CR | 96.2 | 35 |
| Toxicity Mitigation | ToxTET | ToxTET Rate | 14.07 | 33 |
Showing 10 of 23 benchmark rows.
