
From Refusal Tokens to Refusal Control: Discovering and Steering Category-Specific Refusal Directions

About

Language models are commonly fine-tuned for safety alignment to refuse harmful prompts. One approach fine-tunes them to generate categorical refusal tokens that distinguish different refusal types before responding. In this work, we leverage a version of Llama 3 8B fine-tuned with these categorical refusal tokens to enable inference-time control over fine-grained refusal behavior, improving both safety and reliability. We show that refusal token fine-tuning induces separable, category-aligned directions in the residual stream, which we extract and use to construct categorical steering vectors with a lightweight probe that determines whether to steer toward or away from refusal during inference. In addition, we introduce a learned low-rank combination that mixes these category directions in a whitened, orthonormal steering basis, resulting in a single controllable intervention under activation-space anisotropy, and show that this intervention is transferable across same-architecture model variants without additional training. Across benchmarks, both categorical steering vectors and the low-rank combination consistently reduce over-refusals on benign prompts while increasing refusal rates on harmful prompts, highlighting their utility for multi-category refusal control.
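The steering recipe in the abstract (extract a category-aligned direction from residual-stream activations, then use a lightweight probe to decide whether to push toward or away from refusal) can be illustrated with a minimal sketch. This is not the paper's implementation: it uses synthetic activations, difference-in-means direction extraction, and a simple projection-threshold probe, all of which are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy residual-stream activations (hidden size 16) standing in for hidden
# states collected at one layer of a fine-tuned model. In practice these
# would come from forward passes over labeled harmful/benign prompts.
d = 16
harmful_acts = rng.normal(loc=1.0, size=(64, d))   # prompts that should be refused
benign_acts = rng.normal(loc=-1.0, size=(64, d))   # prompts that should be answered

# Difference-in-means direction for one refusal category (one plausible way
# to extract a category-aligned direction; the paper's exact recipe may differ).
direction = harmful_acts.mean(axis=0) - benign_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# Lightweight probe: project an activation onto the direction and threshold
# at the midpoint between the two class means.
threshold = 0.5 * ((harmful_acts @ direction).mean() + (benign_acts @ direction).mean())

def steer(h, alpha=4.0):
    """Steer toward refusal if the probe flags the prompt as harmful,
    away from refusal otherwise. alpha is an illustrative strength."""
    sign = 1.0 if h @ direction >= threshold else -1.0
    return h + sign * alpha * direction
```

A usage note: at inference time, `steer` would be applied to the residual stream at the chosen layer before the forward pass continues, increasing refusal signal on probe-flagged harmful prompts and suppressing it on benign ones.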

Rishab Alagharu, Ishneet Sukhvinder Singh, Shaibi Shamsudeen, Zhen Wu, Ashwinee Panda • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Over-refusal evaluation | WildJailbreak (Benign) | Refusal Rate | 1.43 | 49 |
| Safety refusal | AdvBench | Refusal Rate | 99.42 | 46 |
| Over-refusal evaluation | OR-Bench-H | Refusal Rate | 77.1 | 21 |
| Over-refusal evaluation | CoCoNot Contrast | Over-refusal Rate | 1.58 | 7 |
| Over-refusal evaluation | WildGuard Unharmful | Over-refusal Rate | 1.06 | 7 |
| Over-refusal evaluation | XSTest Safe | Over-refusal Rate | 3.6 | 7 |
| Refusal evaluation | CoCoNot Orig | Refusal Rate | 96.1 | 7 |
| Refusal evaluation | WildGuard Harmful | Refusal Rate | 84.35 | 7 |
| Refusal evaluation | WildJailbreak Adversarial Harmful | Refusal Rate | 89.45 | 7 |
| Refusal evaluation | OR-Bench Toxic | Refusal Rate | 94.66 | 7 |

Showing 10 of 14 rows.
