
Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation

About

Training a language model to be both helpful and harmless requires careful calibration of refusal behaviours: models should refuse to follow malicious instructions or give harmful advice (e.g. "how do I kill someone?"), but they should not refuse safe requests, even if they superficially resemble unsafe ones (e.g. "how do I kill a Python process?"). Avoiding such false refusal, as prior work has shown, is challenging even for highly capable language models. In this paper, we propose a simple and surgical method for mitigating false refusal in language models via single vector ablation. For a given model, we extract a false refusal vector and show that ablating this vector reduces the false refusal rate while preserving the model's safety and general capabilities. We also show that our approach can be used for fine-grained calibration of model safety. Our approach is training-free and model-agnostic, making it useful for mitigating the problem of false refusal in current and future language models.
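The abstract does not spell out the extraction or ablation procedure, so the following is only a minimal PyTorch sketch of the general single-vector-ablation idea it describes. The difference-of-means extraction, the helper names (`extract_direction`, `ablate_direction`, `acts_pseudo_harmful`, `acts_harmless`), and the choice of weight matrices are assumptions drawn from common practice in this line of work, not the paper's exact recipe; in particular, it omits how the candidate layer and token position are chosen and how the false refusal direction is separated from the genuine refusal direction.

```python
import torch

def extract_direction(acts_pseudo_harmful: torch.Tensor,
                      acts_harmless: torch.Tensor) -> torch.Tensor:
    """Candidate 'false refusal' direction via difference of means
    (assumed extraction recipe): mean residual-stream activation on
    pseudo-harmful prompts minus the mean on harmless prompts.

    Both inputs have shape (n_prompts, d_model), taken from a single
    layer and token position.
    """
    v = acts_pseudo_harmful.mean(dim=0) - acts_harmless.mean(dim=0)
    return v / v.norm()  # return a unit vector

def ablate_direction(weight: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Project the direction v out of a weight matrix that writes into
    the residual stream: W <- (I - v v^T) W.

    weight: (d_model, d_in); v: unit vector of shape (d_model,).
    After this edit, the layer can no longer write any component
    along v into the residual stream.
    """
    return weight - torch.outer(v, v) @ weight
```

Applying the projection to every matrix that writes into the residual stream (embedding, attention output, and MLP output projections) would make the edit training-free and permanent: the model simply cannot represent that one direction anymore, which is what makes a single-vector intervention surgical. Again, this mirrors the generic directional-ablation recipe; the paper's specifics may differ.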

Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | ARC Easy | Accuracy | 80 | 597 |
| Question Answering | PIQA | Accuracy | 81 | 374 |
| Multiple-choice Question Answering | MMLU | Accuracy | 71 | 185 |
| Question Answering | ARC Challenge | Normalized Accuracy | 56 | 86 |
| Safety Evaluation | Toxigen | Safety | 98 | 77 |
| General Capability | MMLU | MMLU Accuracy | 74.7 | 73 |
| Language Understanding | MMLU | MMLU Score | 62 | 70 |
| Refusal Rate Evaluation | OK (test) | -- | -- | 56 |
| False Refusal Evaluation | ORB-H | CR | 95.8 | 35 |
| Safety Performance | JBB | Refusal Score (CR) | 51 | 35 |

Showing 10 of 30 rows.
