
SCANS: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering

About

Safety alignment is indispensable for Large Language Models (LLMs) to defend against threats from malicious instructions. However, recent research reveals that safety-aligned LLMs are prone to rejecting benign queries due to the exaggerated safety issue, which limits their helpfulness. In this paper, we propose a Safety-Conscious Activation Steering (SCANS) method to mitigate the exaggerated safety concerns in aligned LLMs. First, SCANS extracts refusal steering vectors within the activation space and uses vocabulary projection to anchor specific safety-critical layers that influence model refusal behavior. Second, by tracking the hidden state transition, SCANS identifies the steering direction and steers the model behavior accordingly, achieving a balance between exaggerated safety and adequate safety. Experiments show that SCANS achieves new state-of-the-art performance on the XSTest and OKTest benchmarks, without impairing the model's defense capability against harmful queries and while leaving overall model capability almost unchanged.
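The core idea above (extract a refusal direction from activations, then steer hidden states along or against it depending on the query) can be illustrated with a minimal sketch. This is not the authors' implementation: the difference-of-means extraction, the toy activations, and the sign-of-projection rule for choosing the steering direction are all simplifying assumptions for illustration.

```python
import numpy as np

def refusal_steering_vector(harmful_hidden, benign_hidden):
    """Difference-of-means direction between hidden states of harmful
    and benign prompts at one layer (a common way to extract a refusal
    direction; the exact extraction in SCANS may differ)."""
    v = harmful_hidden.mean(axis=0) - benign_hidden.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden, v, alpha):
    """Shift a hidden state along the refusal direction. alpha > 0
    strengthens refusal; alpha < 0 weakens it, the case relevant to
    over-cautious (exaggerated-safety) inputs."""
    return hidden + alpha * v

# Toy random activations standing in for real layer outputs.
rng = np.random.default_rng(0)
harmful = rng.normal(1.0, 0.1, size=(8, 16))   # hypothetical harmful-prompt states
benign = rng.normal(-1.0, 0.1, size=(8, 16))   # hypothetical benign-prompt states
v = refusal_steering_vector(harmful, benign)

h = benign[0]
# Simplified proxy for the steering-direction decision: if the query's
# hidden state already projects toward refusal, steer it away (alpha < 0),
# otherwise steer toward refusal (alpha > 0).
alpha = -2.0 if h @ v > 0 else 2.0
h_steered = steer(h, v, alpha)
```

In practice the steering vector is computed per layer and applied only at the safety-critical layers the vocabulary projection identifies; this sketch collapses everything to a single layer to show the geometry.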

Zouying Cao, Yifei Yang, Hai Zhao · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Safety Risk Evaluation | S-Eval (Risk) | ASR | 0.00e+0 | 72 |
| Over-refusal Evaluation | ORFuzzSet | ORR | 28 | 72 |
| Safety-Utility Trade-off Evaluation | S-Eval, ORFuzzSet, and NQ Aggregated | F1 Score | 77.42 | 72 |
| Jailbreak Attack Evaluation | S-Eval (Attack) | Attack Success Rate (ASR) | 32 | 72 |
| Over-refusal Evaluation | NQ (Natural Questions) | ORR | 2 | 72 |
| Jailbreak Defense Evaluation | ALL-4 | Strong-Reject Score (SR) | 2.643 | 21 |
| Jailbreak Defense Evaluation | SB | Strong-Reject Score (SR) | 3.008 | 21 |
| Jailbreak Defense Evaluation | WGT | Strong-Reject Score (SR) | 2.519 | 21 |
| Jailbreak Defense Evaluation | L3J | Strong-Reject Score (SR) | 2.601 | 21 |
| Jailbreak Defense Evaluation | ADVB | Strong-Reject Score (SR) | 2.588 | 21 |

Showing 10 of 12 rows.
