
LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

About

Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -- reducing jailbreak increases over-refusal and vice versa. We identify the root cause: LLMs encode the decision to answer (answer vector $v_a$) and the judgment of input safety (benign vector $v_b$) as nearly orthogonal directions, treating them as independent processes. We propose LLM-VA, which aligns $v_a$ with $v_b$ through closed-form weight updates, making the model's willingness to answer causally dependent on its safety assessment -- without fine-tuning or architectural changes. Our method identifies vectors at each layer using SVMs, selects safety-relevant layers, and iteratively aligns vectors via minimum-norm weight modifications. Experiments on 12 LLMs demonstrate that LLM-VA achieves 11.45% higher F1 than the best baseline while preserving 95.92% utility, and automatically adapts to each model's safety bias without manual tuning. Code and models are available at https://hotbento.github.io/LLM-VA-Web/.
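The abstract's "minimum-norm weight modifications" admit a standard closed form: the smallest (Frobenius-norm) rank-one update to a matrix `W` that forces `W x = y` for a chosen input/output pair. Below is a minimal numpy sketch of that idea, where `v_a` and `v_b` stand in for the paper's answer and benign vectors and `W` for a layer's weights; all names and the specific update target are illustrative assumptions, not the paper's actual extraction or alignment procedure (which identifies the vectors per layer with SVMs and iterates over selected layers).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden dimension

# Illustrative unit vectors standing in for the paper's per-layer
# answer direction v_a and benign direction v_b (the paper extracts
# these with linear SVMs on hidden states; here they are random).
v_a = rng.standard_normal(d)
v_a /= np.linalg.norm(v_a)
v_b = rng.standard_normal(d)
v_b /= np.linalg.norm(v_b)

# An illustrative layer weight matrix.
W = rng.standard_normal((d, d))

# Minimum-Frobenius-norm update dW such that (W + dW) @ v_b == v_a,
# i.e. the layer now emits the answer direction exactly when it reads
# the benign direction. Closed form: dW = (y - W x) x^T / (x^T x).
residual = v_a - W @ v_b
dW = np.outer(residual, v_b) / (v_b @ v_b)
W_new = W + dW

# The update is rank one, so it perturbs the layer as little as
# possible while enforcing the alignment constraint.
print(np.allclose(W_new @ v_b, v_a))          # constraint satisfied
print(np.linalg.matrix_rank(dW))              # 1
```

Repeating such an update across the safety-relevant layers, with freshly re-estimated vectors each round, is one plausible reading of the "iteratively aligns vectors" step; the actual targets and layer selection are specified in the paper.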

Haonan Zhang, Dongxia Wang, Yi Liu, Kexin Chen, Wenhai Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Over-refusal evaluation | ORFuzzSet | ORR | 16 | 72 |
| Safety-utility trade-off evaluation | S-Eval, ORFuzzSet, and NQ (aggregated) | F1 score | 86.81 | 72 |
| Over-refusal evaluation | NQ (Natural Questions) | ORR | 0.00e+0 | 72 |
| Jailbreak attack evaluation | S-Eval (Attack) | Attack Success Rate (ASR) | 70 | 72 |
| Safety risk evaluation | S-Eval (Risk) | ASR | 6 | 72 |
