
Mitigating the Safety-Utility Trade-off in LLM Alignment via Adaptive Safe Context Learning

About

While reasoning models have achieved remarkable success on complex reasoning tasks, their increasing power necessitates stringent safety measures. The core challenge of safety alignment lies in the inherent trade-off between safety and utility. Prevailing alignment strategies typically construct CoT training data with explicit safety rules via context distillation, which inadvertently limits reasoning capability by creating a rigid association between rule memorization and refusal. To mitigate this safety-utility trade-off, we propose the Adaptive Safe Context Learning (ASCL) framework, which improves reasoning by supplying the proper safety context adaptively. ASCL formulates safety alignment as a multi-turn tool-use process, empowering the model to autonomously decide when to consult safety rules and how to continue the ongoing reasoning. Furthermore, to counteract the imbalanced preference for rule consultation during RL, we introduce Inverse Frequency Policy Optimization (IFPO), which rebalances advantage estimates. By decoupling rule retrieval from subsequent reasoning, our method achieves higher overall performance than baselines.
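The abstract does not spell out how IFPO rebalances advantage estimates, so the following is only a minimal sketch under stated assumptions: it assumes a GRPO-style group-mean baseline and assumes IFPO scales each rollout's advantage by the inverse frequency of its consult/no-consult decision within the group, so the rarer behavior is not drowned out during RL. The function name `ifpo_advantages` and all arguments are hypothetical, not from the paper.

```python
from collections import Counter

def ifpo_advantages(rewards, consulted):
    """Hypothetical sketch of inverse-frequency advantage rebalancing.

    rewards:   per-rollout scalar rewards for one prompt's rollout group
    consulted: per-rollout bool, True if the rollout consulted safety rules

    Advantages use a group-mean baseline (GRPO-style) and are then
    scaled by the inverse frequency of each rollout's decision, so a
    dominant preference for rule consultation is counteracted.
    """
    n = len(rewards)
    baseline = sum(rewards) / n
    counts = Counter(consulted)  # how often each decision appears in the group
    return [
        (r - baseline) * (n / counts[c])  # inverse-frequency weight
        for r, c in zip(rewards, consulted)
    ]

# Example: 3 of 4 rollouts consult the rules, so the lone
# no-consult rollout's advantage gets the largest upweighting.
adv = ifpo_advantages([1.0, 0.0, 1.0, 1.0], [True, True, True, False])
```

In this sketch, the no-consult rollout receives a 4x weight while each consulting rollout receives 4/3x, pushing the policy away from consulting rules by default on every prompt.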

Yanbo Wang, Minzheng Wang, Jian Liang, Lu Wang, Yongcan Yu, Ran He • 2026

Related benchmarks

Task | Dataset | Result | Rank
Science Question Answering | ARC Challenge | - | 234
Safety | Safety Evaluation Suite (Salad-Bench, WildJailbreak, JailbreakBench, WildChat, WildGuard) | Safety Rate (S.R.): 100 | 24
Over-refusal | Over-refusal Evaluation Suite (XSTest, WildJailbreak, WildGuard, OKTest, OR-Bench) | XSTest Refusal Rate (%): 7.2 | 24
Science Question Answering | GPQA Diamond | Avg@1 Score: 58.59 | 19
General Reasoning | MATH-500, GPQA-D, MMLU-P, GSM8K, ARC-C Aggregate | Average Score: 82.95 | 18
Multi-task Knowledge and Reasoning | MMLU-Pro | Average Score@1: 70.79 | 18
Mathematical Word Problem Solving | GSM8K | Pass@8: 98.27 | 18
