Discern Truth from Falsehood: Reducing Over-Refusal via Contrastive Refinement
About
Large language models (LLMs) aligned for safety often suffer from over-refusal: the tendency to reject seemingly toxic yet benign prompts by misclassifying them as harmful. This behavior undermines a model's helpfulness and restricts its usability in sensitive or nuanced contexts. While prior work has proposed mitigation strategies such as data augmentation and activation steering, these approaches often face a trade-off: reducing over-refusal typically degrades the model's ability to reject genuinely harmful content. We argue that this issue arises from the ambiguous influence of toxic and seemingly toxic prompts on the model's learning dynamics. To address it, we introduce a preceding alignment stage, DCR: Discernment via Contrastive Refinement. Both theoretically and empirically, we demonstrate that contrastive refinement improves an LLM's capacity to distinguish truly toxic prompts from superficially toxic ones. Evaluation across diverse benchmarks shows that our method effectively reduces over-refusal while preserving the safety benefits of alignment. Importantly, it achieves this with minimal degradation of general capabilities, offering a more principled and robust direction for safety alignment.
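The abstract does not specify the contrastive objective, so the following is only a minimal, hypothetical sketch of the general idea: a triplet-style contrastive loss over prompt embeddings that pulls truly toxic prompts together while pushing seemingly toxic but benign prompts away, so the two classes become separable before the main alignment stage. All names and the toy embeddings here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def contrastive_loss(anchor, positive, negative, margin=1.0):
    """Illustrative triplet-style contrastive loss (not the paper's exact
    objective): pull the anchor toward the positive (same class) and push
    it away from the negative (opposite class) by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)   # anchor <-> same-class distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor <-> cross-class distance
    return max(0.0, d_pos - d_neg + margin)

# Toy prompt embeddings (assumed, 2-D for clarity): a truly toxic anchor,
# another toxic prompt (positive), and a seemingly toxic but benign
# prompt (negative).
toxic_a = np.array([1.0, 0.0])
toxic_b = np.array([0.9, 0.1])
benign  = np.array([0.0, 1.0])

# Well-separated classes incur no loss; confusing the benign prompt
# with the toxic cluster would be penalized.
loss = contrastive_loss(toxic_a, toxic_b, benign)
```

In practice the embeddings would come from the LLM's own representations, and minimizing such a loss sharpens the boundary between genuinely toxic and superficially toxic prompts before safety fine-tuning.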
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | ARC Easy | Accuracy | 83 | 597 |
| Question Answering | PIQA | Accuracy | 79 | 374 |
| Multiple-choice Question Answering | MMLU | Accuracy | 70 | 185 |
| Question Answering | ARC Challenge | Normalized Accuracy | 59 | 86 |
| Refusal Evaluation | XSTest Seemingly Toxic Subsets | XS | 98 | 15 |
| Response Generation Quality | General Response Quality Set | Quality Score | 51.8 | 15 |
| Safety Evaluation | XSTest Toxic | Safety | 94 | 15 |
| Question Answering | OpenBookQA | OpQA Score | 44 | 15 |
| Over-refusal Compliance | XS (test) | Compliance Rate (Keyword Filter) | 98 | 5 |
| Over-refusal Compliance | CoCo Seemingly Toxic | Compliance Rate (Keyword Filter) | 98 | 5 |