Conflicts Make Large Reasoning Models Vulnerable to Attacks
About
Large Reasoning Models (LRMs) have achieved remarkable performance across diverse domains, yet their decision-making under conflicting objectives remains insufficiently understood. This work investigates how LRMs respond to harmful queries when confronted with two categories of conflicts: internal conflicts, which pit alignment values against each other, and dilemmas, which impose mutually contradictory choices, including sacrificial, duress, agent-centered, and social forms. Using over 1,300 prompts across five benchmarks, we evaluate three representative LRMs - Llama-3.1-Nemotron-8B, QwQ-32B, and DeepSeek R1 - and find that conflicts significantly increase attack success rates, even under single-round, non-narrative queries without sophisticated auto-attack techniques. Through layerwise and neuron-level analyses, we further show that safety-related and functional representations shift and overlap under conflict, interfering with safety-aligned behavior. This study highlights the need for deeper alignment strategies to ensure the robustness and trustworthiness of next-generation reasoning models. Our code is available at https://github.com/DataArcTech/ConflictHarm. Warning: This paper contains inappropriate, offensive, and harmful content.
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Safety Evaluation | AdvBench | -- | 117 |
| Safety Evaluation | StrongREJECT | ASR 6 | 65 |
| Safety Evaluation | HarmBench | ASR 23.5 | 42 |
| Jailbreak Attack Evaluation | Five Safety Benchmarks (AdvBench, HarmBench, HarmfulQ, JBBench, StrongREJECT) | ASR 7.69 | 6 |
| Safety Evaluation | HarmfulQ | ASR 1.5 | 6 |
| Safety Evaluation | JBBench | ASR 13 | 6 |
| Jailbreak | HarmfulQ | ASR 18 | 3 |
| Safety Evaluation | Five Safety Benchmarks (direct_q) | -- | 3 |
| Safety Evaluation | Five Safety Benchmarks (inner) | -- | 3 |
| Safety Evaluation | Five Safety Benchmarks (dilemma) | -- | 3 |