
A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos

About

Large Reasoning Models (LRMs) have advanced significantly beyond traditional Large Language Models (LLMs) thanks to their exceptional logical reasoning capabilities, yet these improvements introduce heightened safety risks. When subjected to jailbreak attacks, their ability to generate more targeted and organized content can lead to greater harm. Although some studies claim that reasoning makes LRMs safer against existing LLM attacks, they overlook inherent flaws in the reasoning process itself. To address this gap, we propose the first jailbreak attack targeting LRMs, exploiting vulnerabilities that stem from their advanced reasoning capabilities. Specifically, we introduce a Chaos Machine, a novel component that transforms attack prompts with diverse one-to-one mappings. The chaos mappings iteratively generated by the machine are embedded into the reasoning chain, strengthening its variability and complexity and enabling a more robust attack. On this basis, we construct the Mousetrap framework, which projects attacks into nonlinear-like low-sample spaces with enhanced mismatched generalization. Moreover, faced with a growing number of competing objectives, LRMs gradually maintain the inertia of unpredictable iterative reasoning and fall into our trap. Success rates of Mousetrap attacking o1-mini, Claude-Sonnet, and Gemini-Thinking reach 96%, 86%, and 98%, respectively, on our toxic dataset Trotter. On benchmarks such as AdvBench, StrongREJECT, and HarmBench, attacking Claude-Sonnet, well known for its safety, Mousetrap astonishingly achieves success rates of 87.5%, 86.58%, and 93.13%, respectively. Attention: this paper contains inappropriate, offensive, and harmful content.
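The core idea of the Chaos Machine, as the abstract describes it, is a pool of one-to-one (invertible) text mappings composed iteratively into a chain. A minimal sketch of that idea, using benign placeholder mappings (a Caesar shift and a string reversal) that are illustrative assumptions and not the authors' actual implementation:

```python
# Toy sketch of an iterative chain of one-to-one string mappings.
# The mapping pool and composition scheme here are assumptions for
# illustration only, not the paper's actual Chaos Machine.

def caesar(s, k=3):
    # Shift lowercase letters by k; a simple invertible mapping.
    return "".join(
        chr((ord(c) - 97 + k) % 26 + 97) if c.islower() else c
        for c in s
    )

def uncaesar(s, k=3):
    return caesar(s, -k)

def reverse(s):
    return s[::-1]

# Each entry pairs a mapping with its inverse.
MAPPINGS = {
    "caesar": (caesar, uncaesar),
    "reverse": (reverse, reverse),  # reversal is its own inverse
}

def encode(text, chain):
    """Apply a sequence of one-to-one mappings in order."""
    for name in chain:
        text = MAPPINGS[name][0](text)
    return text

def decode(text, chain):
    """Invert the chain: apply inverses in reverse order."""
    for name in reversed(chain):
        text = MAPPINGS[name][1](text)
    return text

chain = ["caesar", "reverse", "caesar"]
scrambled = encode("hello world", chain)
assert decode(scrambled, chain) == "hello world"
```

Because every mapping is a bijection, the composed chain is itself a bijection, so the original text is always recoverable by walking the chain backwards.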

Yang Yao, Xuan Tong, Ruofan Wang, Yixu Wang, Lujundong Li, Liang Liu, Yan Teng, Yingchun Wang • 2025

Related benchmarks

Task              Dataset            Result                   Rank
Jailbreak Attack  HarmBench          --                       376
Jailbreak Attack  StrongREJECT       --                       88
Jailbreak Attack  MaliciousInstruct  --                       35
Jailbreak Attack  RedTeam 2K         --                       16
Jailbreak Attack  Trotter Str        Success Count @ k=1: 35  12
Jailbreak Attack  FigStep            --                       12
Jailbreak Attack  JailBenchSeed en   Success Rate @1: 48.15   1
Jailbreak Attack  AdvBench           Success Rate @1: 13.27   1
Jailbreak Attack  HADES              Success Rate @1: 24      1
Jailbreak Attack  MMSafety ILL       Success Rate @1: 5.15    1

(Showing 10 of 14 rows)
