
Unsafer in Many Turns: Benchmarking and Defending Multi-Turn Safety Risks in Tool-Using Agents

About

LLM-based agents are becoming increasingly capable, yet their safety lags behind, creating a gap between what agents can do and what they should do. This gap widens as agents engage in multi-turn interactions and employ diverse tools, introducing new risks overlooked by existing benchmarks. To systematically scale safety testing into multi-turn, tool-realistic settings, we propose a principled taxonomy that transforms single-turn harmful tasks into multi-turn attack sequences. Using this taxonomy, we construct MT-AgentRisk (Multi-Turn Agent Risk Benchmark), the first benchmark to evaluate the safety of multi-turn, tool-using agents. Our experiments reveal substantial safety degradation: the Attack Success Rate (ASR) increases by 16% on average across open- and closed-source models in multi-turn settings. To close this gap, we propose ToolShield, a training-free, tool-agnostic, self-exploration defense: when encountering a new tool, the agent autonomously generates test cases, executes them to observe downstream effects, and distills safety experiences for deployment. Experiments show that ToolShield reduces ASR by 30% on average in multi-turn interactions. Our code is available at https://github.com/CHATS-lab/ToolShield.

Xu Li, Simon Yu, Minzhou Pan, Yiyou Sun, Bo Li, Dawn Song, Xue Lin, Weiyan Shi • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Tool-Using Agent Safety | MT-AgentRisk 1.0 (test) | - | 12 |
| Multi-turn Safety Risk Assessment | Multi-turn Safety (100 randomly sampled tasks) | ASR 0.15 | 8 |
| Multi-turn Safety Risk Assessment | Filesystem | ASR 64 | 8 |
| Multi-turn Safety Risk Assessment | Playwright tasks | ASR 60 | 8 |
| Multi-turn Safety Risk Assessment | Terminal tasks | ASR 44 | 8 |
| Multi-turn Safety Risk Assessment | PostgreSQL | ASR 0.6 | 8 |
