
Chasing Moving Targets with Online Self-Play Reinforcement Learning for Safer Language Models

About

Conventional language model (LM) safety alignment relies on a reactive, disjoint procedure: attackers exploit a static model, followed by defensive fine-tuning to patch exposed vulnerabilities. This sequential approach creates a mismatch -- attackers overfit to obsolete defenses, while defenders perpetually lag behind emerging threats. To address this, we propose Self-RedTeam, an online self-play reinforcement learning algorithm where an attacker and defender agent co-evolve through continuous interaction. We cast safety alignment as a two-player zero-sum game, where a single model alternates between attacker and defender roles -- generating adversarial prompts and safeguarding against them -- while a reward LM adjudicates outcomes. This enables dynamic co-adaptation. Grounded in the game-theoretic framework of zero-sum games, we establish a theoretical safety guarantee which motivates the design of our method: if self-play converges to a Nash Equilibrium, the defender will reliably produce safe responses to any adversarial input. Empirically, Self-RedTeam uncovers more diverse attacks (+21.8% SBERT) compared to attackers trained against static defenders and achieves higher robustness on safety benchmarks (e.g., +65.5% on WildJailBreak) than defenders trained against static attackers. We further propose hidden Chain-of-Thought, allowing agents to plan privately, which boosts adversarial diversity and reduces over-refusals. Our results motivate a shift from reactive patching to proactive co-evolution in LM safety training, enabling scalable, autonomous, and robust self-improvement of LMs via multi-agent reinforcement learning (MARL).
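The training loop described above can be sketched at a high level: a single policy alternates between the attacker and defender roles, a reward LM judges each exchange, and the zero-sum structure assigns opposite rewards to the two roles. The sketch below uses hypothetical stand-in functions (`generate`, `judge`) in place of actual LM sampling and the paper's reward model; it illustrates only the episode structure, not the real method.

```python
import random

def generate(role, context):
    """Stub for LM generation; random choice stands in for sampling from the policy."""
    if role == "attacker":
        return random.choice(["benign-looking probe", "direct harmful request"])
    return random.choice(["safe refusal", "unsafe compliance"])

def judge(prompt, response):
    """Stub reward LM: +1 to the defender for a safe response, -1 otherwise."""
    return 1.0 if response == "safe refusal" else -1.0

def self_play_episode():
    """One zero-sum episode: the same policy plays both roles in turn."""
    prompt = generate("attacker", context=None)
    response = generate("defender", context=prompt)
    defender_reward = judge(prompt, response)
    attacker_reward = -defender_reward  # zero-sum: one role's gain is the other's loss
    return {"prompt": prompt, "response": response,
            "attacker_reward": attacker_reward,
            "defender_reward": defender_reward}

random.seed(0)
episodes = [self_play_episode() for _ in range(100)]
# Rewards always sum to zero, reflecting the two-player zero-sum game.
assert all(e["attacker_reward"] + e["defender_reward"] == 0 for e in episodes)
```

In the actual method both roles also emit a hidden Chain-of-Thought before their visible output, and the collected episodes are used as on-policy data for reinforcement learning updates.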

Mickel Liu, Liwei Jiang, Yancheng Liang, Simon Shaolei Du, Yejin Choi, Tim Althoff, Natasha Jaques • 2025

Related benchmarks

Task                      Dataset    Metric              Result  Rank
General Capability        MMLU       MMLU Accuracy       70.2    73
Safety Performance        JBB        Refusal Score (CR)  27      35
False Refusal Evaluation  ORB-H      CR                  54.2    35
Harmful Prompt Refusal    HarmBench  ASR                 20.7    7
Harmful Refusal           WG (test)  ASR                 13.8    7
Harmful Refusal           DAN        ASR                 54.2    7
Benign Compliance         XSTest     Comply Score        96.8    7
Harmful Refusal           WJB        ASR                 24      7
