TROLL: Trust Regions improve Reinforcement Learning for Large Language Models

About

Reinforcement Learning (RL) with PPO-like clip objectives has become the standard choice for reward-based fine-tuning of large language models (LLMs). Although recent work has explored improved estimators of advantages and normalization, the clipping mechanism itself has remained untouched. Originally introduced as a proxy for principled KL-based trust regions, clipping is a crude approximation that often causes unstable updates and suboptimal performance. We replace the clip objective with a novel discrete differentiable trust region projection, which provides principled token-level KL constraints. The projection operates on a sparse subset of the model's most important token logits to balance computational cost and projection effectiveness. Our approach, Trust Region Optimization for Large Language models (TROLL), serves as a direct replacement for PPO-like clipping during training and does not alter the model's inference behavior. Across mathematical reasoning and code generation tasks, model families, as well as advantage-estimation methods, TROLL consistently outperforms PPO-like clipping in terms of training speed, stability, and final success rates.
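The abstract describes replacing the PPO clip objective with a differentiable token-level KL trust-region projection applied to a sparse subset of important token logits. The paper's exact projection is not shown here; the following is a minimal illustrative sketch of the general idea, assuming a hypothetical scheme that selects the old policy's top-k tokens and bisects on a logit-mixing weight until the projected distribution satisfies KL(proj || old) <= delta. All function names and the interpolation strategy are illustrative assumptions, not TROLL's actual algorithm.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence KL(p || q) for strictly positive distributions.
    return float(np.sum(p * (np.log(p) - np.log(q))))

def project_topk_kl(new_logits, old_logits, k=4, delta=0.05, iters=50):
    """Illustrative sparse KL trust-region projection (NOT the paper's method).

    Only the top-k tokens of the old policy are adjusted: their logits are
    interpolated between the new and old values, and bisection finds the
    largest mixing weight whose distribution stays within the KL budget.
    """
    topk = np.argsort(old_logits)[-k:]  # hypothetical sparsity choice
    old_p = softmax(old_logits)

    def mixed(alpha):
        out = new_logits.copy()
        out[topk] = alpha * new_logits[topk] + (1 - alpha) * old_logits[topk]
        return softmax(out)

    # If the unprojected update already satisfies the constraint, keep it.
    if kl(mixed(1.0), old_p) <= delta:
        return mixed(1.0)
    lo, hi = 0.0, 1.0  # alpha = 0 reverts the top-k logits to the old policy
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kl(mixed(mid), old_p) <= delta:
            lo = mid
        else:
            hi = mid
    return mixed(lo)
```

In this sketch, an update that would overshoot the KL budget is pulled back toward the old policy only on the selected tokens, which mirrors the abstract's stated trade-off between computational cost and projection effectiveness.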

Philipp Becker, Niklas Freymuth, Serge Thilges, Fabian Otto, Gerhard Neumann • 2025

Related benchmarks

Task                         | Dataset          | Result (Success Rate) | Rank
Mathematical Problem Solving | MATH eval (test) | 59.1                  | 20
Mathematical Reasoning       | DAPO (train)     | 74.4                  | 20
Mathematical Reasoning       | DAPO (test)      | 72.8                  | 20
