# TL-GRPO: Turn-Level RL for Reasoning-Guided Iterative Optimization

## About
Large language models have demonstrated strong reasoning capabilities on complex tasks through tool integration, which is typically framed as a Markov Decision Process and optimized with trajectory-level RL algorithms such as GRPO. However, a common class of reasoning tasks, iterative optimization, presents distinct challenges: the agent interacts with the same underlying environment state across turns, and the value of a trajectory is determined by the best turn-level reward rather than the cumulative return. Existing GRPO-based methods cannot perform fine-grained, turn-level optimization in such settings, while black-box optimization methods discard prior knowledge and reasoning capabilities. To address this gap, we propose Turn-Level GRPO (TL-GRPO), a lightweight RL algorithm that performs turn-level group sampling for fine-grained optimization. We evaluate TL-GRPO on analog circuit sizing (ACS), a challenging scientific optimization task requiring multiple simulations and domain expertise. Results show that TL-GRPO outperforms standard GRPO and Bayesian optimization methods across various specifications. Furthermore, our 30B model trained with TL-GRPO achieves state-of-the-art performance on ACS tasks under the same simulation budget, demonstrating both strong generalization and practical utility.
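The two ideas above, GRPO-style group normalization applied per turn rather than per trajectory, and scoring a trajectory by its best turn-level reward instead of a cumulative return, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the normalization constant `eps` are our own, and the advantage formula assumes the standard GRPO group-relative form.

```python
import statistics

def turn_level_advantages(turn_rewards, eps=1e-8):
    """Group-relative advantages for one turn's sampled group (GRPO-style).

    Each of the G responses sampled at this turn is scored against the
    group mean and standard deviation, so credit assignment happens
    per turn rather than once per trajectory.
    """
    mean = statistics.fmean(turn_rewards)
    std = statistics.pstdev(turn_rewards)
    return [(r - mean) / (std + eps) for r in turn_rewards]

def trajectory_score(per_turn_rewards):
    """In iterative optimization the trajectory is worth its best turn,
    not the sum of its turns (illustrative helper)."""
    return max(per_turn_rewards)
```

For example, a turn whose sampled group scored `[0.2, 0.5, 0.8]` yields negative, zero, and positive advantages respectively, so only the above-average response in that group is reinforced at that turn.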
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Analog Circuit Sizing | ACS in-domain (8 trained tasks) | ACCIA Score: 0.94 | 12 |
| Analog Circuit Sizing | ACS out-of-domain (unseen tasks) | OPAMP 1 Error: 0.2 | 12 |