
Future Policy Approximation for Offline Reinforcement Learning Improves Mathematical Reasoning

About

Reinforcement Learning (RL) has emerged as the key driver for post-training complex reasoning in Large Language Models (LLMs), yet online RL introduces significant instability and computational overhead. Offline RL offers a compelling alternative by decoupling inference from training; however, offline algorithms for reasoning remain under-optimized compared to their online counterparts. A central challenge is gradient entanglement: in long-horizon reasoning trajectories, correct and incorrect solutions share substantial token overlap, causing gradient updates from incorrect trajectories to suppress tokens critical for correct ones. We propose Future Policy Approximation (FPA), a simple method that weights gradients against an estimate of the future policy rather than the current one, enabling proactive gradient reweighting. This future policy is estimated via logit-space extrapolation with negligible overhead. We provide theoretical intuition for FPA through the lens of Optimistic Mirror Descent and further ground it through its connection to DPO. Evaluating FPA across three models and seven mathematical benchmarks, we demonstrate consistent improvements over strong offline baselines including DPO, RPO, KTO, and vanilla offline RL. FPA stabilizes long-horizon training where vanilla objectives degrade and achieves comparable accuracy to online RLVR at a fraction of its GPU hours.
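The abstract states that the future policy is estimated "via logit-space extrapolation with negligible overhead." A minimal sketch of what such an estimate might look like, assuming a simple linear extrapolation from the previous and current logits (the function names, the extrapolation form, and the step parameter `alpha` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def approx_future_logits(logits_now, logits_prev, alpha=1.0):
    """Hypothetical logit-space extrapolation of the future policy:
    z_future ~= z_now + alpha * (z_now - z_prev).
    Continues the most recent logit-space update direction one step forward."""
    return logits_now + alpha * (logits_now - logits_prev)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Per-token reweighting against the extrapolated future policy
# rather than the current one (illustrative only):
logits_prev = np.array([0.0, 0.0])
logits_now = np.array([1.0, 0.0])
pi_future = softmax(approx_future_logits(logits_now, logits_prev))
pi_now = softmax(logits_now)
weights = pi_future / pi_now  # upweights tokens the policy is moving toward
```

The intuition matches the Optimistic Mirror Descent connection the authors mention: taking the gradient step as if the next iterate were already reached.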

Minjae Oh, Yunho Choi, Dongmin Choi, Yohan Jo • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | MATH 500 | Pass@1 Rate | 74.5 | 76 |
| Mathematical Reasoning | AMC | Pass@1 Accuracy | 42.8 | 61 |
| Mathematical Reasoning | AIME | Pass@1 | 12.7 | 44 |
| Mathematical Reasoning | MATH-P | Pass@1 | 48.4 | 24 |
