PubSwap: Public-Data Off-Policy Coordination for Federated RLVR
About
Reasoning post-training with reinforcement learning from verifiable rewards (RLVR) is typically studied in centralized settings, yet many realistic applications involve decentralized private data distributed across organizations. Federated training is a natural solution, but scaling RLVR in this regime is challenging: full-model synchronization is expensive, and performing many local steps can cause severe client drift under heterogeneous data. We propose a federated RLVR framework that combines LoRA-based local adaptation with public-data-based off-policy steps to improve both communication efficiency and cross-client coordination. In particular, a small shared public dataset is used to periodically exchange and reuse response-level training signals across organizations, providing a lightweight anchor toward a more globally aligned objective without exposing private data. Our method selectively replaces locally incorrect responses with globally correct ones during public-data steps, thereby keeping training closer to the local policy while still benefiting from cross-client coordination. Across mathematical and medical reasoning benchmarks and models, our method consistently improves over standard baselines. Our results highlight a simple and effective recipe for federated reasoning post-training: combining low-rank communication with limited public-data coordination.
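The selective replacement step described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, data layout, and binary reward convention are all assumptions. The idea is that during a public-data step, a client keeps its own response for each shared prompt if the verifier marks it correct, and otherwise substitutes a verified-correct response contributed by another client.

```python
# Hedged sketch of the public-data response-swap step (names and data
# shapes are illustrative assumptions, not the paper's actual code).
def swap_incorrect_responses(local_rollouts, global_pool):
    """Build a public-data training batch.

    local_rollouts: {prompt_id: (response_text, verifier_reward)} with
                    reward 1 if the local response is verified correct.
    global_pool:    {prompt_id: correct_response_text} shared across clients.
    """
    batch = []
    for prompt_id, (response, reward) in local_rollouts.items():
        if reward == 1:
            # Local response is already correct: keep it, staying on-policy.
            batch.append((prompt_id, response, 1))
        elif prompt_id in global_pool:
            # Local response failed: borrow a globally correct one (off-policy).
            batch.append((prompt_id, global_pool[prompt_id], 1))
        # Prompts with no correct response anywhere are dropped from the batch.
    return batch

local = {"q1": ("local ans A", 1),
         "q2": ("local ans B", 0),
         "q3": ("local ans C", 0)}
shared = {"q2": "global ans B*"}
print(swap_incorrect_responses(local, shared))
# → [('q1', 'local ans A', 1), ('q2', 'global ans B*', 1)]
```

Only response-level signals on the shared public prompts cross organization boundaries here, which matches the paper's claim that private data is never exposed.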
Related benchmarks
| Task | Dataset | Pass@1 | Rank |
|---|---|---|---|
| Mathematical Reasoning | DeepMath | 70.5 | 44 |
| Mathematical Reasoning | DeepMath 2025 (test) | 55.8 | 32 |
| Mathematical Reasoning | MATH 2021 (test) | 77 | 32 |
| Medical Reasoning | MedQA and MedMCQA mixture | 59.4 | 12 |
| Medical Reasoning | Medical Reasoning | 59.5 | 12 |