
PubSwap: Public-Data Off-Policy Coordination for Federated RLVR

About

Reasoning post-training with reinforcement learning from verifiable rewards (RLVR) is typically studied in centralized settings, yet many realistic applications involve decentralized private data distributed across organizations. Federated training is a natural solution, but scaling RLVR in this regime is challenging: full-model synchronization is expensive, and performing many local steps can cause severe client drift under heterogeneous data. We propose a federated RLVR framework that combines LoRA-based local adaptation with public-data-based off-policy steps to improve both communication efficiency and cross-client coordination. In particular, a small shared public dataset is used to periodically exchange and reuse response-level training signals across organizations, providing a lightweight anchor toward a more globally aligned objective without exposing private data. Our method selectively replaces locally incorrect responses with globally correct ones during public-data steps, thereby keeping training closer to the local policy while still benefiting from cross-client coordination. Across mathematical and medical reasoning benchmarks and models, our method consistently improves over standard baselines. Our results highlight a simple and effective recipe for federated reasoning post-training: combining low-rank communication with limited public-data coordination.
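The selective response replacement described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the tuple format, and the binary reward convention are all assumptions, standing in for whatever verifier and data structures the actual framework uses.

```python
# Hypothetical sketch of the public-data "response swap" step: for each public
# prompt, a client keeps its own response when it is verified correct, and
# otherwise substitutes a verified-correct response shared by another client.
# All names and the 0/1 reward convention are illustrative assumptions.

def swap_responses(local_responses, local_rewards, shared_responses, shared_rewards):
    """Return one (response, reward) training pair per public prompt.

    local_rewards / shared_rewards are verifiable 0/1 rewards (e.g. from
    answer checking); shared_* are responses exchanged across clients.
    """
    training_batch = []
    for loc, r_loc, glob, r_glob in zip(
        local_responses, local_rewards, shared_responses, shared_rewards
    ):
        if r_loc == 1:
            # Locally correct: keep the on-policy response, which stays
            # close to the local policy (low off-policy drift).
            training_batch.append((loc, 1))
        elif r_glob == 1:
            # Locally wrong but another client solved it: reuse the globally
            # correct response as an off-policy coordination signal.
            training_batch.append((glob, 1))
        else:
            # No correct response anywhere: keep the local zero-reward one.
            training_batch.append((loc, 0))
    return training_batch
```

Because substitution happens only when the local response is verifiably wrong, most of the batch remains on-policy, which is the mechanism the abstract credits for keeping training close to the local policy while still importing cross-client signal.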

Anupam Nayak, Baris Askin, Muhammed Ustaomeroglu, Carlee Joe-Wong, Gauri Joshi • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | DeepMath | Pass@1 | 70.5 | 44
Mathematical Reasoning | DeepMath 2025 (test) | Pass@1 | 55.8 | 32
Mathematical Reasoning | MATH 2021 (test) | Pass@1 | 77 | 32
Medical Reasoning | MedQA and MedMCQA mixture | Pass@1 | 59.4 | 12
Medical Reasoning | Medical Reasoning | Pass@1 | 59.5 | 12
