
Joint Continual Learning of Local Language Models and Cloud Offloading Decisions with Budget Constraints

About

Locally deployed Small Language Models (SLMs) must continually support diverse tasks under strict memory and computation constraints, making selective reliance on cloud Large Language Models (LLMs) unavoidable. Regulating cloud assistance during continual learning is challenging, as naive reward-based reinforcement learning often yields unstable offloading behavior and exacerbates catastrophic forgetting as task distributions shift. We propose DA-GRPO, a dual-advantage extension of Group Relative Policy Optimization that incorporates cloud-usage constraints directly into advantage computation, avoiding fixed reward shaping and external routing models. This design enables the local model to jointly learn task competence and collaboration behavior, allowing cloud requests to emerge naturally during post-training while respecting a prescribed assistance budget. Experiments on mathematical reasoning and code generation benchmarks show that DA-GRPO improves post-switch accuracy, substantially reduces forgetting, and maintains stable cloud usage compared to prior collaborative and routing-based approaches.
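The abstract states that DA-GRPO folds a cloud-usage budget directly into GRPO's group-relative advantage rather than shaping the reward. The paper's exact formulation is not given on this page, so the following is only an illustrative sketch of how such a dual advantage might be computed for one group of sampled responses; the function name `dual_advantage`, the penalty weight `lam`, and the signed budget-deviation term are all assumptions, not the authors' definitions.

```python
import numpy as np

def dual_advantage(rewards, cloud_flags, budget, lam=1.0, eps=1e-8):
    """Hypothetical dual advantage for a group of G sampled responses.

    rewards     : task reward for each response in the group
    cloud_flags : 1 if that response requested cloud assistance, else 0
    budget      : target fraction of cloud requests (the assistance budget)
    lam         : weight of the budget term (assumed hyperparameter)
    """
    r = np.asarray(rewards, dtype=float)
    c = np.asarray(cloud_flags, dtype=float)

    # Standard group-relative advantage as in GRPO:
    # normalize each reward against the group's mean and std.
    a_task = (r - r.mean()) / (r.std() + eps)

    # Assumed budget term: if the group's cloud-usage rate exceeds the
    # budget, responses that used the cloud receive a negative correction
    # (and vice versa when the group is under budget).
    overuse = c.mean() - budget
    a_budget = -lam * overuse * (c - c.mean())

    return a_task + a_budget
```

Under this sketch, when a group over-uses the cloud, responses that offloaded see their advantage reduced relative to the plain task advantage, so the policy is steered toward the prescribed budget without a fixed reward penalty or an external router.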

Evan Chen, Wenzhi Fang, Shiqiang Wang, Christopher Brinton • 2026

Related benchmarks

Task                     | Dataset        | During-task Accuracy | Rank
Code Generation          | TACO Verified  | 84.8                 | 29
Language Understanding   | MMLU           | 69.5                 | 29
Math Reasoning           | MATH lighteval | 84.5                 | 29
Mathematical Reasoning   | MATH 500       | 87.2                 | 29
