
Advancing General-Purpose Reasoning Models with Modular Gradient Surgery

About

Reinforcement learning (RL) has played a central role in recent advances in large reasoning models (LRMs), yielding strong gains in both verifiable and open-ended reasoning. However, training a single general-purpose LRM across diverse domains remains challenging due to pronounced domain heterogeneity. Through a systematic study of two widely used strategies, Sequential RL and Mixed RL, we find that both incur substantial cross-domain interference at the behavioral and gradient levels, resulting in limited overall gains. To address these challenges, we introduce **M**odular **G**radient **S**urgery (**MGS**), which resolves gradient conflicts at the module level within the transformer. When applied to Llama and Qwen models, MGS achieves average improvements of 4.3 points (16.6%) and 4.5 points (11.1%), respectively, over standard multi-task RL across three representative domains (math, general chat, and instruction following). Further analysis demonstrates that MGS remains effective under prolonged training. Overall, our study clarifies the sources of interference in multi-domain RL and presents an effective solution for training general-purpose LRMs.
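To make the core idea concrete, the sketch below shows one plausible form of module-level gradient surgery: for each transformer module, each domain's gradient is de-conflicted against the other domains' gradients via a PCGrad-style projection (dropping the component along a conflicting gradient), and the de-conflicted gradients are then summed. This is a hypothetical illustration; the paper's exact projection rule, conflict criterion, and module granularity may differ.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_out_conflict(g, other):
    """If g conflicts with `other` (negative dot product), remove the
    component of g along `other` (PCGrad-style projection)."""
    d = dot(g, other)
    if d >= 0:
        return list(g)  # no conflict: leave the gradient unchanged
    scale = d / (dot(other, other) + 1e-12)
    return [gi - scale * oi for gi, oi in zip(g, other)]

def module_gradient_surgery(grads_per_domain):
    """Hypothetical sketch of module-level gradient surgery.

    grads_per_domain: {domain: {module_name: gradient_vector}}
    For each module independently, project each domain's gradient to
    remove components that conflict with the other domains' gradients,
    then sum the de-conflicted gradients into one update per module.
    """
    domains = list(grads_per_domain)
    modules = grads_per_domain[domains[0]]
    merged = {}
    for m in modules:
        projected = []
        for d in domains:
            g = list(grads_per_domain[d][m])
            for other in domains:
                if other != d:
                    g = project_out_conflict(g, grads_per_domain[other][m])
            projected.append(g)
        merged[m] = [sum(vals) for vals in zip(*projected)]
    return merged
```

Because the projection runs per module, a conflict in (say) the attention block of one layer does not force any change to gradients in modules where the domains already agree, which is the distinction from applying surgery to one flattened global gradient.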

Min Cai, Yu Liang, Longzheng Wang, Yan Wang, Yueyang Zhang, Long Xia, Zhiyuan Sun, Xi Ye, Daiting Shi • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Generalization | Gen | Gen Score: 35.7 | 8 |
| Chat | Chat | Chat Score: 48.2 | 8 |
| Instruction Following | IF | IF Score: 33.7 | 8 |
| Math | MATH | Math Score: 58.8 | 8 |
| Multi-task model alignment and mixing | Math, Chat, IF, and General QA tasks, Llama-3.1-8B (test) | Math Accuracy: 36 | 3 |
