
Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners

About

Recent advances in language modeling and vision stem from training large models on diverse, multi-task data. This paradigm has had limited impact in value-based reinforcement learning (RL), where improvements are often driven by small models trained in a single-task context. This is because, in multi-task RL, sparse rewards and gradient conflicts make temporal-difference optimization brittle. Practical workflows for generalist policies therefore avoid online training, instead cloning expert trajectories or distilling collections of single-task policies into one agent. In this work, we show that the use of high-capacity value models trained via cross-entropy and conditioned on learnable task embeddings addresses the problem of task interference in online RL, allowing for robust and scalable multi-task training. We test our approach on 7 multi-task benchmarks with over 280 unique tasks, spanning high degree-of-freedom humanoid control and discrete vision-based RL. We find that, despite its simplicity, the proposed approach leads to state-of-the-art single- and multi-task performance, as well as sample-efficient transfer to new tasks.
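The abstract's core ingredient, training the value function via cross-entropy rather than regression, is typically realized by discretizing returns into bins and projecting the scalar TD target onto the two nearest bins (a "two-hot" encoding). The sketch below illustrates that general idea in plain Python; the bin range, helper names, and exact projection are assumptions for illustration, not the paper's implementation.

```python
import math

def two_hot(value, v_min, v_max, num_bins):
    """Project a scalar return target onto the two adjacent bins of a
    uniform grid over [v_min, v_max] (hypothetical helper illustrating
    the categorical value parameterization)."""
    value = max(v_min, min(v_max, value))          # clip to support
    pos = (value - v_min) / (v_max - v_min) * (num_bins - 1)
    lo = int(math.floor(pos))
    hi = min(lo + 1, num_bins - 1)
    probs = [0.0] * num_bins
    w_hi = pos - lo                                # interpolation weight
    probs[lo] += 1.0 - w_hi
    probs[hi] += w_hi
    return probs

def cross_entropy(target_probs, logits):
    """Cross-entropy between a two-hot target and predicted bin logits,
    i.e. the loss that replaces mean-squared TD regression."""
    m = max(logits)                                # log-sum-exp for stability
    z = sum(math.exp(l - m) for l in logits)
    log_probs = [l - m - math.log(z) for l in logits]
    return -sum(p * lp for p, lp in zip(target_probs, log_probs))
```

For example, a TD target of 0.0 on a 5-bin grid over [-1, 1] lands exactly on the middle bin, and a uniform predictor incurs a loss of log(5); in the multi-task setting described above, the logits would come from a value network that also consumes a learnable task embedding.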

Michal Nauman, Marek Cygan, Carmelo Sferrazza, Aviral Kumar, Pieter Abbeel • 2025

Related benchmarks

| Task                 | Dataset                 | Result               | Rank |
|----------------------|-------------------------|----------------------|------|
| h1hand-balance hard  | HumanoidBench Hard 1M   | Score 72.918         | 5    |
| h1hand-balance simple| HumanoidBench Hard 1M   | Score 101.3          | 5    |
| h1hand-sit hard      | HumanoidBench Hard 1M   | Score 805.5          | 5    |
| h1hand-sit simple    | HumanoidBench Hard 1M   | Score 926.8          | 5    |
| Humanoid Control     | HumanoidBench Medium 1M | Standing Score 841.2 | 5    |
| h1hand-crawl         | HumanoidBench Hard 1M   | Score 849.2          | 5    |
| h1hand-hurdle        | HumanoidBench Hard 1M   | Score 108.1          | 5    |
| h1hand-maze          | HumanoidBench Hard 1M   | Score 358.8          | 5    |
| h1hand-pole          | HumanoidBench Hard 1M   | Score 512.9          | 5    |
| h1hand-reach         | HumanoidBench Hard 1M   | Score 3.90e+3        | 5    |

Showing 10 of 15 rows.
