Task Tokens: A Flexible Approach to Adapting Behavior Foundation Models
About
Recent advancements in imitation learning have led to transformer-based behavior foundation models (BFMs) that enable multi-modal, human-like control for humanoid agents. While BFMs excel at zero-shot generation of robust behaviors, they often require meticulous prompt engineering for specific tasks, potentially yielding suboptimal results. We introduce "Task Tokens", a method to effectively tailor BFMs to specific tasks while preserving their flexibility. Our approach leverages the transformer architecture of BFMs to learn a new task-specific encoder through reinforcement learning, while keeping the original BFM frozen. This allows the incorporation of user-defined priors, balancing reward design and prompt engineering. By training a task encoder that maps observations to tokens, which serve as additional inputs to the BFM, we guide performance improvement while maintaining the model's diverse control characteristics. We demonstrate the efficacy of Task Tokens across a range of tasks, including out-of-distribution scenarios, and show their compatibility with other prompting modalities. Our results suggest that Task Tokens offer a promising approach for adapting BFMs to specific control tasks while retaining their generalization capabilities.
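The sketch below illustrates the core mechanism described above: a small task-specific encoder maps task observations to an extra token that is prepended to the frozen BFM's input sequence, and only the encoder receives gradients. This is a minimal illustration, not the authors' implementation; the `TaskEncoder` class, the use of a generic `nn.TransformerEncoder` as a stand-in for the BFM, and all dimensions and hyperparameters are assumptions for demonstration.

```python
# Minimal sketch (PyTorch) of the Task Tokens idea -- not the paper's code.
# Assumptions: a generic nn.TransformerEncoder stands in for the pretrained BFM;
# TaskEncoder, task_obs_dim, token_dim, and the surrogate loss are illustrative.
import torch
import torch.nn as nn


class TaskEncoder(nn.Module):
    """Maps task-specific observations to a single extra token for the BFM."""
    def __init__(self, task_obs_dim: int, token_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, token_dim),
        )

    def forward(self, task_obs: torch.Tensor) -> torch.Tensor:
        # (batch, task_obs_dim) -> (batch, 1, token_dim)
        return self.net(task_obs).unsqueeze(1)


token_dim, task_obs_dim, seq_len, batch = 64, 12, 16, 8

# Stand-in for the pretrained behavior foundation model; kept frozen.
bfm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True),
    num_layers=2,
)
for p in bfm.parameters():
    p.requires_grad_(False)

task_encoder = TaskEncoder(task_obs_dim, token_dim)  # only trainable module
optimizer = torch.optim.Adam(task_encoder.parameters(), lr=3e-4)

# Forward pass: prepend the task token to the BFM's usual input tokens.
obs_tokens = torch.randn(batch, seq_len, token_dim)   # placeholder BFM inputs
task_obs = torch.randn(batch, task_obs_dim)           # placeholder task observation
tokens = torch.cat([task_encoder(task_obs), obs_tokens], dim=1)
out = bfm(tokens)                                     # actions decoded downstream

# In practice the encoder is optimized with an RL objective on the task reward;
# a dummy surrogate loss stands in for that here.
loss = out.pow(2).mean()
loss.backward()          # gradients flow through the frozen BFM to the encoder only
optimizer.step()
```

Because the BFM's weights never change, the same frozen model can still be driven by its original prompting interface, and multiple task encoders can be trained independently for different downstream tasks.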
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Direction | Human Study | Human-likeness Win Rate | 99 | 6 |
| Humanoid Direction | Isaac Gym Direction | Success Rate | 99.26 | 6 |
| Humanoid Reach | Isaac Gym Reach | Success Rate | 94.88 | 6 |
| Humanoid Steering | Isaac Gym Steering | Success Rate | 0.8869 | 6 |
| Reach | Human Study | Human-likeness Win Rate | 89 | 6 |
| Steering | Human Study | Human-likeness Win Rate | 93 | 6 |
| Humanoid Long Jump | Isaac Gym Long Jump | Success Rate | 99.75 | 5 |
| Humanoid Strike | Isaac Gym Strike | Success Rate | 76.61 | 5 |
| Long Jump | Human Study | Human-likeness Win Rate | 96 | 4 |
| Strike | Human Study | Human-likeness Win Rate | 85 | 4 |