Compositional Multi-Object Reinforcement Learning with Linear Relation Networks
About
Although reinforcement learning has seen remarkable progress in recent years, solving robust, dexterous object-manipulation tasks in multi-object settings remains a challenge. In this paper, we focus on models that can learn manipulation tasks in fixed multi-object settings and extrapolate this skill zero-shot, without any drop in performance, when the number of objects changes. We consider the generic task of bringing a specific cube out of a set to a goal position. We find that previous approaches, which primarily leverage attention- and graph neural network-based architectures, scale as $K^2$ in the number of objects $K$ and fail to generalize their skills when the number of input objects changes. We propose an alternative plug-and-play module based on relational inductive biases to overcome these limitations. Besides outperforming these approaches in their training environment, we show that our module, which scales linearly in $K$, allows agents to extrapolate and generalize zero-shot to any new number of objects.
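The paper's exact module is not specified here; as a rough illustration of the scaling difference the abstract describes, the NumPy sketch below contrasts a pairwise relation step, whose cost grows as $K^2$, with a pooled, linear-in-$K$ alternative. The function names, the `tanh` nonlinearity, and the mean-pooling aggregation are all illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def pairwise_relations(objs):
    """Attention/GNN-style step: relate every pair of objects.

    For K objects this loops over K^2 pairs, so compute grows
    quadratically with the number of objects.
    """
    K, d = objs.shape
    rel = np.zeros((K, d))
    for i in range(K):
        for j in range(K):
            rel[i] += np.tanh(objs[i] + objs[j])  # one term per pair
    return rel

def linear_relations(objs):
    """Linear-in-K sketch (assumed aggregation, not the paper's module).

    Pool all objects once into a K-independent summary, then relate
    each object to that summary: roughly 2K operations instead of K^2.
    """
    pooled = objs.mean(axis=0)        # one pass over K objects
    return np.tanh(objs + pooled)     # one pass to relate each object
```

Because the pooled summary has a fixed size regardless of $K$, the same module can be applied unchanged to scenes with any number of objects, which is the property that makes zero-shot extrapolation to new object counts possible.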
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Push and Switch | OpenAI Fetch - Push and Switch 3-Push + 3-Switch (S+O) (test) | Success Rate | 75.1 | 18 |
| Switch | OpenAI Fetch 3-Switch (L+O) (test) | Success Rate | 85.7 | 9 |
| Push and Switch | OpenAI Fetch - Push and Switch 2-Push + 2-Switch (S) (test) | Success Rate | 81.3 | 9 |
| Object Comparison | Spriteworld | Success Rate | 83.1 | 9 |
| Object Goal | Spriteworld (train) | Success Rate | 84.6 | 9 |
| Object Goal | Spriteworld | Success Rate | 74.9 | 9 |
| Object Goal | Spriteworld unseen object numbers (test) | Success Rate | 79.1 | 9 |
| Object Interaction | Spriteworld | Success Rate | 76 | 9 |
| Switch | OpenAI Fetch Switch 2-Switch (L) (test) | Success Rate | 90.7 | 9 |
| Object Comparison | Spriteworld unseen object numbers (test) | Avg Success Rate | 80 | 9 |