View-Invariant Policy Learning via Zero-Shot Novel View Synthesis
About
Large-scale visuomotor policy learning is a promising approach toward developing generalizable manipulation systems. Yet, policies that can be deployed on diverse embodiments, environments, and observational modalities remain elusive. In this work, we investigate how knowledge from large-scale visual data of the world may be used to address one axis of variation for generalizable manipulation: observational viewpoint. Specifically, we study single-image novel view synthesis models, which learn 3D-aware scene-level priors by rendering images of the same scene from alternate camera viewpoints given a single input image. For practical application to diverse robotic data, these models must operate zero-shot, performing view synthesis on unseen tasks and environments. We empirically analyze view synthesis models within a simple data-augmentation scheme that we call View Synthesis Augmentation (VISTA) to understand their capabilities for learning viewpoint-invariant policies from single-viewpoint demonstration data. Upon evaluating the robustness of policies trained with our method to out-of-distribution camera viewpoints, we find that they outperform baselines in both simulated and real-world manipulation tasks. Videos and additional visualizations are available at https://s-tian.github.io/projects/vista.
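To make the augmentation scheme concrete, below is a minimal sketch of how a VISTA-style pipeline could replace a fraction of demonstration observations with synthesized novel views before policy training. It is an illustration under stated assumptions, not the authors' released implementation: the names `Transition`, `sample_camera_perturbation`, and `render_novel_view` are hypothetical placeholders, and `render_novel_view` stands in for whatever zero-shot single-image novel view synthesis model is used.

```python
# Hypothetical sketch of VISTA-style view synthesis augmentation.
# All names here are illustrative placeholders, not the authors' API.

from dataclasses import dataclass
from typing import Callable, List
import random

import numpy as np


@dataclass
class Transition:
    """One demonstration step: an RGB observation and the expert action."""
    image: np.ndarray   # (H, W, 3) uint8 observation from the original camera
    action: np.ndarray  # expert action recorded with the demonstration


def sample_camera_perturbation(max_angle_deg: float = 30.0) -> np.ndarray:
    """Sample a random relative camera rotation about the vertical axis.

    Returned as a 4x4 homogeneous transform. The specific viewpoint
    distribution is an assumption made for illustration.
    """
    theta = np.deg2rad(random.uniform(-max_angle_deg, max_angle_deg))
    c, s = np.cos(theta), np.sin(theta)
    pose = np.eye(4)
    pose[:3, :3] = np.array([[c, -s, 0.0],
                             [s,  c, 0.0],
                             [0.0, 0.0, 1.0]])
    return pose


def augment_demonstrations(
    demos: List[Transition],
    render_novel_view: Callable[[np.ndarray, np.ndarray], np.ndarray],
    augment_prob: float = 0.5,
) -> List[Transition]:
    """Replace a fraction of observations with synthesized novel views.

    `render_novel_view(image, relative_pose)` is a placeholder for a
    zero-shot single-image novel view synthesis model. Actions are kept
    unchanged because only the observation viewpoint is perturbed.
    """
    augmented = []
    for step in demos:
        if random.random() < augment_prob:
            pose = sample_camera_perturbation()
            image = render_novel_view(step.image, pose)
        else:
            image = step.image
        augmented.append(Transition(image=image, action=step.action))
    return augmented
```

The augmented transitions would then be fed to any standard imitation learning pipeline; since the expert actions are expressed in the robot frame rather than the camera frame, they remain valid targets after the viewpoint change.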
Related benchmarks
| Task | Dataset | Success Rate | Rank |
|---|---|---|---|
| PickPlace | RLBench Sim2sim Cross-Embodiment - PickPlace | 45.33 | 6 |
| Coffee | RLBench Sim2sim Shared Object - Coffee | 40.67 | 6 |
| Hammer | RLBench Sim2sim Unseen Object - Hammer | 56 | 6 |
| Nut Asm. | RLBench Sim2sim Cross-Embodiment - Nut Asm. | 28.67 | 6 |
| Stack | RLBench Sim2sim Shared Object - Stack | 66.67 | 6 |
| Threading | RLBench Sim2sim Unseen Object - Threading | 28 | 6 |