Learning 3D Particle-based Simulators from RGB-D Videos
About
Realistic simulation is critical for applications ranging from robotics to animation. Traditional analytic simulators sometimes struggle to produce sufficiently realistic dynamics, which can lead to problems such as the well-known "sim-to-real" gap in robotics. Learned simulators have emerged as an alternative that better captures real-world physical dynamics, but they require access to privileged ground-truth physics information such as precise object geometry or particle tracks. Here we propose a method for learning simulators directly from observations. Visual Particle Dynamics (VPD) jointly learns a latent particle-based representation of 3D scenes, a neural simulator of the latent particle dynamics, and a renderer that can produce images of the scene from arbitrary views. VPD learns end-to-end from posed RGB-D videos and does not require access to privileged information. Unlike existing 2D video prediction models, we show that VPD's 3D structure enables scene editing and long-term predictions. These results pave the way for downstream applications ranging from video editing to robotic planning.
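This page only summarizes the encoder/dynamics/renderer structure, so the following is a minimal numpy sketch of that three-stage pipeline under stated assumptions, not the paper's implementation: the function names, update rules, and constants (the graph `radius`, the 0.1/0.01 step sizes, random features standing in for learned image features, and the toy point-splat renderer) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def unproject(depth, K, cam_pose):
    """Lift a depth map (H, W) to a world-frame point cloud (H*W, 3)."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    pts_cam = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)  # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ cam_pose.T)[:, :3]  # camera-to-world transform

def encode(depths, intrinsics, poses, n_particles=512, latent_dim=64):
    """Encoder: fuse posed RGB-D views into latent particles (positions + features).
    Random features stand in for what a trained image encoder would attach."""
    pts = np.concatenate([unproject(d, K, T)
                          for d, K, T in zip(depths, intrinsics, poses)])
    idx = rng.choice(len(pts), size=n_particles, replace=False)  # subsample
    return pts[idx], rng.normal(size=(n_particles, latent_dim))

def dynamics_step(pos, feats, radius=0.2):
    """One message-passing update on a radius graph (stand-in for a learned GNN)."""
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = ((dist < radius) & (dist > 0.0)).astype(float)
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    feats = feats + 0.1 * (adj @ feats) / deg   # aggregate neighbor features
    pos = pos + 0.01 * feats[:, :3]             # decode a per-particle displacement
    return pos, feats

def render(pos, feats, K, cam_pose, H=64, W=64):
    """Toy point-splat renderer: project particles into an arbitrary query view."""
    pts_cam = (np.concatenate([pos, np.ones((len(pos), 1))], axis=1)
               @ np.linalg.inv(cam_pose).T)[:, :3]
    front = pts_cam[:, 2] > 1e-6
    proj = pts_cam[front] @ K.T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)
    img = np.zeros((H, W, 3))
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    img[uv[ok, 1], uv[ok, 0]] = feats[front][ok][:, :3]  # first 3 feature dims as color
    return img

# Encode a synthetic view, roll the latent particles forward, render a novel view.
K = np.array([[50.0, 0.0, 32.0], [0.0, 50.0, 32.0], [0.0, 0.0, 1.0]])
pos, feats = encode([np.full((64, 64), 2.0)], [K], [np.eye(4)])
for _ in range(10):
    pos, feats = dynamics_step(pos, feats)
img = render(pos, feats, K, np.eye(4))
```

In the actual method the encoder, dynamics model, and renderer are learned jointly end-to-end from posed RGB-D videos; the sketch above only mirrors the data flow between the three stages.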
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Dynamic Scene Rendering | MPM Dynamic Scenes 1.0 (test) | PSNR | 30.52 | 25 |
| Particle Dynamics Prediction | MPM-based Dynamics Dataset Bear (test) | CD | 3.41 | 5 |
| Particle Dynamics Prediction | MPM-based Dynamics Dataset SandFall (test) | CD | 1.99 | 5 |
| Particle Dynamics Prediction | MPM-based Dynamics Dataset Plasticine (test) | CD | 16.96 | 5 |
| Particle Dynamics Prediction | MPM-based Dynamics Dataset Multi-Objs (test) | CD | 14.26 | 5 |
| Particle Dynamics Prediction | MPM-based Dynamics Dataset FluidR (test) | CD | 3.22 | 5 |
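For reference, the Chamfer Distance (CD) used in the dynamics benchmarks above is typically computed as a symmetric nearest-neighbor distance between predicted and ground-truth point clouds; a minimal numpy version follows. Conventions differ across benchmarks (squared vs. unsquared distances, sum vs. mean, scale factors), so the exact formula behind these numbers may vary.

```python
import numpy as np

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between point clouds pred (N, 3) and target (M, 3).

    For each point, take the squared distance to its nearest neighbor in the
    other cloud, then average both directions. Uses an O(N*M) distance matrix,
    so it is only suitable for modestly sized clouds.
    """
    d2 = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```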