Class-agnostic Reconstruction of Dynamic Objects from Videos
About
We introduce REDO, a class-agnostic framework to REconstruct Dynamic Objects from RGBD or calibrated videos. Compared to prior work, our problem setting is more realistic yet more challenging for three reasons: 1) due to occlusion or camera settings, an object of interest may never be entirely visible, yet we aim to reconstruct its complete shape; 2) we aim to handle different object dynamics, including rigid motion, non-rigid deformation, and articulation; 3) we aim to reconstruct different categories of objects with one unified framework. To address these challenges, we develop two novel modules. First, we introduce a canonical 4D implicit function which is pixel-aligned with aggregated temporal visual cues. Second, we develop a 4D transformation module which captures object dynamics to support temporal propagation and aggregation. We study the efficacy of REDO in extensive experiments on the synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, and on the real-world video dataset 3DPW. We find that REDO outperforms state-of-the-art dynamic reconstruction methods by a clear margin. In ablation studies we validate each developed component.
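To illustrate the idea of a pixel-aligned implicit query aggregated over time, here is a minimal NumPy sketch. All names (`sample_feature`, `query_occupancy`, the toy projection and decoder) are hypothetical stand-ins; the actual framework uses learned image encoders, a learned decoder, and the 4D transformation module to warp points between frames, none of which is reproduced here.

```python
import numpy as np

def sample_feature(feat_map, uv):
    """Nearest-neighbor sample of an (H, W, C) feature map at normalized uv in [0, 1]."""
    H, W, _ = feat_map.shape
    x = int(round(uv[0] * (W - 1)))
    y = int(round(uv[1] * (H - 1)))
    return feat_map[y, x]

def query_occupancy(point, frames, project, decoder):
    """Toy pixel-aligned 4D implicit query: project a canonical 3D point into
    every frame, sample the aligned per-frame feature, average over time
    (temporal aggregation), and decode an occupancy probability."""
    feats = [sample_feature(f, project(point, t)) for t, f in enumerate(frames)]
    agg = np.mean(feats, axis=0)  # aggregate temporal visual cues
    return decoder(np.concatenate([point, agg]))

# Toy usage with stand-in components (not the paper's learned modules):
frames = [np.random.rand(4, 4, 3) for _ in range(2)]   # two frames of 3-channel features
project = lambda p, t: (0.5, 0.5)                       # dummy camera projection
decoder = lambda v: 1.0 / (1.0 + np.exp(-v.sum()))      # dummy sigmoid "MLP"
occ = query_occupancy(np.array([0.1, 0.2, 0.3]), frames, project, decoder)
```

The key property the sketch mirrors is that the implicit function is conditioned on image-aligned features from every observed frame, so partial observations across time jointly constrain the complete canonical shape.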
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| 4D Reconstruction | SAIL-VOS 3D and DeformingThings4D++ (unseen categories: dog, gorilla, puma) | mIoU 38.5 | 5 |
| Dynamic 3D Reconstruction | SAIL-VOS 3D | mIoU 31.9 | 5 |
| Dynamic 3D Reconstruction | DeformingThings4D++ | mIoU 0.574 | 5 |
| Dynamic 3D Reconstruction | 3DPW | mIoU 0.416 | 4 |