LAMP: Lift Image-Editing as General 3D Priors for Open-world Manipulation
About
Human-like generalization in the open world remains a fundamental challenge for robotic manipulation. Existing learning-based methods, including reinforcement learning, imitation learning, and vision-language-action models (VLAs), often struggle with novel tasks and unseen environments. Another promising direction is to explore generalizable representations that capture fine-grained spatial and geometric relations for open-world manipulation. While large language models (LLMs) and vision-language models (VLMs) provide strong semantic reasoning over language or annotated 2D representations, their limited 3D awareness restricts their applicability to fine-grained manipulation. To address this, we propose LAMP, which lifts image editing into 3D priors, extracting inter-object 3D transformations as continuous, geometry-aware representations. Our key insight is that image editing inherently encodes rich 2D spatial cues, and lifting these implicit cues into 3D transformations provides fine-grained, accurate guidance for open-world manipulation. Extensive experiments demonstrate that LAMP delivers precise 3D transformations and achieves strong zero-shot generalization in open-world manipulation. Project page: https://zju3dv.github.io/LAMP/.
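To make the "lifting 2D cues into 3D transformations" idea concrete, below is a minimal sketch of one common way such a lift can be implemented; it is an illustration under assumptions, not LAMP's released pipeline. It assumes you already have pixel correspondences between the source image and the edited image (e.g., from a matcher), per-pixel depth, and camera intrinsics `K` (all hypothetical inputs here): matched pixels are back-projected to 3D point sets, and a rigid inter-object transform is recovered with the Kabsch algorithm.

```python
import numpy as np

def backproject(pixels, depth, K):
    """Lift 2D pixel coordinates to 3D points in the camera frame.

    pixels: (N, 2) array of (u, v) coordinates.
    depth:  (N,) per-pixel depth values (metres, assumed known).
    K:      (3, 3) camera intrinsics.
    """
    uv1 = np.concatenate([pixels, np.ones((len(pixels), 1))], axis=1)
    return (np.linalg.inv(K) @ uv1.T).T * depth[:, None]

def rigid_transform(src, dst):
    """Kabsch/Procrustes: least-squares rigid transform mapping src -> dst.

    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Sanity check with synthetic data: apply a known rigid motion to the
# back-projected "source" points and verify it is recovered exactly.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pix_src = rng.uniform(100.0, 500.0, size=(50, 2))   # hypothetical matches
depth_src = rng.uniform(0.5, 1.5, size=50)
pts_src = backproject(pix_src, depth_src, K)
angle = np.pi / 8
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.05, 0.2])
pts_dst = pts_src @ R_true.T + t_true               # simulated "edited" pose
R, t = rigid_transform(pts_src, pts_dst)
assert np.allclose(R, R_true, atol=1e-6) and np.allclose(t, t_true, atol=1e-6)
```

The output (R, t) is exactly the kind of continuous, geometry-aware representation the abstract describes: a 3D relative pose that downstream manipulation can consume directly, rather than a language label or 2D annotation.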
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Drawer Closing | Real-world robot manipulation | Success Rate: 7 | 6 |
| Lid Covering | Real-world manipulation | Success Rate: 80 | 4 |
| Object-Centric Manipulation | Real-world 10 object-centric tasks | Egg Placing Success Rate: 60 | 4 |
| Articulated Object Manipulation | Real-world 3 articulated-object tasks | Drawer Opening Success Rate: 60 | 3 |
| Pencil Insertion | Real-world manipulation | Success Rate: 7 | 2 |
| Ring Stacking | Real-world manipulation | Success Rate: 0.8 | 2 |