LAMP: Lift Image-Editing as General 3D Priors for Open-world Manipulation

About

Human-like generalization in the open world remains a fundamental challenge for robotic manipulation. Existing learning-based methods, including reinforcement learning, imitation learning, and vision-language-action models (VLAs), often struggle with novel tasks and unseen environments. Another promising direction is to explore generalizable representations that capture fine-grained spatial and geometric relations for open-world manipulation. While large language models (LLMs) and vision-language models (VLMs) provide strong semantic reasoning over language or annotated 2D representations, their limited 3D awareness restricts their applicability to fine-grained manipulation. To address this, we propose LAMP, which lifts image editing into general 3D priors, extracting inter-object 3D transformations as continuous, geometry-aware representations. Our key insight is that image editing inherently encodes rich 2D spatial cues, and lifting these implicit cues into 3D transformations provides fine-grained, accurate guidance for open-world manipulation. Extensive experiments demonstrate that LAMP delivers precise 3D transformations and achieves strong zero-shot generalization in open-world manipulation. Project page: https://zju3dv.github.io/LAMP/.
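The abstract does not spell out how 2D editing cues are lifted into 3D. As a minimal sketch of one way such lifting can work (not the paper's confirmed pipeline), assume matched pixels on the manipulated object before and after the edit, metric depth at those pixels, and pinhole intrinsics K: back-projecting both point sets to 3D and solving the Kabsch / orthogonal-Procrustes problem yields an inter-object SE(3) transformation. The helper names below (backproject, estimate_rigid_transform) are hypothetical.

```python
import numpy as np

def backproject(pts_2d, depth, K):
    """Lift pixels (N, 2) with metric depths (N,) to camera-frame
    3D points (N, 3) under a pinhole model with intrinsics K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (pts_2d[:, 0] - cx) * depth / fx
    y = (pts_2d[:, 1] - cy) * depth / fy
    return np.stack([x, y, depth], axis=1)

def estimate_rigid_transform(src, dst):
    """Closed-form least-squares SE(3) transform mapping src -> dst
    (Kabsch algorithm): returns R (3x3) and t (3,)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic sanity check: recover a known object motion exactly.
rng = np.random.default_rng(0)
P_src = rng.uniform(-0.2, 0.2, size=(50, 3)) + np.array([0.0, 0.0, 1.0])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.10, -0.05, 0.02])
P_dst = P_src @ R_true.T + t_true
R, t = estimate_rigid_transform(P_src, P_dst)
assert np.allclose(R, R_true, atol=1e-6) and np.allclose(t, t_true, atol=1e-6)

# In an image-editing setting one would instead do (inputs hypothetical):
#   P_src = backproject(pts_before, depth_before, K)
#   P_dst = backproject(pts_after,  depth_after,  K)
#   R, t  = estimate_rigid_transform(P_src, P_dst)
```

In practice the 2D matches could come from an off-the-shelf feature matcher applied to the original and edited images, and depth from a sensor or a monocular depth model; the paper's actual lifting procedure may differ in its details.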

Jingjing Wang, Zhengdong Hong, Chong Bao, Yuke Zhu, Junhan Sun, Guofeng Zhang • 2026

Related benchmarks

Task                            | Dataset                                | Metric                       | Result | Rank
--------------------------------|----------------------------------------|------------------------------|--------|-----
Drawer Closing                  | Real-world robot manipulation          | Success Rate                 | 7      | 6
Lid Covering                    | Real-world manipulation                | Success Rate                 | 80     | 4
Object-Centric Manipulation     | Real-world 10 object-centric tasks     | Egg Placing Success Rate     | 60     | 4
Articulated Object Manipulation | Real-world 3 articulated-object tasks  | Drawer Opening Success Rate  | 60     | 3
Pencil Insertion                | Real-world manipulation                | Success Rate                 | 7      | 2
Ring Stacking                   | Real-world manipulation                | Success Rate                 | 0.8    | 2
