EditP23: 3D Editing via Propagation of Image Prompts to Multi-View
About
We present EditP23, a method for mask-free 3D editing that propagates 2D image edits to multi-view representations in a 3D-consistent manner. In contrast to traditional approaches that rely on text-based prompting or explicit spatial masks, EditP23 enables intuitive edits by conditioning on a pair of images: an original view and its user-edited counterpart. These image prompts guide an edit-aware flow in the latent space of a pre-trained multi-view diffusion model, allowing the edit to be coherently propagated across views. Our method operates in a feed-forward manner, without optimization, and preserves the identity of the original object in both structure and appearance. We demonstrate its effectiveness across a range of object categories and editing scenarios, achieving high fidelity to the source while requiring no manual masks.
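The core idea above — deriving an edit signal from a prompt pair (original view, user-edited view) and applying it to every view's latent in a feed-forward pass — can be illustrated with a toy sketch. This is a minimal, hypothetical illustration using plain NumPy vectors; the encoder, function names, and the simple additive "edit direction" are stand-ins, not the authors' actual diffusion-model implementation.

```python
import numpy as np

def encode(image: np.ndarray) -> np.ndarray:
    """Stand-in encoder: flatten the image into a latent vector.
    (A real system would use the diffusion model's latent encoder.)"""
    return image.reshape(-1).astype(np.float64)

def propagate_edit(views, src_view, edited_view, strength=1.0):
    """Compute an edit direction in latent space from the prompt pair
    and shift each view's latent along it (feed-forward, no masks)."""
    direction = encode(edited_view) - encode(src_view)
    return [encode(v) + strength * direction for v in views]

# Toy 2x2 "images": a source view, its user-edited counterpart,
# and four other views of the same object.
src = np.zeros((2, 2))
edited = np.ones((2, 2))
views = [np.full((2, 2), 0.5) for _ in range(4)]

latents = propagate_edit(views, src, edited)
```

In this sketch the same latent offset is applied to every view; the actual method instead steers the denoising trajectory of a multi-view diffusion model so the edit stays 3D-consistent across viewpoints.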
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Editing | 3D Editing | Time (s) | 50.91 | 7 |
| 3D Scene Editing | Eval3DEdit | Action Change (CLIPimg) | 0.5312 | 7 |
| 3D Editing | Eval3DEdit (test) | Action Change (Uni3Dpc) | 0.1272 | 7 |
| Image-guided 3D shape editing | BenchUp | Condition Alignment (SSIM) | 0.759 | 3 |