
EditP23: 3D Editing via Propagation of Image Prompts to Multi-View

About

We present EditP23, a method for mask-free 3D editing that propagates 2D image edits to multi-view representations in a 3D-consistent manner. In contrast to traditional approaches that rely on text-based prompting or explicit spatial masks, EditP23 enables intuitive edits by conditioning on a pair of images: an original view and its user-edited counterpart. These image prompts are used to guide an edit-aware flow in the latent space of a pre-trained multi-view diffusion model, allowing the edit to be coherently propagated across views. Our method operates in a feed-forward manner, without optimization, and preserves the identity of the original object, in both structure and appearance. We demonstrate its effectiveness across a range of object categories and editing scenarios, achieving high fidelity to the source while requiring no manual masks.
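To make the pipeline concrete, here is a toy sketch of the core idea: encode the original and edited prompt views, take their latent difference as a shared edit direction, and apply that direction while denoising each view's latent so the edit propagates consistently. Everything here (the `encode` stand-in, the `denoise_step` schedule, the `guidance` weight) is a hypothetical placeholder, not the paper's actual model or API:

```python
import numpy as np

def encode(image):
    """Stand-in for a latent encoder: flatten the image to a latent vector."""
    return image.reshape(-1).astype(np.float64)

def denoise_step(latent, t, edit_direction, guidance=1.0):
    """One toy denoising step, nudged along the shared edit direction.

    The 1/t factor is an arbitrary schedule for illustration only.
    """
    return latent + guidance * (1.0 / t) * edit_direction

def propagate_edit(views, original, edited, steps=10):
    """Propagate a 2D edit to every view via one shared latent direction.

    views:    list of per-view images (arrays)
    original: the unedited prompt view
    edited:   the user-edited counterpart of that view
    """
    direction = encode(edited) - encode(original)  # edit-aware flow direction
    out = []
    for v in views:
        z = encode(v)
        for t in range(steps, 0, -1):
            z = denoise_step(z, t, direction)
        out.append(z)
    return out
```

Because every view receives the same latent direction, the edit lands coherently across views, and a zero edit (edited view equal to the original) leaves each view's latent unchanged, mirroring the identity-preservation property described above.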

Roi Bar-On, Dana Cohen-Bar, Daniel Cohen-Or • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| 3D Editing | 3D Editing | Time (s) | 50.91 | 7 |
| 3D Scene Editing | Eval3DEdit | Action Change (CLIPimg) | 0.5312 | 7 |
| 3D Editing | Eval3DEdit (test) | Action Change (Uni3Dpc) | 0.1272 | 7 |
| Image-guided 3D shape editing | BenchUp | Condition Alignment SSIM | 0.759 | 3 |
