A²-Edit: Precise Reference-Guided Image Editing of Arbitrary Objects and Ambiguous Masks
About
We propose A²-Edit, a unified inpainting framework that lets users replace any target region with a reference object of an arbitrary category using only a coarse mask. To address the severe homogenization and limited category coverage of existing datasets, we construct UniEdit-500K, a large-scale multi-category dataset spanning 8 major categories, 209 fine-grained subcategories, and 500,104 image pairs in total. This category diversity poses new challenges for the model, which must automatically learn the semantic relationships and distinctions across categories. To this end, we introduce a Mixture of Transformer module that models different object categories through dynamic expert selection and further strengthens cross-category semantic transfer and generalization through collaboration among experts. In addition, we propose a Mask Annealing Training Strategy (MATS) that progressively relaxes mask precision during training, reducing the model's reliance on accurate masks and improving robustness across diverse editing tasks. Extensive experiments on benchmarks such as VITON-HD and AnyInsertion show that A²-Edit consistently outperforms existing approaches across all metrics, providing an efficient new solution for arbitrary-object editing.
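The description above does not spell out how MATS relaxes mask precision. A minimal sketch of one plausible realization, assuming a binary mask is coarsened by morphological dilation whose radius grows with training progress (the function names, the `max_dilate` parameter, and the linear schedule are all illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dilate(mask: np.ndarray, k: int) -> np.ndarray:
    """Apply k iterations of 3x3 binary dilation using shifted maxima."""
    out = mask.astype(bool)
    for _ in range(k):
        p = np.pad(out, 1)  # pad with False so edges dilate correctly
        out = (p[1:-1, 1:-1]
               | p[:-2, 1:-1] | p[2:, 1:-1]    # vertical neighbors
               | p[1:-1, :-2] | p[1:-1, 2:]    # horizontal neighbors
               | p[:-2, :-2] | p[:-2, 2:]      # diagonal neighbors
               | p[2:, :-2] | p[2:, 2:])
    return out

def anneal_mask(mask: np.ndarray, progress: float,
                max_dilate: int = 16) -> np.ndarray:
    """Coarsen a precise mask as training progresses.

    progress in [0, 1]: early training (0) keeps the accurate mask,
    late training (1) dilates it by up to `max_dilate` pixels, so the
    model gradually learns to tolerate rough, ambiguous masks.
    """
    k = int(round(progress * max_dilate))
    return dilate(mask, k)
```

Under this schedule the model first learns the editing task with tight supervision, then is progressively exposed to the coarse masks it will see at inference time.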
Related benchmarks
| Task | Dataset | DINO-I | Rank |
|---|---|---|---|
| Reference-Guided Image Editing | UniEdit (test) | 62.28 | 14 |
| Reference-Guided Image Editing | VITON-HD Fine Mask (test) | 64.07 | 7 |
| Reference-Guided Image Editing | VITON-HD Rough Mask (test) | 63.79 | 7 |
| Reference-Guided Image Editing | AnyInsertion Fine Mask (test) | 61.67 | 7 |
| Reference-Guided Image Editing | AnyInsertion Rough Mask (test) | 61.73 | 7 |