
In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer

About

Instruction-based image editing enables precise modifications via natural language prompts, but existing methods face a precision-efficiency tradeoff: fine-tuning demands massive datasets (>10M samples) and heavy computational resources, while training-free approaches suffer from weak instruction comprehension. We address this by proposing ICEdit, which leverages the inherent comprehension and generation abilities of large-scale Diffusion Transformers (DiTs) through three key innovations: (1) an in-context editing paradigm that requires no architectural modifications; (2) minimal parameter-efficient fine-tuning to improve editing quality; (3) Early Filter Inference-Time Scaling, which uses VLMs to select high-quality noise samples for efficiency. Experiments show that ICEdit achieves state-of-the-art editing performance with only 0.1% of the training data and 1% of the trainable parameters compared to previous methods. Our approach establishes a new paradigm for balancing precision and efficiency in instructional image editing. Code and demos can be found at https://river-zhang.github.io/ICEdit-gh-pages/.
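The Early Filter Inference-Time Scaling idea described above can be sketched as a generic select-then-finish loop: run only a few early denoising steps for several initial noise seeds, let a scorer (a VLM in the paper) pick the most promising seed, and spend the full denoising budget only on that one. The sketch below is a hypothetical illustration of the control flow, not the authors' implementation; `toy_denoise` and `toy_score` are stand-ins for the real DiT and VLM.

```python
def early_filter_edit(image, instruction, denoise, score,
                      n_seeds=4, early_steps=5, full_steps=50):
    """Preview several noise seeds with a short denoising run,
    keep the seed the scorer likes best, then finish only that one."""
    candidates = []
    for seed in range(n_seeds):
        preview = denoise(image, instruction, seed, steps=early_steps)
        candidates.append((score(preview, instruction), seed))
    _, best_seed = max(candidates)
    # Full-length denoising run only for the winning seed.
    return denoise(image, instruction, best_seed, steps=full_steps)

# Toy stand-ins (NOT the real model or VLM): deterministic stubs
# so the selection logic can be demonstrated end to end.
def toy_denoise(image, instruction, seed, steps):
    return f"{image}|{instruction}|seed={seed}|steps={steps}"

def toy_score(preview, instruction):
    # Pretend the scorer prefers seed 2.
    return 1.0 if "seed=2" in preview else 0.0

result = early_filter_edit("img.png", "make it night",
                           denoise=toy_denoise, score=toy_score)
print(result)  # img.png|make it night|seed=2|steps=50
```

The key efficiency point is that the expensive full-length run happens once, while the filtering cost is only `n_seeds × early_steps` cheap steps.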

Zechuan Zhang, Ji Xie, Yu Lu, Zongxin Yang, Yi Yang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Editing | ImgEdit-Bench | Overall Score | 3.05 | 191 |
| Dehazing | SOTS | – | – | 154 |
| Image Editing | GEdit-Bench | Semantic Consistency | 4.94 | 92 |
| Image Editing | KRIS-Bench | Factual Knowledge Score | 0.4699 | 74 |
| Single-image editing | GEdit EN (full) | BG Change | 2.73 | 42 |
| Instruction-based Image Editing | ImgEdit Bench 1.0 (test) | Add Score | 3.58 | 37 |
| Instructive image editing | MagicBrush (test) | CLIP Image | 0.8703 | 37 |
| Image Editing | ImgEdit | ImgEdit | 3.05 | 31 |
| Image Editing | GEdit-EN | GEdit-EN Score | 4.84 | 27 |
| Multi-turn image editing | MSE-Bench | Success Rate (Turn 1) | 63.3 | 26 |

Showing 10 of 78 rows.
