In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer
About
Instruction-based image editing enables precise modifications via natural language prompts, but existing methods face a precision-efficiency tradeoff: fine-tuning demands massive datasets (>10M samples) and computational resources, while training-free approaches suffer from weak instruction comprehension. We address this by proposing ICEdit, which leverages the inherent comprehension and generation abilities of large-scale Diffusion Transformers (DiTs) through three key innovations: (1) an in-context editing paradigm that requires no architectural modifications; (2) minimal parameter-efficient fine-tuning for quality improvement; (3) Early Filter Inference-Time Scaling, which uses VLMs to select high-quality noise samples for efficiency. Experiments show that ICEdit achieves state-of-the-art editing performance with only 0.1% of the training data and 1% of the trainable parameters used by previous methods. Our approach establishes a new paradigm for balancing precision and efficiency in instructional image editing. Code and demos are available at https://river-zhang.github.io/ICEdit-gh-pages/.
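To make the Early Filter idea concrete, here is a minimal sketch of inference-time scaling via early filtering: sample several initial noise seeds, run only a few denoising steps for each, score the partial results against the instruction, and keep the best seed for the full generation. Note that `partial_denoise` and `vlm_score` below are hypothetical stand-ins for a real DiT sampler and a real VLM judge; this is an illustration of the selection loop, not the paper's implementation.

```python
import random

def early_filter_select(instruction, num_candidates=4, early_steps=2, seed=0):
    """Sketch of Early Filter Inference-Time Scaling.

    Draws `num_candidates` initial noise seeds, previews each with a few
    denoising steps, scores the previews, and returns the best seed.
    """
    rng = random.Random(seed)
    # Stand-in noise seeds; a real pipeline would sample latent tensors.
    candidates = [rng.random() for _ in range(num_candidates)]

    def partial_denoise(noise, steps):
        # Placeholder: a real sampler would run `steps` DiT denoising steps
        # and return a rough preview image.
        return noise

    def vlm_score(preview, instruction):
        # Placeholder: a real VLM would rate how well the preview
        # follows the editing instruction.
        return preview

    previews = [partial_denoise(n, early_steps) for n in candidates]
    scores = [vlm_score(p, instruction) for p in previews]
    best = max(range(num_candidates), key=lambda i: scores[i])
    # The winning seed would then be used for the full denoising run.
    return candidates[best]
```

The key efficiency point is that only the selected seed pays for a full sampling trajectory; the filtered candidates cost just `early_steps` steps plus one VLM call each.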
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Editing | ImgEdit-Bench | Overall Score | 3.05 | 191 |
| Dehazing | SOTS | -- | -- | 154 |
| Image Editing | GEdit-Bench | Semantic Consistency | 4.94 | 92 |
| Image Editing | KRIS-Bench | Factual Knowledge Score | 0.4699 | 74 |
| Single-image editing | GEdit EN (full) | BG Change | 2.73 | 42 |
| Instruction-based Image Editing | ImgEdit Bench 1.0 (test) | Add Score | 3.58 | 37 |
| Instructive image editing | MagicBrush (test) | CLIP Image | 0.8703 | 37 |
| Image Editing | ImgEdit | ImgEdit | 3.05 | 31 |
| Image Editing | GEdit-EN | GEdit-EN Score | 4.84 | 27 |
| Multi-turn image editing | MSE-Bench | Success Rate (Turn 1) | 63.3 | 26 |