# InstantDrag: Improving Interactivity in Drag-based Image Editing

## About
Drag-based image editing has recently gained popularity for its interactivity and precision. However, despite the ability of text-to-image models to generate samples within a second, drag editing still lags behind due to the challenge of accurately reflecting user interaction while maintaining image content. Some existing approaches rely on computationally intensive per-image optimization or intricate guidance-based methods, requiring additional inputs such as masks for movable regions and text prompts, thereby compromising the interactivity of the editing process. We introduce InstantDrag, an optimization-free pipeline that enhances interactivity and speed, requiring only an image and a drag instruction as input. InstantDrag consists of two carefully designed networks: a drag-conditioned optical flow generator (FlowGen) and an optical flow-conditioned diffusion model (FlowDiffusion). InstantDrag learns motion dynamics for drag-based image editing from real-world video datasets by decomposing the task into motion generation and motion-conditioned image generation. We demonstrate InstantDrag's capability to perform fast, photo-realistic edits without masks or text prompts through experiments on facial video datasets and general scenes. These results highlight the efficiency of our approach in handling drag-based image editing, making it a promising solution for interactive, real-time applications.
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| image drag-editing | DragBench DR (averages) | Prep + Edit Time (s) | 1.2 | 10 |
| Drag-based Image Editing | ReD Bench | IFbg | 93 | 10 |
| Drag-based Image Editing | DragBench DR | IF (Background) | 94.4 | 10 |
| Drag-style image editing | FaceForensics++ (test) | FID | 56.48 | 9 |
| Drag-style image editing | TED-talks (test) | FID | 64.35 | 9 |