
InstructPix2Pix: Learning to Follow Image Editing Instructions

About

We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models (a language model, GPT-3, and a text-to-image model, Stable Diffusion) to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.
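At inference time, the paper applies classifier-free guidance over both conditionings (the input image and the text instruction), with a separate guidance scale for each. A minimal sketch of that score combination, with hypothetical variable names and illustrative scale values:

```python
import numpy as np

def dual_cfg(eps_uncond, eps_img, eps_full, s_img=1.5, s_txt=7.5):
    """Combine three noise predictions with two guidance scales.

    eps_uncond: prediction with neither image nor text conditioning
    eps_img:    prediction conditioned on the input image only
    eps_full:   prediction conditioned on both image and instruction
    s_img:      image guidance scale (how closely to follow the input image)
    s_txt:      text guidance scale (how strongly to follow the instruction)
    """
    return (eps_uncond
            + s_img * (eps_img - eps_uncond)
            + s_txt * (eps_full - eps_img))

# With both scales set to 1, the combination reduces to the fully
# conditioned prediction, as expected for classifier-free guidance.
```

In practice this combination runs inside each denoising step of the diffusion sampler; the scale values above are only illustrative defaults, and users trade them off to balance faithfulness to the input image against strength of the edit.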

Tim Brooks, Aleksander Holynski, Alexei A. Efros · 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Composed Image Retrieval | CIRR (test) | Recall@1 | 4.07 | 481 |
| Composed Image Retrieval | CIRCO (test) | mAP@10 | 2.1 | 234 |
| Image Editing | ImgEdit-Bench | Overall Score | 1.88 | 132 |
| Image Editing | PIE-Bench | PSNR | 20.82 | 116 |
| Image Editing | GEdit-Bench English | G_O (Overall Quality) | 3.68 | 73 |
| Image Editing | KRIS-Bench | Factual Knowledge Score | 23.33 | 65 |
| Instructive Image Editing | EMU Edit (test) | CLIP Image Similarity | 0.857 | 46 |
| Image Editing | PIE-Bench (test) | PSNR | 20.8 | 46 |
| Image Editing | GEdit-Bench | Semantic Consistency | 3.58 | 46 |
| Instruction-based Image Editing | ImgEdit Bench 1.0 (test) | Add Score | 2.45 | 37 |

Showing 10 of 193 rows
