Fine-Tuning InstructPix2Pix for Advanced Image Colorization
About
This paper presents a novel approach to human image colorization by fine-tuning the InstructPix2Pix model, which integrates a language model (GPT-3) with a text-to-image model (Stable Diffusion). Although the original InstructPix2Pix model is proficient at editing images from textual instructions, it exhibits limitations in the focused domain of colorization. To address this, we fine-tuned the model on the IMDB-WIKI dataset, pairing black-and-white images with a diverse set of colorization prompts generated by ChatGPT. This paper contributes by (1) applying fine-tuning techniques to Stable Diffusion models specifically for colorization tasks, and (2) employing generative models to create varied conditioning prompts. After fine-tuning, our model quantitatively outperforms the original InstructPix2Pix model on multiple metrics and qualitatively produces more realistically colored images. The code for this project is available in the GitHub repository https://github.com/AllenAnZifeng/DeepLearning282.
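The data-pairing step described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: each color photo yields a grayscale input image, the original color image as the target, and a randomly chosen colorization instruction. The prompt list here is an illustrative stand-in for the ChatGPT-generated prompt set, and a dummy image stands in for an IMDB-WIKI photo.

```python
import random
from PIL import Image

# Stand-in examples; the paper's prompts were generated by ChatGPT.
PROMPTS = [
    "Colorize this photograph",
    "Add realistic colors to this black-and-white image",
    "Turn this grayscale portrait into a color photo",
]

def make_training_pair(color_img: Image.Image, rng: random.Random):
    """Return (grayscale input, color target, instruction prompt)."""
    # Convert to grayscale, then back to 3-channel RGB so the input
    # matches the shape expected by an image-to-image diffusion model.
    gray = color_img.convert("L").convert("RGB")
    prompt = rng.choice(PROMPTS)
    return gray, color_img, prompt

# Example with a dummy solid-color image in place of a dataset photo.
rng = random.Random(0)
dummy = Image.new("RGB", (64, 64), (200, 40, 40))
gray, target, prompt = make_training_pair(dummy, rng)
print(gray.size, gray.mode, prompt)
```

During fine-tuning, the grayscale image and prompt serve as the conditioning pair and the original color image as the reconstruction target, following the InstructPix2Pix training setup.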
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Explainability Stability Analysis | Instruction-based Image Editing Stability Evaluation (10 prompts, 30 perturbations) | Jaccard Index: 85 | 6 |
| Fidelity Analysis | gSMILE Fidelity Analysis, prompt: 'Transform the weather to make it snowing' (test) | WMSE: 0.012 | 3 |
| Instruction-based Image Editing Consistency | 'Transform the weather to make it snowing' prompt, 1000 iterations (30 perturbations) | Variance: 0.0161 | 3 |