
Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model

About

Recently, text-to-image denoising diffusion probabilistic models (DDPMs) have demonstrated impressive image generation capabilities and have also been successfully applied to image inpainting. However, in practice, users often require more control over the inpainting process beyond textual guidance, especially when they want to composite objects with customized appearance, color, shape, and layout. Unfortunately, existing diffusion-based inpainting methods are limited to single-modal guidance and require task-specific training, hindering their cross-modal scalability. To address these limitations, we propose Uni-paint, a unified framework for multimodal inpainting that offers various modes of guidance, including unconditional, text-driven, stroke-driven, exemplar-driven inpainting, as well as a combination of these modes. Furthermore, our Uni-paint is based on pretrained Stable Diffusion and does not require task-specific training on specific datasets, enabling few-shot generalizability to customized images. We have conducted extensive qualitative and quantitative evaluations that show our approach achieves comparable results to existing single-modal methods while offering multimodal inpainting capabilities not available in other methods. Code will be available at https://github.com/ysy31415/unipaint.
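Uni-paint builds on a pretrained diffusion model rather than task-specific training. A mechanism common to many diffusion-based inpainting approaches (and a useful mental model here, though not necessarily the authors' exact implementation) is masked blending at each denoising step: inside the hole, keep the generated content; outside it, keep the known image content noised to the same timestep. A minimal numpy sketch of that blending step, with all names hypothetical:

```python
import numpy as np

def blend_step(x_gen, x_known_noised, mask):
    """One masked-blending step used in diffusion inpainting sketches.

    mask == 1 marks the hole to be filled: keep the generated sample there;
    elsewhere keep the known image content (noised to the same timestep).
    """
    return mask * x_gen + (1.0 - mask) * x_known_noised

# Toy 4x4 single-channel example (stand-in for a latent at one timestep).
rng = np.random.default_rng(0)
x_gen = rng.normal(size=(4, 4))      # model's denoised proposal
x_known = rng.normal(size=(4, 4))    # known image, already noised
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                 # 2x2 hole in the center
blended = blend_step(x_gen, x_known, mask)
```

In a full sampler this blend would be applied after every denoising step, so the generated region stays consistent with the untouched surroundings.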

Shiyuan Yang, Xiaodong Chen, Jing Liao • 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Object Removal | COCO 2017 (val) | FID | 77.58 | 9 |
| Text-guided image inpainting | Human evaluation | Quality Score | 3.37 | 5 |
| Text-guided image inpainting | MS-COCO | NIMA Score | 5.363 | 5 |
