OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on

About

We present OOTDiffusion, a novel network architecture for realistic and controllable image-based virtual try-on (VTON). We leverage the power of pretrained latent diffusion models and design an outfitting UNet to learn the garment detail features. Without a redundant warping process, the garment features are precisely aligned with the target human body via the proposed outfitting fusion in the self-attention layers of the denoising UNet. To further enhance controllability, we introduce outfitting dropout during training, which enables the strength of the garment features to be adjusted through classifier-free guidance. Comprehensive experiments on the VITON-HD and Dress Code datasets demonstrate that OOTDiffusion efficiently generates high-quality try-on results for arbitrary human and garment images, outperforming other VTON methods in both realism and controllability. Our source code is available at https://github.com/levihsu/OOTDiffusion.
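The outfitting fusion step can be pictured as joint self-attention over human-image and garment tokens. Below is a minimal sketch, assuming the garment hidden states produced by the outfitting UNet are concatenated with the denoising UNet's hidden states along the token dimension before self-attention and the garment half is discarded afterwards; the class and parameter names are illustrative, not the authors' actual code.

```python
import torch
import torch.nn as nn

class OutfittingFusionAttention(nn.Module):
    """Sketch: self-attention over concatenated human and garment tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, garment: torch.Tensor) -> torch.Tensor:
        # x:       (B, N, C) hidden states of the denoising UNet
        # garment: (B, N, C) matching hidden states from the outfitting UNet
        n = x.shape[1]
        fused = torch.cat([x, garment], dim=1)   # (B, 2N, C) joint token sequence
        out, _ = self.attn(fused, fused, fused)  # attention aligns garment to body
        return out[:, :n]                        # keep only the human-image tokens
```

Outfitting dropout then makes classifier-free guidance possible at inference: because the garment conditioning is occasionally zeroed during training, the same model yields both conditional and unconditional noise predictions, and their difference can be scaled. A hedged sketch, where `denoise` is a hypothetical wrapper around the conditioned denoising step and `s` is the guidance scale:

```python
def guided_noise_pred(denoise, x_t, t, garment_latent, s: float = 1.5):
    # Conditional prediction with garment features fused in.
    eps_cond = denoise(x_t, t, garment_latent)
    # Unconditional prediction: garment features dropped (zeroed),
    # mirroring the outfitting dropout applied during training.
    eps_uncond = denoise(x_t, t, torch.zeros_like(garment_latent))
    # Classifier-free guidance: push the prediction toward the
    # garment-conditioned direction by the guidance scale s.
    return eps_uncond + s * (eps_cond - eps_uncond)
```

Larger values of `s` strengthen the garment features in the output, which is the controllability knob the abstract refers to.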

Yuhao Xu, Tao Gu, Weifeng Chen, Chengcai Chen · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Virtual Try-On | VITON-HD (test) | SSIM | 85.13 | 57 |
| Image Virtual Try-On | VITON-HD | LPIPS | 0.071 | 41 |
| Virtual Try-On | VITON-HD (paired) | LPIPS | 0.107 | 29 |
| Virtual Try-On | VITON-HD 1.0 (test) | FID | 6.5186 | 27 |
| Virtual Try-On | DressCode (test) | FID | 3.9497 | 23 |
| Virtual Try-On | DressCode | LPIPS | 0.045 | 19 |
| Virtual Try-On | VITON-HD (paired, test) | FID | 9.3 | 19 |
| Virtual Try-On and Animation | Internet Dataset | L1 loss | 0.1143 | 18 |
| Virtual Try-On and Animation | ViViD Dataset | L1 loss | 0.2101 | 18 |
| Virtual Try-On | VITON-HD (unpaired) | FID | 39.9626 | 17 |

Showing 10 of 32 rows. (SSIM: higher is better; FID, LPIPS, and L1 loss: lower is better.)
