
CoCoDiff: Correspondence-Consistent Diffusion Model for Fine-grained Style Transfer

About

Transferring visual style between images while preserving semantic correspondence between similar objects remains a central challenge in computer vision. While existing methods have made great strides, most operate at the global level and overlook region- and even pixel-wise semantic correspondence. To address this, we propose CoCoDiff, a novel training-free and low-cost style transfer framework that leverages pretrained latent diffusion models to achieve fine-grained, semantically consistent stylization. We identify that correspondence cues within generative diffusion models are under-explored and that content consistency across semantically matched regions is often neglected. CoCoDiff introduces a pixel-wise semantic correspondence module that mines intermediate diffusion features to construct a dense alignment map between content and style images. A cycle-consistency module then enforces structural and perceptual alignment across iterations, yielding object- and region-level stylization that preserves geometry and detail. Despite requiring no additional training or supervision, CoCoDiff delivers state-of-the-art visual quality and strong quantitative results, outperforming methods that rely on extra training or annotations.
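The abstract describes two core ideas: dense pixel-wise matching built from intermediate diffusion features, and a forward-backward cycle check on those matches. The sketch below illustrates the general technique, not the authors' implementation: it matches feature locations by cosine similarity and keeps only cycle-consistent matches. All function names, tensor shapes, and the choice of feature layer are illustrative assumptions.

# Minimal sketch (NOT the CoCoDiff code): dense correspondence from
# feature maps plus a cycle-consistency filter. Features would come from
# an intermediate U-Net layer of a pretrained latent diffusion model;
# here random tensors stand in for them.
import torch
import torch.nn.functional as F

def dense_correspondence(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Match each location in feat_a to its nearest location in feat_b.

    feat_a, feat_b: (C, H, W) feature maps.
    Returns an (H*W,) index map: entry i is the feat_b location matched
    to feat_a location i.
    """
    C, H, W = feat_a.shape
    # Flatten to (H*W, C) and L2-normalize so dot products are cosine similarity.
    fa = F.normalize(feat_a.reshape(C, -1).t(), dim=1)
    fb = F.normalize(feat_b.reshape(C, -1).t(), dim=1)
    sim = fa @ fb.t()              # (H*W, H*W) cosine-similarity matrix
    return sim.argmax(dim=1)       # nearest neighbor per location

def cycle_consistent_mask(fwd: torch.Tensor, bwd: torch.Tensor) -> torch.Tensor:
    """Keep matches that survive content -> style -> content round-tripping."""
    idx = torch.arange(fwd.numel(), device=fwd.device)
    return bwd[fwd] == idx         # boolean mask over content locations

# Toy usage with random features standing in for diffusion activations.
fc = torch.randn(64, 32, 32)       # "content" features
fs = torch.randn(64, 32, 32)       # "style" features
fwd = dense_correspondence(fc, fs) # content -> style
bwd = dense_correspondence(fs, fc) # style -> content
mask = cycle_consistent_mask(fwd, bwd)
print(f"{mask.float().mean():.2%} of matches are cycle-consistent")

In a real pipeline, the surviving matches would define where style statistics are transferred, so that each content region borrows appearance only from its semantically corresponding style region.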

Wenbo Nie, Zixiang Li, Renshuai Tao, Bin Wu, Yunchao Wei, Yao Zhao • 2026

Related benchmarks

Task                  Dataset                                           Result             Rank
Style Transfer        MS-COCO (content) + WikiArt (style) (test)        LPIPS: 0.549       31
Style Transfer        User study: 10 content images, 8 style images (test)   Style Score: 54.6  9
Image Style Transfer  Unspecified Style Transfer Dataset                FID: 18.432        6
