AttenST: A Training-Free Attention-Driven Style Transfer Framework with Pre-Trained Diffusion Models

About

While diffusion models have achieved remarkable progress in style transfer tasks, existing methods typically rely on fine-tuning or optimizing pre-trained models during inference, leading to high computational costs and difficulty in balancing content preservation with style integration. To address these limitations, we introduce AttenST, a training-free, attention-driven style transfer framework. Specifically, we propose a style-guided self-attention mechanism that conditions self-attention on the reference style by retaining the query of the content image while substituting its key and value with those from the style image, enabling effective style feature integration. To mitigate style information loss during inversion, we introduce a style-preserving inversion strategy that refines inversion accuracy through multiple resampling steps. Additionally, we propose a content-aware adaptive instance normalization, which integrates content statistics into the normalization process to improve style fusion while mitigating content degradation. Furthermore, we introduce a dual-feature cross-attention mechanism to fuse content and style features, ensuring a harmonious synthesis of structural fidelity and stylistic expression. Extensive experiments demonstrate that AttenST outperforms existing methods, achieving state-of-the-art performance on style transfer datasets.
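The core substitution the abstract describes — keep the content image's query, but take the key and value from the style image — can be sketched in a few lines. The following is a minimal NumPy illustration under stated assumptions: the projection matrices are random placeholders, the feature shapes are arbitrary, and this is not the authors' diffusion-UNet implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def style_guided_self_attention(content_feats, style_feats, d_k):
    """Illustrative sketch of the style-guided self-attention idea:
    Q comes from the content features (preserving structure), while
    K and V come from the style features (injecting style).
    W_q/W_k/W_v are random stand-ins for learned projections."""
    rng = np.random.default_rng(0)
    d = content_feats.shape[-1]
    W_q = rng.standard_normal((d, d_k))
    W_k = rng.standard_normal((d, d_k))
    W_v = rng.standard_normal((d, d_k))
    Q = content_feats @ W_q   # queries: content image tokens
    K = style_feats @ W_k     # keys: substituted from style image
    V = style_feats @ W_v     # values: substituted from style image
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_content, n_style)
    return attn @ V           # style-conditioned content features
```

Each content token attends over style tokens, so the output keeps the content layout (one row per content token) while its feature statistics are drawn from the style image.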

Bo Huang, Wenlun Xu, Qizhuo Han, Haodong Jing, Ying Li • 2025

Related benchmarks

Task                       Dataset                            Metric        Result   Rank
Artistic transfer          WikiArt                            FID (Style)   19.091   11
Photo-realistic transfer   MSCOCO                             FID (Style)   25.176   11
Image Style Transfer       Style Transfer 750 images (test)   Style Score   0.5032   10
