
Rethinking Global Text Conditioning in Diffusion Transformers

About

Diffusion transformers typically incorporate textual information via attention layers and a modulation mechanism driven by a pooled text embedding. However, several recent approaches discard modulation-based text conditioning and rely exclusively on attention. In this paper, we ask whether modulation-based text conditioning is necessary and whether it can provide any performance advantage. Our analysis shows that, in its conventional usage, the pooled embedding contributes little to overall performance, suggesting that attention alone is generally sufficient for faithfully propagating prompt information. However, we reveal that the pooled embedding can provide significant gains when used from a different perspective: serving as guidance and enabling controllable shifts toward more desirable properties. This approach is training-free, simple to implement, incurs negligible runtime overhead, and applies to a variety of diffusion models, bringing improvements across diverse tasks, including text-to-image/video generation and image editing.
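The guidance idea above can be illustrated with a minimal sketch. The paper does not specify its exact formulation here, so the function name, the `scale` parameter, and the extrapolation rule below are assumptions: a classifier-free-guidance-style shift applied to the pooled embedding that feeds the modulation pathway, extrapolating from a null (unconditional) embedding toward the prompt embedding.

```python
def guided_pooled_embedding(pooled_text, pooled_null, scale=2.0):
    """Hypothetical guidance-style shift on the pooled text embedding.

    pooled_text: pooled embedding of the prompt (list of floats).
    pooled_null: pooled embedding of an empty/null prompt.
    scale: guidance strength (assumed parameter); scale=1.0 recovers
    the plain prompt embedding, larger values push further toward it.
    """
    return [n + scale * (t - n) for t, n in zip(pooled_text, pooled_null)]
```

With `scale=1.0` the shift is a no-op, matching the observation that conventional usage of the pooled embedding changes little; larger scales produce the controllable shift described above. Since the pooled embedding only enters the cheap modulation pathway, this adds negligible runtime overhead.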

Nikita Starodubcev, Daniil Pakhomov, Zongze Wu, Ilya Drobyshevskiy, Yuchen Liu, Zhonghao Wang, Yuqian Zhou, Zhe Lin, Dmitry Baranchuk • 2026

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Generation | COCO 5k 2014 (val) | PickScore: 23.5 | 16
Text-to-Video Generation | VBench (test) | Total Score: 65.43 | 14
Text-to-Image Generation | Custom 2024 (test; PartiPrompts, CompBench, and LLM-generated prompts) | Relevance: 53 | 11

Other info

GitHub
