
Prompt Reinjection: Alleviating Prompt Forgetting in Multimodal Diffusion Transformers

About

Multimodal Diffusion Transformers (MMDiTs) for text-to-image generation maintain separate text and image branches, with bidirectional information flow between text tokens and visual latents throughout denoising. In this setting, we observe a prompt forgetting phenomenon: the semantics of the prompt representation in the text branch are progressively forgotten as depth increases. We verify this effect on three representative MMDiTs--SD3, SD3.5, and FLUX.1--by probing linguistic attributes of the text-branch representations across layers. Motivated by these findings, we introduce prompt reinjection, a training-free approach that reinjects prompt representations from early layers into later layers to alleviate this forgetting. Experiments on GenEval, DPG, and T2I-CompBench++ show consistent gains in instruction following, along with improvements on metrics capturing preference, aesthetics, and overall text--image generation quality.
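
To make the mechanism concrete, below is a minimal PyTorch sketch of what reinjection could look like in a dual-stream block stack. Everything here is illustrative, not the paper's implementation: DummyDualBlock, forward_with_reinjection, and the hyperparameters (src_layer, dst_layers, alpha) are hypothetical stand-ins, and the linear blend is only one plausible way to merge early-layer text states back in.

```python
import torch
import torch.nn as nn

class DummyDualBlock(nn.Module):
    """Hypothetical stand-in for one MMDiT dual-stream block
    that updates both the text and image branches."""
    def __init__(self, dim):
        super().__init__()
        self.txt_proj = nn.Linear(dim, dim)
        self.img_proj = nn.Linear(dim, dim)

    def forward(self, txt, img):
        return self.txt_proj(txt), self.img_proj(img)

def forward_with_reinjection(blocks, txt, img, src_layer=2,
                             dst_layers=(12, 18), alpha=0.5):
    """Run the block stack, caching the text-branch hidden states at
    `src_layer` and blending them back into the text branch at each
    layer in `dst_layers`. All hyperparameters are assumptions."""
    cached = None
    for i, block in enumerate(blocks):
        txt, img = block(txt, img)
        if i == src_layer:
            cached = txt.clone()  # snapshot early prompt representation
        elif cached is not None and i in dst_layers:
            # reinjection: mix the cached early-layer text states
            # into the current (partially forgotten) ones
            txt = alpha * cached + (1.0 - alpha) * txt
    return txt, img

# Toy usage: 24 blocks, 77 text tokens, 1024 image tokens, width 64.
blocks = nn.ModuleList(DummyDualBlock(64) for _ in range(24))
txt = torch.randn(1, 77, 64)
img = torch.randn(1, 1024, 64)
txt_out, img_out = forward_with_reinjection(blocks, txt, img)
```

Because the cache-and-blend happens only in the forward pass, a sketch like this stays training-free: no weights change, and it can be dropped into inference on a pretrained model.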

Yuxuan Yao, Yuxuan Chen, Hui Li, Kaihui Cheng, Qipeng Guo, Yuwei Sun, Zilong Dong, Jingdong Wang, Siyu Zhu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Image Generation | GenEval | GenEval Score | 89.33 | 277 |
| Text-to-Image Generation | DPG | Overall Score | 89.33 | 131 |
| Text-to-Image Generation | T2I-CompBench++ | Non-Spatial | 0.3197 | 31 |
| Text-to-Image Generation | COCO 5k | ImageReward | 1.3192 | 8 |
