Prompt Reinjection: Alleviating Prompt Forgetting in Multimodal Diffusion Transformers
About
Multimodal Diffusion Transformers (MMDiTs) for text-to-image generation maintain separate text and image branches, with bidirectional information flow between text tokens and visual latents throughout denoising. In this setting, we observe a prompt forgetting phenomenon: the semantics of the prompt representation in the text branch are progressively forgotten as depth increases. We verify this effect on three representative MMDiTs (SD3, SD3.5, and FLUX.1) by probing linguistic attributes of the text-branch representations across layers. Motivated by these findings, we introduce prompt reinjection, a training-free approach that reinjects prompt representations from early layers into later layers to alleviate this forgetting. Experiments on GenEval, DPG, and T2I-CompBench++ show consistent gains in instruction-following capability, along with improvements on metrics capturing preference, aesthetics, and overall text-image generation quality.
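The core idea can be sketched in a few lines: cache the text-branch hidden states at an early layer and blend them back into the hidden states at later layers. The sketch below is a minimal, hypothetical illustration, not the paper's exact implementation; the layer indices (`source_layer`, `target_layers`) and the blending weight `alpha` are assumed hyperparameters, and `layers` stands in for the per-layer text-branch updates of an MMDiT.

```python
import torch

def run_text_branch_with_reinjection(
    text_tokens: torch.Tensor,   # (batch, seq, dim) prompt representation
    layers,                      # per-layer callables (stand-ins for MMDiT text-branch blocks)
    source_layer: int = 2,       # early layer whose output is cached (assumed hyperparameter)
    target_layers=(8, 10),       # later layers receiving the reinjected prompt (assumed)
    alpha: float = 0.5,          # blending weight (assumed)
):
    """Training-free prompt-reinjection sketch: snapshot the text-branch
    hidden states at an early layer and blend them into later layers to
    counteract prompt forgetting."""
    h = text_tokens
    cached = None
    for i, layer in enumerate(layers):
        h = layer(h)
        if i == source_layer:
            cached = h.detach()  # cache the early prompt representation
        if i in target_layers and cached is not None:
            # Reinject early-layer semantics into the later hidden states.
            h = alpha * cached + (1 - alpha) * h
    return h
```

Because the blending is a simple convex combination applied at inference time, no retraining is required; in a real pipeline this would typically be implemented with forward hooks on the chosen transformer blocks.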
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | GenEval | GenEval Score | 89.33 | 277 |
| Text-to-Image Generation | DPG | Overall Score | 89.33 | 131 |
| Text-to-Image Generation | T2I-CompBench++ | Non-Spatial | 0.3197 | 31 |
| Text-to-Image Generation | COCO 5k | ImageReward | 1.3192 | 8 |