
CTCal: Rethinking Text-to-Image Diffusion Models via Cross-Timestep Self-Calibration

About

Recent advances in text-to-image synthesis have been largely propelled by diffusion-based models, yet achieving precise alignment between text prompts and generated images remains a persistent challenge. We find that this difficulty arises primarily from the limitations of the conventional diffusion loss, which provides only implicit supervision for modeling fine-grained text-image correspondence. In this paper, we introduce Cross-Timestep Self-Calibration (CTCal), founded on the observation that establishing accurate text-image alignment within diffusion models becomes progressively more difficult as the timestep increases. CTCal leverages the reliable text-image alignment (i.e., cross-attention maps) formed at smaller, less noisy timesteps to calibrate representation learning at larger, noisier timesteps, thereby providing explicit supervision during training. We further propose a timestep-aware adaptive weighting to harmoniously integrate CTCal with the diffusion loss. CTCal is model-agnostic and can be seamlessly integrated into existing text-to-image diffusion models, encompassing both diffusion-based (e.g., SD 2.1) and flow-based approaches (e.g., SD 3). Extensive experiments on the T2I-CompBench++ and GenEval benchmarks demonstrate the effectiveness and generalizability of the proposed CTCal. Our code is available at https://github.com/xiefan-guo/ctcal.
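The core idea in the abstract can be sketched in a few lines: compare the cross-attention map obtained at a small (low-noise) timestep, treated as fixed supervision, against the map at a large (high-noise) timestep, and weight that calibration term by the timestep. This is only a minimal illustrative sketch; the function names, the MSE-based calibration objective, and the linear weighting schedule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_map(queries, keys):
    # queries: (num_pixels, d) image features; keys: (num_tokens, d) text features
    # each row is a distribution over text tokens for one spatial location
    d = queries.shape[-1]
    return softmax(queries @ keys.T / np.sqrt(d), axis=-1)

def ctcal_loss(attn_large_t, attn_small_t):
    # calibrate the noisy large-timestep map toward the reliable small-timestep
    # map, which acts as fixed supervision (stop-gradient in a real trainer);
    # MSE here is an assumed choice of distance
    target = attn_small_t
    return float(np.mean((attn_large_t - target) ** 2))

def adaptive_weight(t, T=1000, alpha=1.0):
    # hypothetical timestep-aware weighting: larger (noisier) timesteps receive
    # a stronger calibration signal; the paper's actual schedule may differ
    return alpha * t / T
```

In an actual training loop, the total objective would be the diffusion (or flow-matching) loss plus `adaptive_weight(t) * ctcal_loss(...)` per sample, with the small-timestep map detached from the gradient graph.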

Xiefan Guo, Xinzhu Ma, Haiyu Zhang, Di Huang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | GenEval | Overall Score | 69 | 391 |
| Text-to-Image Generation | T2I-CompBench++ | Non-Spatial | 0.7867 | 65 |
| Text-to-Image Generation | User Study SD 2.1 | Preference Rate | 76.67 | 3 |
| Text-to-Image Generation | User Study SD 3 | Preference Rate | 54.17 | 3 |
| Text-to-Image Generation | T2I-CompBench++ Color | M-LPIPS | 0.634 | 3 |
