SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer
About
This paper presents SANA-1.5, a linear Diffusion Transformer for efficient scaling in text-to-image generation. Building upon SANA-1.0, we introduce three key innovations: (1) Efficient Training Scaling: a depth-growth paradigm that enables scaling from 1.6B to 4.8B parameters with significantly reduced computational resources, combined with a memory-efficient 8-bit optimizer. (2) Model Depth Pruning: a block-importance analysis technique for efficient model compression to arbitrary sizes with minimal quality loss. (3) Inference-Time Scaling: a repeated sampling strategy that trades computation for model capacity, enabling smaller models to match larger-model quality at inference time. Through these strategies, SANA-1.5 achieves a text-image alignment score of 0.81 on GenEval, which can be further improved to 0.96 through inference scaling with VILA-Judge, establishing a new SoTA on the GenEval benchmark. These innovations enable efficient model scaling across different compute budgets while maintaining high quality, making high-quality image generation more accessible. Our code and pre-trained models are released.
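The repeated-sampling strategy in (3) can be sketched as a best-of-N loop: draw several candidates from the generator, score each with a judge model (the paper uses VILA-Judge), and keep the highest-scoring one. The sketch below is a minimal illustration, not the paper's implementation; `generate_image` and `judge_score` are hypothetical stand-ins stubbed out so the example runs standalone.

```python
import random

# Hypothetical stand-in for the diffusion sampler: returns a tag so the
# sketch is self-contained. A real version would decode an image tensor.
def generate_image(prompt: str, seed: int) -> str:
    return f"image(prompt={prompt!r}, seed={seed})"

# Hypothetical stand-in for a text-image alignment judge (e.g. VILA-Judge).
# Here we derive a deterministic toy score from the candidate itself.
def judge_score(prompt: str, image: str) -> float:
    return random.Random(image).random()

def best_of_n(prompt: str, n: int = 4) -> tuple[str, float]:
    """Inference-time scaling by repeated sampling: generate n candidates
    and return the one the judge ranks highest, with its score."""
    candidates = [generate_image(prompt, seed) for seed in range(n)]
    scored = [(img, judge_score(prompt, img)) for img in candidates]
    return max(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    img, score = best_of_n("a red cube on a blue sphere", n=8)
    print(img, round(score, 3))
```

Increasing `n` spends more compute per prompt in exchange for better expected alignment, which is how a smaller model can close the gap to a larger one at inference time.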
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | GenEval | Overall Score | 81 | 467 |
| Text-to-Image Generation | GenEval | GenEval Score | 81 | 277 |
| Text-to-Image Generation | DPG-Bench | Overall Score | 85 | 173 |
| Text-to-Image Generation | GenEval (test) | Two Obj. Acc | 93 | 169 |
| Text-to-Image Generation | GenEval 1.0 (test) | Overall Score | 80.62 | 63 |
| Composition Image Generation | GenEval | GenEval Score | 62.48 | 20 |
| Text-to-Image Generation | TIIF Bench mini (test) | Overall Score (Short) | 67.15 | 18 |
| Text-to-Image Generation | OneIG-EN | Alignment | 76.5 | 16 |