NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
About
Prevailing autoregressive (AR) models for text-to-image generation either rely on heavy, computationally intensive diffusion models to process continuous image tokens, or employ vector quantization (VQ) to obtain discrete tokens at the cost of quantization loss. In this paper, we push the autoregressive paradigm forward with NextStep-1, a 14B autoregressive model paired with a 157M flow matching head, trained on discrete text tokens and continuous image tokens with next-token prediction objectives. NextStep-1 achieves state-of-the-art performance among autoregressive models on text-to-image generation tasks, exhibiting strong capabilities in high-fidelity image synthesis. Furthermore, our method shows strong performance in image editing, highlighting the power and versatility of our unified approach. To facilitate open research, we will release our code and models to the community.
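The core idea — a lightweight flow matching head predicting continuous image tokens conditioned on the AR backbone's hidden state — can be illustrated with a toy sketch. This is a minimal, hypothetical rendering of the standard conditional flow matching objective (linear noise-to-data path, velocity regression), not the paper's implementation; all dimensions and the linear "head" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions; the real model pairs a 14B transformer with a 157M head)
hidden_dim, token_dim = 8, 4

def flow_matching_loss(cond, x1, W):
    """Conditional flow matching loss for one continuous image token.

    cond : conditioning vector from the AR backbone, shape (hidden_dim,)
    x1   : target continuous image token, shape (token_dim,)
    W    : weights of a toy linear velocity head (stand-in for the FM head)
    """
    x0 = rng.standard_normal(token_dim)       # noise sample
    t = rng.uniform()                          # random time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1               # point on the linear path
    target_v = x1 - x0                         # target velocity along the path
    inp = np.concatenate([cond, xt, [t]])      # head input: condition, state, time
    pred_v = W @ inp                           # toy linear "head" prediction
    return float(np.mean((pred_v - target_v) ** 2))

W = rng.standard_normal((token_dim, hidden_dim + token_dim + 1))
loss = flow_matching_loss(rng.standard_normal(hidden_dim),
                          rng.standard_normal(token_dim), W)
```

At inference, such a head would instead integrate the predicted velocity field from noise to a token, conditioned on the backbone's state, so the transformer itself only ever performs next-token prediction.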
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | GenEval | Overall Score | 73 | 467 |
| Text-to-Image Generation | GenEval | GenEval Score | 63 | 277 |
| Text-to-Image Generation | DPG-Bench | Overall Score | 85.28 | 173 |
| Text-to-Image Generation | GenEval (test) | -- | -- | 169 |
| Text-to-Image Generation | DPG | Overall Score | 85.28 | 131 |
| Text-to-Image Generation | MJHQ-30K | Overall FID | 6.71 | 59 |
| Text-to-Image Generation | GenEval 1024x1024 | Latency (s) | 402 | 22 |
| Text-to-Image Generation | OneIG-EN | Alignment | 82.6 | 16 |
| Ad-hoc Constraint Execution (Visual Constraint) | Genius | RC Score | 21.5 | 13 |
| Ad-hoc Constraint Execution (Symbolic Constraint) | Genius | RC | 11.33 | 13 |