
Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer

About

The landscape of high-performance image generation models is currently dominated by proprietary systems, such as Nano Banana Pro and Seedream 4.0. Leading open-source alternatives, including Qwen-Image, Hunyuan-Image-3.0 and FLUX.2, are characterized by massive parameter counts (20B to 80B), making them impractical for inference and fine-tuning on consumer-grade hardware. To address this gap, we propose Z-Image, an efficient 6B-parameter foundation generative model built upon a Scalable Single-Stream Diffusion Transformer (S3-DiT) architecture that challenges the "scale-at-all-costs" paradigm. By systematically optimizing the entire model lifecycle -- from a curated data infrastructure to a streamlined training curriculum -- we complete the full training workflow in just 314K H800 GPU hours (approx. $630K). Our few-step distillation scheme with reward post-training further yields Z-Image-Turbo, offering both sub-second inference latency on an enterprise-grade H800 GPU and compatibility with consumer-grade hardware (<16GB VRAM). Additionally, our omni-pre-training paradigm enables efficient training of Z-Image-Edit, an editing model with impressive instruction-following capabilities. Both qualitative and quantitative experiments demonstrate that our model achieves performance comparable to or surpassing that of leading competitors across various dimensions. Most notably, Z-Image exhibits exceptional capabilities in photorealistic image generation and bilingual text rendering, delivering results that rival top-tier commercial models, thereby demonstrating that state-of-the-art results are achievable with significantly reduced computational overhead. We publicly release our code, weights, and online demo to foster the development of accessible, budget-friendly, yet state-of-the-art generative models.
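The abstract does not detail the S3-DiT internals, but the defining trait of a single-stream design (in contrast to dual-stream blocks that keep separate weights for text and image tokens) is that both modalities are concatenated into one token sequence and processed by a single shared attention/MLP path. Below is a minimal, hypothetical sketch of such a block with adaLN-style timestep modulation; all class names, dimensions, and layer choices are illustrative assumptions, not the actual Z-Image implementation.

```python
import torch
from torch import nn


class SingleStreamBlock(nn.Module):
    """Hypothetical single-stream DiT block (illustrative, not Z-Image's code).

    Text and image tokens are concatenated into ONE sequence and share the
    same attention and MLP weights; the diffusion timestep embedding drives
    adaLN-style shift/scale/gate modulation of both sub-layers.
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # One projection producing shift/scale/gate for each of the two paths.
        self.mod = nn.Linear(dim, 6 * dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor, t_emb: torch.Tensor):
        # Single stream: concatenate text and image tokens along the sequence dim.
        x = torch.cat([txt, img], dim=1)
        s1, b1, g1, s2, b2, g2 = self.mod(t_emb).unsqueeze(1).chunk(6, dim=-1)
        # Attention path with timestep-conditioned modulation and gating.
        h = self.norm1(x) * (1 + s1) + b1
        x = x + g1 * self.attn(h, h, h, need_weights=False)[0]
        # MLP path, same modulation scheme.
        h = self.norm2(x) * (1 + s2) + b2
        x = x + g2 * self.mlp(h)
        # Split the shared stream back into its two modalities.
        txt_out, img_out = x.split([txt.shape[1], img.shape[1]], dim=1)
        return img_out, txt_out
```

The appeal of this layout for a 6B-parameter budget is parameter sharing: one set of block weights serves both modalities, rather than duplicating attention/MLP stacks per stream.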

Z-Image Team, Huanqia Cai, Sihan Cao, Ruoyi Du, Peng Gao, Steven Hoi, Zhaohui Hou, Shijie Huang, Dengyang Jiang, Xin Jin, Liangchen Li, Zhen Li, Zhong-Yu Li, David Liu, Dongyang Liu, Junhan Shi, Qilong Wu, Feng Yu, Chi Zhang, Shifeng Zhang, Shilin Zhou • 2025

Related benchmarks

Task                          Dataset           Metric           Result   Rank
Text-to-Image Generation      GenEval           GenEval Score    84       277
Text-to-Image Generation      DPG-Bench         Overall Score    88.14    173
Text-to-Image Generation      GenEval (test)    Two Obj. Acc     95       169
Text-to-Image Generation      DPG               Overall Score    88.14    131
Text-to-Image Generation      GenEval           Overall Score    84       68
Text-to-Image Generation      DPG-Bench (test)  Global Fidelity  91.29    43
Text-to-Image Generation      DPGBench          DPGBench Score   85.15    31
Text Rendering                CVTG-2K           NED              93.67    28
Spatial Reasoning Generation  OneIG-EN (test)   Alignment Score  88.1     26
Text-to-Image Generation      OneIG-ZH          Alignment        79.3     24

(Showing 10 of 26 rows)

Other info

GitHub
