
Amber-Image: Efficient Compression of Large-Scale Diffusion Transformers

About

Diffusion Transformer (DiT) architectures have significantly advanced Text-to-Image (T2I) generation but suffer from prohibitive computational costs and deployment barriers. To address these challenges, we propose an efficient compression framework that transforms the 60-layer dual-stream MMDiT-based Qwen-Image into lightweight models without training from scratch. Leveraging this framework, we introduce Amber-Image, a series of streamlined T2I models. We first derive Amber-Image-10B using a timestep-sensitive depth pruning strategy, in which retained layers are reinitialized via local weight averaging and optimized through layer-wise distillation and full-parameter fine-tuning. Building on this, we develop Amber-Image-6B by introducing a hybrid-stream architecture that converts deep-layer dual streams into a single stream initialized from the image branch, further refined via progressive distillation and lightweight fine-tuning. Our approach reduces parameters by 70% and eliminates the need for large-scale data engineering. Notably, the entire compression and training pipeline, from the 10B to the 6B variant, requires fewer than 2,000 GPU hours, demonstrating exceptional cost-efficiency compared to training from scratch. Extensive evaluations on benchmarks such as DPG-Bench and LongText-Bench show that Amber-Image achieves high-fidelity synthesis and superior text rendering, matching much larger models.
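The "local weight averaging" reinitialization can be illustrated with a minimal sketch: each retained layer is re-seeded with the parameter-wise mean of itself and its pruned neighbors. The grouping rule below (a kept layer absorbs the pruned layers immediately preceding it) and the function name `local_weight_average` are assumptions for illustration, not the paper's exact procedure.

```python
import copy

import torch
import torch.nn as nn


def local_weight_average(layers: nn.ModuleList, keep_indices: list[int]) -> nn.ModuleList:
    """Depth-prune a layer stack, reinitializing each retained layer with the
    parameter-wise mean of its local group (hypothetical grouping: each kept
    layer absorbs the pruned layers between the previous kept layer and itself).
    """
    keep_indices = sorted(keep_indices)
    merged_layers = []
    prev = -1
    for k in keep_indices:
        group = list(layers[prev + 1 : k + 1])  # pruned neighbors + the kept layer
        prev = k
        merged = copy.deepcopy(layers[k])
        with torch.no_grad():
            for name, param in merged.named_parameters():
                # Average the matching parameter across the whole local group.
                stacked = torch.stack(
                    [dict(layer.named_parameters())[name].detach() for layer in group]
                )
                param.copy_(stacked.mean(dim=0))
        merged_layers.append(merged)
    return nn.ModuleList(merged_layers)
```

In this toy form the averaged layers serve only as a warm start; as the abstract notes, they would then be trained with layer-wise distillation and full-parameter fine-tuning.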

Chaojie Yang, Tian Li, Yue Zhang, Jun Gao • 2026

Related benchmarks

Task                           Dataset                  Metric          Result   Rank
Text-to-Image Generation       DPG                      Overall Score   89.61    131
Text-to-Image Generation       GenEval                  Overall Score   88.3     68
Text Rendering                 CVTG-2K                  NED             89.38    28
Spatial Reasoning Generation   OneIG-EN (test)          Alignment Score 86.7     26
Text-to-Image Generation       OneIG-ZH                 Alignment       79.8     24
Text Rendering                 LongText-Bench Chinese   Score           0.915    13
Text Rendering                 LongText-Bench English   Score           0.911    13
