
Mogao: An Omni Foundation Model for Interleaved Multi-Modal Generation

About

Recent progress in unified models for image understanding and generation has been impressive, yet most approaches remain limited to single-modal generation conditioned on multiple modalities. In this paper, we present Mogao, a unified framework that advances this paradigm by enabling interleaved multi-modal generation through a causal approach. Mogao integrates a set of key technical improvements in architecture design, including a deep-fusion design, dual vision encoders, interleaved rotary position embeddings, and multi-modal classifier-free guidance, which allow it to harness the strengths of both autoregressive models for text generation and diffusion models for high-quality image synthesis. These practical improvements also make Mogao particularly effective at processing arbitrarily interleaved sequences of text and images. To further unlock the potential of unified models, we introduce an efficient training strategy on a large-scale, in-house dataset specifically curated for joint text and image generation. Extensive experiments show that Mogao not only achieves state-of-the-art performance in multi-modal understanding and text-to-image generation, but also excels in producing high-quality, coherent interleaved outputs. Its emergent capabilities in zero-shot image editing and compositional generation highlight Mogao as a practical omni-modal foundation model, paving the way for the future development and scaling of unified multi-modal systems.
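The abstract names multi-modal classifier-free guidance as one of the architectural ingredients. A minimal sketch of how such guidance can combine diffusion noise predictions conditioned on different modalities is shown below; the function name, the nested-guidance form, and the weight values are illustrative assumptions, not Mogao's actual implementation:

```python
import numpy as np

def multimodal_cfg(eps_uncond, eps_text, eps_image, w_text=5.0, w_image=1.5):
    """Combine unconditional and per-modality conditional noise predictions.

    Classifier-free guidance extrapolates away from the unconditional
    prediction toward the conditional one; with two conditioning signals
    (text prompt and image context) a second guidance term can be nested
    on top of the first. All weights here are illustrative.
    """
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)    # text guidance term
            + w_image * (eps_image - eps_text))   # image guidance term

# Toy scalar example standing in for full noise-prediction tensors:
eps_u, eps_t, eps_i = 0.0, 1.0, 1.2
guided = multimodal_cfg(eps_u, eps_t, eps_i)
print(guided)
```

With `w_image = 0` this reduces to standard text-only classifier-free guidance, which is one reason the nested form is a common way to add a second conditioning modality.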

Chao Liao, Liyang Liu, Xun Wang, Zhengxiong Luo, Xinyu Zhang, Wenliang Zhao, Jie Wu, Liang Li, Zhi Tian, Weilin Huang • 2025
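The abstract also highlights interleaved rotary position embeddings for handling mixed text/image sequences. The sketch below shows plain rotary embeddings applied over a single position stream shared by text tokens and image patches; the layout and position assignment are assumptions for illustration, not Mogao's exact scheme:

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary position embedding to vectors x at the given positions.

    x: (seq_len, dim) with even dim; positions: (seq_len,) position indices.
    Each half-dimension pair is rotated by an angle proportional to its
    position, so relative offsets are encoded in dot products.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)       # per-pair frequencies
    angles = positions[:, None] * freqs[None, :]    # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Interleaved sequence: text tokens and image patches drawn from one
# continuous position stream, so tokens after an image block keep
# counting from where the block ended.
text_a = np.random.randn(3, 8)   # 3 text tokens
image = np.random.randn(4, 8)    # 4 image patches
text_b = np.random.randn(2, 8)   # 2 more text tokens
seq = np.concatenate([text_a, image, text_b])
pos = np.arange(len(seq), dtype=np.float64)  # 0..8 across modalities
out = rope(seq, pos)
print(out.shape)  # (9, 8)
```

At position 0 the rotation angle is zero, so those vectors pass through unchanged; this is a quick sanity check when implementing any rotary variant.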

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Image Generation | GenEval | Overall Score | 89 | 467
Multimodal Understanding | MMBench | -- | -- | 367
Text-to-Image Generation | DPG-Bench | Overall Score | 84.33 | 173
Text-to-Image Generation | DPG | Overall Score | 84.33 | 131
Multimodal Understanding | MMMU | MMMU Score | 44.2 | 78
Multimodal Understanding | Multimodal Understanding Benchmarks (CQA, DVQA, TVQA, IVQA, OCRB, MMB, MMMU, MVista, AI2D, MMS, MMV) official (test) | MMB | 75 | 15
Text-to-Image Generation | DPG-Bench | DPG Score | 84.33 | 7
