Show-o2: Improved Native Unified Multimodal Models

About

This paper presents improved native unified multimodal models, i.e., Show-o2, that leverage autoregressive modeling and flow matching. Built upon a 3D causal variational autoencoder space, unified visual representations are constructed through a dual-path of spatial(-temporal) fusion, enabling scalability across image and video modalities while ensuring effective multimodal understanding and generation. Based on a language model, autoregressive modeling and flow matching are natively applied to the language head and flow head, respectively, to facilitate text token prediction and image/video generation. A two-stage training recipe is designed to effectively learn and scale to larger models. The resulting Show-o2 models demonstrate versatility in handling a wide range of multimodal understanding and generation tasks across diverse modalities, including text, images, and videos. Code and models are released at https://github.com/showlab/Show-o.
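To make the dual-head training objective concrete, below is a minimal, hypothetical PyTorch sketch: a shared transformer backbone whose language head is trained with next-token cross-entropy (autoregressive modeling) and whose flow head is trained with a rectified-flow-style flow matching loss on VAE latents. All names (`UnifiedDualHead`, `training_step`), dimensions, and the tiny `TransformerEncoder` backbone are illustrative assumptions, not the released Show-o2 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedDualHead(nn.Module):
    """Shared backbone with a language head (AR) and a flow head (flow matching).

    Hypothetical stand-in for the unified design; not the Show-o2 code.
    """

    def __init__(self, vocab_size=32000, hidden=512, latent_dim=16):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)
        self.latent_in = nn.Linear(latent_dim, hidden)   # project VAE latents in
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(hidden, vocab_size)     # text token prediction
        self.flow_head = nn.Linear(hidden, latent_dim)   # velocity prediction

    def forward(self, text_ids, noisy_latents):
        # One interleaved sequence: text embeddings followed by noised latents.
        h = torch.cat(
            [self.token_emb(text_ids), self.latent_in(noisy_latents)], dim=1
        )
        # Single causal mask for simplicity; a real unified model would use
        # causal attention on text and full attention within the visual span.
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1)).to(h.device)
        h = self.backbone(h, mask=mask)
        n_text = text_ids.size(1)
        return self.lm_head(h[:, :n_text]), self.flow_head(h[:, n_text:])


def training_step(model, text_ids, clean_latents):
    """Joint loss: next-token cross-entropy + rectified-flow matching MSE."""
    b = clean_latents.size(0)
    t = torch.rand(b, 1, 1)                          # per-sample time in [0, 1]
    noise = torch.randn_like(clean_latents)          # x0 ~ N(0, I)
    x_t = (1 - t) * noise + t * clean_latents        # linear interpolation path
    target_v = clean_latents - noise                 # velocity target x1 - x0

    logits, v_pred = model(text_ids, x_t)
    lm_loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)), # predict token i+1 from i
        text_ids[:, 1:].reshape(-1),
    )
    flow_loss = F.mse_loss(v_pred, target_v)
    return lm_loss + flow_loss


model = UnifiedDualHead()
text = torch.randint(0, 32000, (2, 8))               # dummy caption token ids
latents = torch.randn(2, 64, 16)                     # dummy flattened VAE latents
loss = training_step(model, text, latents)
loss.backward()
```

The point of the sketch is that both objectives share one backbone and one interleaved sequence: the same forward pass yields logits for text positions and velocity predictions for latent positions, so a single optimizer step trains understanding and generation jointly.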

Jinheng Xie, Zhenheng Yang, Mike Zheng Shou • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Image Generation | GenEval | Overall Score: 76 | 467 |
| Multimodal Understanding | MMBench | -- | 367 |
| Text-to-Image Generation | GenEval | GenEval Score: 76 | 277 |
| Multimodal Understanding | SEED-Bench | -- | 203 |
| Multimodal Understanding | MMStar | -- | 197 |
| Text-to-Image Generation | DPG-Bench | Overall Score: 86.14 | 173 |
| Text-to-Image Generation | GenEval (test) | Two Obj. Acc: 87 | 169 |
| Text-to-Image Generation | DPG | Overall Score: 86.1 | 131 |
| Multimodal Understanding | MMMU (test) | MMMU Score: 48.9 | 86 |
| Multimodal Understanding | MMMU | MMMU Score: 48.9 | 78 |

Showing 10 of 45 rows.
