Show-o2: Improved Native Unified Multimodal Models

About

This paper presents improved native unified multimodal models, i.e., Show-o2, that leverage autoregressive modeling and flow matching. Built upon a 3D causal variational autoencoder space, unified visual representations are constructed through a dual path of spatial(-temporal) fusion, enabling scalability across image and video modalities while ensuring effective multimodal understanding and generation. Based on a language model, autoregressive modeling and flow matching are natively applied to the language head and flow head, respectively, to facilitate text token prediction and image/video generation. A two-stage training recipe is designed to effectively learn and scale to larger models. The resulting Show-o2 models demonstrate versatility in handling a wide range of multimodal understanding and generation tasks across diverse modalities, including text, images, and videos. Code and models are released at https://github.com/showlab/Show-o.
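The abstract pairs two training objectives on a shared language-model backbone: next-token cross-entropy on the language head, and a flow-matching (velocity-prediction) loss on the flow head over VAE latents. The PyTorch sketch below illustrates that pairing under simplifying assumptions; the module names, dimensions, and the naive concatenation-based fusion are hypothetical stand-ins, not the released Show-o2 implementation (see the repository linked above for the real code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only; the real model is a full LLM backbone.
D_MODEL, VOCAB, LATENT = 256, 1000, 16

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
lm_head = nn.Linear(D_MODEL, VOCAB)     # autoregressive text-token prediction
flow_head = nn.Linear(D_MODEL, LATENT)  # velocity prediction for latents

# --- toy batch ---
tokens = torch.randint(0, VOCAB, (2, 8))   # text token ids
x1 = torch.randn(2, 8, LATENT)             # "clean" VAE latents (target data)
x0 = torch.randn_like(x1)                  # noise sample
t = torch.rand(2, 1, 1)                    # per-sample flow timestep in [0, 1]
xt = (1 - t) * x0 + t * x1                 # linear interpolation along the flow path

# Embed both streams into the shared backbone (hypothetical fusion:
# plain concatenation of text and latent tokens along the sequence).
tok_emb = nn.Embedding(VOCAB, D_MODEL)(tokens)
lat_emb = nn.Linear(LATENT, D_MODEL)(xt)
h = backbone(torch.cat([tok_emb, lat_emb], dim=1))
h_text, h_latent = h[:, :8], h[:, 8:]

# Autoregressive loss: predict token i+1 from position i.
logits = lm_head(h_text)
ar_loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, VOCAB), tokens[:, 1:].reshape(-1)
)

# Flow-matching loss: regress the head output toward the constant
# velocity x1 - x0 of the linear interpolation path.
velocity = flow_head(h_latent)
fm_loss = F.mse_loss(velocity, x1 - x0)

loss = ar_loss + fm_loss
print(f"ar={ar_loss.item():.3f}  fm={fm_loss.item():.3f}")
```

The sketch only shows the shape of the two losses being combined; in the paper, the flow head additionally operates within the unified sequence with appropriate attention masking and timestep conditioning.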

Jinheng Xie, Zhenheng Yang, Mike Zheng Shou • 2025

Related benchmarks

Task                      Dataset         Metric         Result  Rank
Multimodal Understanding  MMBench         Accuracy       79.3    637
Text-to-Image Generation  GenEval         Overall Score  76      506
Multimodal Understanding  MMMU            Accuracy       48.9    437
Video Understanding       MVBench         Accuracy       55.8    425
Text-to-Image Generation  GenEval         Overall Score  76      391
Video Question Answering  ActivityNet-QA  Accuracy       56.4    376
Text-to-Image Generation  GenEval         GenEval Score  76      360
Chart Question Answering  ChartQA         Accuracy       40      356
Multimodal Understanding  SEED-Bench      --             --      343
Multimodal Understanding  MMStar          Accuracy       43.4    324

(showing 10 of 99 rows)
