
OpenUni: A Simple Baseline for Unified Multimodal Understanding and Generation

About

In this report, we present OpenUni, a simple, lightweight, and fully open-source baseline for unifying multimodal understanding and generation. Inspired by prevailing practices in unified model learning, we adopt an efficient training strategy that minimizes training complexity and overhead by bridging off-the-shelf multimodal large language models (LLMs) and diffusion models through a set of learnable queries and a lightweight transformer-based connector. With a minimalist choice of architecture, we demonstrate that OpenUni can: 1) generate high-quality and instruction-aligned images, and 2) achieve exceptional performance on standard benchmarks such as GenEval, DPG-Bench, and WISE, with only 1.1B and 3.1B activated parameters. To support open research and community advancement, we release all model weights, training code, and our curated training datasets (including 23M image-text pairs) at https://github.com/wusize/OpenUni.
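The bridging idea described above can be sketched in a few lines of PyTorch: a bank of learnable query embeddings cross-attends to the (frozen) MLLM's hidden states through a small transformer, and the resulting tokens are projected into the diffusion model's conditioning space. This is a minimal illustration, not the released implementation; all dimensions, layer counts, and names (`QueryConnector`, `num_queries`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class QueryConnector(nn.Module):
    """Illustrative sketch of a learnable-query connector: queries attend to
    frozen MLLM hidden states and are projected to the diffusion model's
    conditioning dimension. Sizes here are placeholders, not OpenUni's config."""

    def __init__(self, num_queries=256, llm_dim=2048, cond_dim=1152,
                 depth=6, heads=8):
        super().__init__()
        # Learnable query embeddings, shared across all inputs
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim) * 0.02)
        layer = nn.TransformerDecoderLayer(
            d_model=llm_dim, nhead=heads, dim_feedforward=4 * llm_dim,
            batch_first=True, norm_first=True)
        # Decoder layers: self-attention over queries + cross-attention to MLLM states
        self.decoder = nn.TransformerDecoder(layer, num_layers=depth)
        # Map connector outputs into the diffusion model's conditioning space
        self.proj = nn.Linear(llm_dim, cond_dim)

    def forward(self, llm_hidden):
        # llm_hidden: (batch, seq_len, llm_dim) hidden states from the frozen MLLM
        b = llm_hidden.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        out = self.decoder(tgt=q, memory=llm_hidden)
        return self.proj(out)  # (batch, num_queries, cond_dim)

# Usage: stand-in hidden states from a frozen MLLM
connector = QueryConnector()
hidden = torch.randn(2, 77, 2048)
cond = connector(hidden)
print(cond.shape)  # torch.Size([2, 256, 1152])
```

Because only the queries, connector, and projection are trained while the MLLM and diffusion backbone stay frozen, the number of newly learned parameters stays small, which is what keeps training overhead low.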

Size Wu, Zhonghua Wu, Zerui Gong, Qingyi Tao, Sheng Jin, Qinyue Li, Wei Li, Chen Change Loy • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | DPG-Bench | Overall Score | 83.08 | 173 |
| Vision Understanding | MMBench | Accuracy | 81.19 | 104 |
| Text-to-Image Generation | GenEval | Overall Score | 86 | 68 |
| Reasoning-Based Text-to-Image Generation | WISE | Overall Score | 52 | 33 |
| Spatial Understanding | RealWorldQA | RWQA Score | 65.23 | 30 |
| Spatial Relationship Understanding | VSR | Overall Accuracy | 66.69 | 17 |
| Fine-Grained Perception | MMVP | Accuracy | 71.67 | 14 |
| Hallucination Mitigation | Hallusion | Accuracy | 60.88 | 10 |
