
STAR: STacked AutoRegressive Scheme for Unified Multimodal Learning

About

Multimodal large language models (MLLMs) play a pivotal role in advancing the quest for general artificial intelligence. However, achieving a unified objective for multimodal understanding and generation remains challenging due to optimization conflicts and performance trade-offs. To enhance generative performance while preserving existing comprehension capabilities, we introduce STAR: a STacked AutoRegressive scheme for task-progressive unified multimodal learning. STAR decomposes multimodal learning into successive stages: understanding, generation, and editing. By freezing the parameters of the fundamental autoregressive (AR) model and progressively stacking isomorphic AR modules, it avoids cross-task interference while expanding the model's capabilities. We further introduce a high-capacity vector quantization (VQ) tokenizer to refine the granularity of image representations, and employ an implicit reasoning mechanism to improve generation quality under complex conditions. Experiments demonstrate that STAR achieves state-of-the-art performance on GenEval (0.91), DPG-Bench (87.44), and ImgEdit (4.34), validating its efficacy for unified multimodal learning.
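The staged freeze-and-stack idea can be illustrated with a minimal sketch. This is not the authors' code: the class names, the boolean `frozen` flag (standing in for disabling gradients on a module's parameters), and the stage names are illustrative assumptions about the scheme described above.

```python
class ARModule:
    """A stand-in for one autoregressive block; `frozen` models whether
    its parameters receive gradient updates."""
    def __init__(self, name):
        self.name = name
        self.frozen = False

    def freeze(self):
        self.frozen = True


class StackedAR:
    """Grows by stacking isomorphic AR modules; all earlier stages are
    frozen, so training a new task (generation, editing) cannot
    interfere with capabilities learned before."""
    def __init__(self):
        # Stage 1: the fundamental AR model trained for understanding.
        self.stages = [ARModule("understanding")]

    def add_stage(self, task):
        # Freeze everything trained so far, then stack a new
        # isomorphic module for the next task.
        for m in self.stages:
            m.freeze()
        self.stages.append(ARModule(task))

    def trainable(self):
        return [m.name for m in self.stages if not m.frozen]


model = StackedAR()
model.add_stage("generation")  # stage 2: only the new module trains
model.add_stage("editing")     # stage 3: understanding + generation frozen
print(model.trainable())       # -> ['editing']
```

In a real implementation the freeze would be applied via each parameter's gradient flag rather than a boolean, but the control flow per stage is the same: freeze the stack, append one isomorphic module, train only that module.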

Jie Qin, Jiancheng Huang, Limeng Qiao, Lin Ma • 2025

Related benchmarks

Task                                       Dataset        Result                Rank
Object Hallucination Evaluation            POPE           Accuracy 86.6         935
Text-to-Image Generation                   GenEval        Overall Score 91      467
Mathematical Reasoning                     MathVista      Score 68.1            322
Multimodal Understanding                   SEED-Bench     --                    203
Multimodal Understanding                   MMStar         --                    197
Text-to-Image Generation                   DPG-Bench      Overall Score 87.44   173
Image Editing                              ImgEdit-Bench  Overall Score 4.34    132
Multimodal Understanding                   MMMU           MMMU Score 58.6       78
Optical Character Recognition Evaluation   OCRBench       Score 86.4            46
Knowledge-grounded Reasoning               WISE           Overall Score 66      45
Showing 10 of 14 benchmark rows.
