
A Simple Baseline for Unifying Understanding, Generation, and Editing via Vanilla Next-token Prediction

About

In this work, we introduce Wallaroo, a simple autoregressive baseline that leverages vanilla next-token prediction to unify multi-modal understanding, image generation, and image editing. Wallaroo also supports multi-resolution image input and output, as well as bilingual use in Chinese and English. We decouple visual encoding into separate pathways and apply a four-stage training strategy to reshape the model's capabilities. Experiments on various benchmarks show that Wallaroo matches or exceeds other unified models, suggesting the great potential of autoregressive models for unifying multi-modal understanding and generation. Our code is available at https://github.com/JiePKU/Wallaroo.
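The core idea of a unified autoregressive baseline can be illustrated with a toy sketch (not Wallaroo's actual implementation): text tokens and discrete image tokens share one vocabulary, so understanding, generation, and editing all reduce to the same next-token prediction rule over a single interleaved sequence. The token names and the bigram "model" below are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical combined vocabulary: text tokens plus discrete image
# codebook tokens, interleaved in one training sequence.
sequence = ["<bos>", "a", "cat", "<img>", "img_017", "img_512", "</img>", "<eos>"]

# Toy stand-in for a trained model: count bigram transitions.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(sequence, sequence[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Greedy next-token prediction; the same rule handles text and image tokens."""
    options = counts[token]
    return max(options, key=options.get) if options else "<eos>"

def generate(start, max_len=10):
    """Autoregressive decoding: append predicted tokens until <eos>."""
    out = [start]
    while out[-1] != "<eos>" and len(out) < max_len:
        out.append(predict_next(out[-1]))
    return out

print(generate("<bos>"))
```

In a real unified model the bigram counts are replaced by a transformer's logits, but the decoding loop is the same regardless of whether the next token is text or part of an image.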

Jie Zhu, Hanghang Ma, Jia Wang, Yayong Guan, Yanbing Zeng, Lishuai Gao, Junqiang Wu, Jie Hu, Leye Wang• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Understanding | MMBench | -- | -- | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 50.1 | 531 |
| Multimodal Understanding | SEED-Bench | -- | -- | 343 |
| Text-to-Image Generation | GenEval (test) | Two Obj. Acc | 81 | 221 |
| Text-to-Image Generation | DPG | Overall Score | 79.35 | 172 |
| Multimodal Understanding | POPE | POPE Score | 0.864 | 90 |
| Multimodal Understanding | MMMU | MMMU Score | 42.7 | 69 |
| Multimodal Understanding | MME Perception | MME-P Score | 1690 | 46 |
| Multi-modal Vision-Language Understanding | GQA | Accuracy | 60.1 | 36 |
| Image Editing | ImgEdit | Overall Score | 2.92 | 22 |
