
WeGen: A Unified Model for Interactive Multimodal Generation as We Chat

About

Existing multimodal generative models fall short as qualified design copilots: they often struggle to generate imaginative outputs when instructions are less detailed, or lack the ability to maintain consistency with the provided references. In this work, we introduce WeGen, a model that unifies multimodal generation and understanding and promotes their interplay in iterative generation. It can generate diverse, highly creative results from less detailed instructions, and it can progressively refine prior generation results or integrate specific content from references, following the instructions in its chat with users. During this process, it preserves consistency in the parts the user is already satisfied with. To this end, we curate a large-scale dataset extracted from Internet videos, containing rich object dynamics with dynamics descriptions auto-labeled by advanced foundation models. These two sources of information are interleaved into a single sequence so that WeGen learns consistency-aware generation: the specified dynamics are generated while the consistency of unspecified content is preserved, as the instructions require. In addition, we introduce a prompt self-rewriting mechanism to enhance generation diversity. Extensive experiments demonstrate the effectiveness of unifying multimodal understanding and generation in WeGen and show that it achieves state-of-the-art performance across various visual generation benchmarks. They also demonstrate WeGen's potential as a user-friendly design copilot. The code and models will be available at https://github.com/hzphzp/WeGen.
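The interleaving of video frames with auto-labeled dynamics descriptions can be sketched as follows. This is a minimal illustration of the idea described above, not the authors' actual data pipeline: the token placeholders, function name, and sequence layout are all assumptions for illustration.

```python
# Hypothetical sketch: interleave per-frame image tokens with the
# auto-labeled text describing the dynamics between consecutive frames,
# producing one training sequence. Names and format are illustrative.

def interleave_sequence(frames, descriptions):
    """Build a single interleaved sequence from frames and descriptions.

    frames: per-frame token placeholders, e.g. "<img_0>"
    descriptions: auto-labeled text describing the change from each
        frame to the next (so len(descriptions) == len(frames) - 1)
    """
    assert len(descriptions) == len(frames) - 1
    sequence = [frames[0]]
    for desc, frame in zip(descriptions, frames[1:]):
        sequence.append(desc)   # the specified dynamics (what changes)
        sequence.append(frame)  # the next frame (unchanged content stays)
    return sequence

seq = interleave_sequence(
    ["<img_0>", "<img_1>", "<img_2>"],
    ["the dog turns its head", "the dog starts running"],
)
# seq alternates frame tokens and dynamics descriptions
```

Training on such sequences would expose the model to pairs of "what changed" and "what stayed the same", which is one plausible way to learn consistency-aware generation.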

Zhipeng Huang, Shaobin Zhuang, Canmiao Fu, Binxin Yang, Ying Zhang, Chong Sun, Zhizheng Zhang, Yali Wang, Chen Li, Zheng-Jun Zha • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| OCR Evaluation | OCRBench | Score | 345 | 296 |
| Visual Question Answering | ScienceQA | Accuracy | 63.1 | 210 |
| Visual Understanding | MM-Vet | MM-Vet Score | 25.4 | 102 |
| Hallucination and Visual Reasoning Evaluation | HallusionBench | Score | 30.4 | 37 |
| Visual Understanding | MME | MME Score | 447.4 | 37 |
| Vision Understanding | MMMU | Overall Score | 26.6 | 28 |
| Multi-modal Visual Capability | MMStar | Score | 27.5 | 20 |
| Text-to-Image Generation | COCO 2014 | FID | 9.39 | 15 |
| Subject-driven generation | DreamBench | DINO Score | 0.823 | 14 |
| Visual Understanding | MMT | Score | 28.4 | 8 |

(10 of 12 rows shown)
