
EMMA: Efficient Multimodal Understanding, Generation, and Editing with a Unified Architecture

About

We propose EMMA, an efficient and unified architecture for multimodal understanding, generation, and editing. EMMA consists of four main components: 1) an efficient autoencoder with a 32x compression ratio, which significantly reduces the number of tokens required for generation and, by applying the same compression ratio to images, keeps training balanced between understanding and generation tasks; 2) channel-wise (rather than token-wise) concatenation of visual understanding and generation tokens, which further reduces the number of visual tokens in the unified architecture; 3) a shared-and-decoupled network that enables mutual improvement across tasks while meeting task-specific modeling requirements; and 4) a mixture-of-experts mechanism in the visual understanding encoder, which substantially improves perceptual capability with only a small increase in parameters. Extensive experiments show that EMMA-4B significantly outperforms state-of-the-art unified multimodal approaches (e.g., BAGEL-7B) in both efficiency and performance, while achieving results competitive with recent specialist models for multimodal understanding and generation (e.g., Qwen3-VL and Qwen-Image). We believe EMMA lays a solid foundation for the future development of unified multimodal architectures.
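The efficiency argument behind channel-wise concatenation can be illustrated with a small shape sketch. This is not the paper's implementation; the array names and dimensions are assumptions chosen only to show why concatenating along channels keeps the sequence length fixed, while concatenating along the token axis doubles it (and with it the quadratic attention cost).

```python
import numpy as np

# Hypothetical shapes (not from the paper): N visual tokens of
# dimension D from each of two branches (understanding, generation).
N, D = 256, 1024
und_tokens = np.random.randn(N, D)  # understanding-branch tokens
gen_tokens = np.random.randn(N, D)  # generation-branch tokens

# Token-wise concatenation: the sequence grows to 2N tokens, so
# self-attention cost, which is quadratic in sequence length, grows ~4x.
token_wise = np.concatenate([und_tokens, gen_tokens], axis=0)  # (2N, D)

# Channel-wise concatenation: the sequence length stays N; each token
# carries both feature sets in a wider channel dimension (2D). A linear
# projection would typically map 2D back to the model width D.
channel_wise = np.concatenate([und_tokens, gen_tokens], axis=1)  # (N, 2D)

print(token_wise.shape)    # doubled sequence length
print(channel_wise.shape)  # doubled channel width, same length
```

In a transformer backbone, the channel-wise variant trades a wider per-token feature (a cheap linear-cost increase) for avoiding the quadratic cost of a longer sequence, which is the token-budget saving the abstract refers to.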

Xin He, Longhui Wei, Jianbo Ouyang, Minghui Liao, Lingxi Xie, Qi Tian • 2025

Related benchmarks

Task                      | Dataset                                   | Metric                | Result | Rank
Text-to-Image Generation  | GenEval                                   | Overall Score         | 93     | 467
Multimodal Understanding  | MM-Vet                                    | MM-Vet Score          | 73     | 418
Multimodal Understanding  | MMBench                                   | --                    | --     | 367
Text-to-Image Generation  | DPG-Bench                                 | Overall Score         | 85.63  | 173
Multimodal Understanding  | MMMU                                      | MMMU Score            | 62.5   | 78
Mathematical Reasoning    | MathVista (testmini)                      | Accuracy              | 75.8   | 51
Instructive Image Editing | EMU Edit (test)                           | CLIP Image Similarity | 0.911  | 46
Visual Reasoning          | MM-Vet                                    | Score                 | 73     | 34
Text-to-Image Generation  | DPG-Bench                                 | DPG-Bench Score       | 85.63  | 31
Multimodal Understanding  | Multimodal Understanding Benchmarks (CQA, DVQA, TVQA, IVQA, OCRB, MMB, MMMU, MVista, AI2D, MMS, MMV), official (test) | MMB | 85.8 | 15

Showing 10 of 17 rows

Other info

GitHub
