
Uni-X: Mitigating Modality Conflict with a Two-End-Separated Architecture for Unified Multimodal Models

About

Unified Multimodal Models (UMMs) built on shared autoregressive (AR) transformers are attractive for their architectural simplicity. However, we identify a critical limitation: when trained on multimodal inputs, modality-shared transformers suffer from severe gradient conflicts between vision and text, particularly in shallow and deep layers. We trace this issue to the fundamentally different low-level statistical properties of images and text, while noting that conflicts diminish in middle layers where representations become more abstract and semantically aligned. To overcome this challenge, we propose Uni-X, a two-end-separated, middle-shared architecture. Uni-X dedicates its initial and final layers to modality-specific processing, while maintaining shared parameters in the middle layers for high-level semantic fusion. This X-shaped design not only eliminates gradient conflicts at both ends but also further alleviates residual conflicts in the shared layers. Extensive experiments validate the effectiveness of Uni-X. Under identical training conditions, Uni-X achieves superior training efficiency compared to strong baselines. When scaled to 3B parameters with larger training data, Uni-X matches or surpasses 7B AR-based UMMs, achieving a GenEval score of 82 for image generation alongside strong performance in text and vision understanding tasks. These results establish Uni-X as a parameter-efficient and scalable foundation for future unified multimodal modeling. Our code is available at https://github.com/CURRENTF/Uni-X
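The X-shaped routing described above can be sketched structurally: tokens enter through modality-specific shallow layers, pass through shared middle layers for semantic fusion, and exit through modality-specific deep layers. The sketch below is illustrative only; the layer counts and names are hypothetical, not the paper's configuration, and toy "layers" that record their names stand in for transformer blocks.

```python
# Structural sketch of the Uni-X two-end-separated, middle-shared layout.
# Layer counts are assumptions for illustration, not the paper's values.

def make_layer(name):
    """Return a toy 'layer' that appends its name to a token's trace."""
    return lambda trace: trace + [name]

N_SPECIFIC = 2   # assumed modality-specific layers at each end
N_SHARED = 4     # assumed shared middle layers

vision_in  = [make_layer(f"vision_in_{i}")  for i in range(N_SPECIFIC)]
text_in    = [make_layer(f"text_in_{i}")    for i in range(N_SPECIFIC)]
shared_mid = [make_layer(f"shared_{i}")     for i in range(N_SHARED)]
vision_out = [make_layer(f"vision_out_{i}") for i in range(N_SPECIFIC)]
text_out   = [make_layer(f"text_out_{i}")   for i in range(N_SPECIFIC)]

def forward(modality):
    """Route a token of the given modality through the X-shaped stack."""
    head = vision_in if modality == "vision" else text_in
    tail = vision_out if modality == "vision" else text_out
    trace = []
    for layer in head + shared_mid + tail:
        trace = layer(trace)
    return trace
```

Because vision and text tokens only share the middle layers, gradients from the two modalities never mix in the shallow and deep blocks, which is where the paper reports the conflicts are most severe.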

Jitai Hao, Hao Liu, Xinyan Xiao, Qiang Huang, Jun Yu · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Understanding | MMBench | -- | -- | 637 |
| Multimodal Understanding | SEED-Bench | -- | -- | 343 |
| Multimodal Understanding | MME | MME Score | 1230 | 207 |
| Image Editing | ImgEdit-Bench | Overall Score | 3.44 | 191 |
| Text-to-Image Generation | T2I-CompBench | Shape Fidelity | 56.3 | 185 |
| Multimodal Understanding | POPE | POPE Score | 0.846 | 90 |
| Image Generation | GenEval | Overall Score | 83 | 57 |
| Image Generation | DPG | DPG Score | 80.3 | 47 |
| Language Understanding | ARC, WinoGrande, BoolQ, MMLU | ARC-E Accuracy | 79 | 5 |
| Text-to-Image Generation | MSCOCO | CLIP-T | 31.8 | 3 |
