Omni-Diffusion: Unified Multimodal Understanding and Generation with Masked Discrete Diffusion

About

While recent multimodal large language models (MLLMs) have made impressive strides, they predominantly employ a conventional autoregressive architecture as their backbone, leaving significant room to explore effective and efficient alternatives in architectural design. Concurrently, recent studies have successfully applied discrete diffusion models to various domains, such as visual understanding and image generation, revealing their considerable potential as a backbone for multimodal systems. Drawing inspiration from this pioneering research, we introduce Omni-Diffusion, the first any-to-any multimodal language model built entirely on mask-based discrete diffusion, unifying understanding and generation across text, speech, and images. Omni-Diffusion employs a unified mask-based discrete diffusion model to directly capture the joint distribution over discrete multimodal tokens. This approach supports not only bimodal tasks but also more complex scenarios involving multiple modalities. On a diverse set of benchmarks, our method outperforms or performs on par with existing multimodal systems that process two or more modalities, highlighting the significant promise of diffusion models in powering the next generation of multimodal foundation models. Project webpage: https://omni-diffusion.github.io.
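To make the training objective concrete, here is a minimal sketch of one mask-based discrete diffusion training step over a unified sequence of discrete multimodal tokens. This is illustrative only: `MASK_ID`, `VOCAB_SIZE`, the linear masking schedule, and the `model` interface are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical constants -- the paper's actual vocabulary layout,
# noise schedule, and network are not specified in the abstract.
MASK_ID = 0          # id of the special [MASK] token (assumption)
VOCAB_SIZE = 65536   # shared vocab over text/speech/image tokens (assumption)

def masked_diffusion_loss(model, tokens):
    """One training step of mask-based discrete diffusion.

    tokens: (batch, seq_len) LongTensor of discrete multimodal token ids.
    model:  any sequence-to-logits network, e.g. a bidirectional
            transformer returning (batch, seq_len, VOCAB_SIZE).
    """
    b, n = tokens.shape
    # Sample a corruption level t ~ U(0, 1] per sequence; the mask rate
    # equals t (a linear schedule; cosine schedules are also common).
    t = torch.rand(b, 1, device=tokens.device).clamp(min=1e-3)
    # Independently replace each position with [MASK] with probability t.
    is_masked = torch.rand(b, n, device=tokens.device) < t
    corrupted = torch.where(is_masked,
                            torch.full_like(tokens, MASK_ID),
                            tokens)
    # Predict the clean tokens from the corrupted sequence.
    logits = model(corrupted)  # (b, n, VOCAB_SIZE)
    # Cross-entropy only on masked positions: the denoising objective.
    return F.cross_entropy(logits[is_masked], tokens[is_masked])
```

At inference time, models of this family typically generate by starting from an all-`[MASK]` sequence and iteratively unmasking the most confident predictions over several refinement steps, rather than decoding token by token as an autoregressive model would.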

Lijiang Li, Zuwei Long, Yunhang Shen, Heting Gao, Haoyu Cao, Xing Sun, Caifeng Shan, Ran He, Chaoyou Fu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | POPE | Accuracy | 76.6 | 102 |
| Automatic Speech Recognition | LibriSpeech | WER | 0.0705 | 24 |
| Visual Question Answering | SEED-Bench-2-Plus | Accuracy | 34.5 | 11 |
| Text to Image | MSCOCO | CLIP-I (Image-Text Alignment) | 0.667 | 5 |
| Image Question Answering | MME Perception | MME-P Score | 1.22e+3 | 4 |
| Text-to-Speech | LibriTTS | WER | 3.07 | 3 |

Other info

GitHub
