Muddit: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model
About
Unified generation models aim to handle diverse tasks across modalities -- such as text generation, image generation, and vision-language reasoning -- within a single architecture and decoding paradigm. However, autoregressive unified models suffer from slow inference due to sequential decoding, while non-autoregressive unified models generalize weakly because they lack strong pretrained backbones. We introduce the second-generation Meissonic: Muddit, a unified discrete diffusion transformer that enables fast, parallel generation across both text and image modalities. Unlike prior unified diffusion models trained from scratch, Muddit integrates strong visual priors from a pretrained text-to-image backbone with a lightweight text decoder, enabling flexible and high-quality multimodal generation under a unified architecture. Empirical results show that Muddit achieves competitive or superior performance compared to significantly larger autoregressive models in both quality and efficiency. This work highlights the potential of purely discrete diffusion, when equipped with strong visual priors, as a scalable and effective backbone for unified generation.
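The "fast and parallel generation" claim rests on discrete diffusion decoding: all positions start as mask tokens, the model predicts every position at once, and only low-confidence predictions are re-masked for the next step. A minimal sketch of this iterative unmasking loop, in the MaskGIT style common to discrete diffusion samplers (this is an illustrative assumption, not Muddit's released sampler; the function names, cosine schedule, and toy model interface are all hypothetical):

```python
import numpy as np

def parallel_decode(logits_fn, seq_len, vocab_size, steps=8):
    """Confidence-based iterative unmasking (MaskGIT-style sketch).

    logits_fn: maps a token array of shape (seq_len,) -- possibly containing
    the mask id -- to logits of shape (seq_len, vocab_size). In a real model
    this would be a transformer forward pass.
    """
    mask_id = vocab_size  # reserved mask token outside the vocabulary
    tokens = np.full(seq_len, mask_id, dtype=np.int64)
    for step in range(1, steps + 1):
        logits = logits_fn(tokens)                       # (seq_len, vocab_size)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        preds = probs.argmax(-1)                         # greedy pick per position
        conf = probs.max(-1)
        committed = tokens != mask_id
        conf[committed] = np.inf                         # never re-mask committed tokens
        tokens = np.where(committed, tokens, preds)      # tentatively commit everything
        # cosine schedule: fraction of positions left masked after this step
        n_mask = int(np.cos(step / steps * np.pi / 2) * seq_len)
        if n_mask > 0:
            tokens[np.argsort(conf)[:n_mask]] = mask_id  # re-mask least-confident slots
    return tokens
```

Because every step refines all positions in parallel, the number of model calls is a small constant (`steps`) rather than one call per token, which is where the speedup over sequential autoregressive decoding comes from.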
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | GenEval | Overall Score | 90 | 506 |
| Text-to-Image Generation | GenEval | Overall Score | 61 | 391 |
| Visual Question Answering | GQA | Mean Accuracy | 57.8 | 196 |
| Multimodal Understanding | MMMU (val) | -- | -- | 152 |
| Visual Question Answering | VQA v2 (test) | Accuracy | 70.2 | 142 |
| Mathematical Reasoning | MathVista (testmini) | Accuracy | 79.1 | 103 |
| Image Captioning | MS-COCO | CIDEr | 60.1 | 69 |
| Text-to-Image Generation | DPGBench | DPGBench Score | 86.37 | 57 |
| Visual Reasoning | MM-Vet | Score | 76.2 | 40 |
| Multi-modal Question Answering | MMMU | Accuracy | 28.7 | 23 |