Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding
About
We introduce Lumina-DiMOO, an open-source foundational model for seamless multi-modal generation and understanding. Lumina-DiMOO sets itself apart from prior unified models by using fully discrete diffusion modeling to handle inputs and outputs across modalities. This approach gives Lumina-DiMOO higher sampling efficiency than previous autoregressive (AR) or hybrid AR-diffusion paradigms, while supporting a broad spectrum of multi-modal tasks, including text-to-image generation, image-to-image generation (e.g., image editing, subject-driven generation, and image inpainting), and image understanding. Lumina-DiMOO achieves state-of-the-art performance on multiple benchmarks, surpassing existing open-source unified multi-modal models. To foster further advances in multi-modal and discrete diffusion model research, we release our code and checkpoints to the community. Project Page: https://synbol.github.io/Lumina-DiMOO.
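To illustrate where the sampling-efficiency gain over AR decoding comes from, here is a minimal, hypothetical sketch of masked discrete-diffusion sampling (MaskGIT-style parallel decoding): generation starts from a fully masked sequence and, at each step, commits the highest-confidence token predictions in parallel instead of one token at a time. The `predict_logits` callable and all names below are illustrative assumptions, not the actual Lumina-DiMOO API.

```python
import numpy as np

def discrete_diffusion_sample(predict_logits, seq_len, steps=8):
    """Hypothetical sketch: iteratively unmask the most confident positions.

    predict_logits(tokens) -> (seq_len, vocab_size) logits; masked
    positions are marked with -1. Unlike AR decoding (seq_len forward
    passes), this uses only `steps` forward passes.
    """
    MASK = -1
    tokens = np.full(seq_len, MASK)
    for step in range(steps):
        logits = predict_logits(tokens)                 # (seq_len, vocab)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)                         # per-position guess
        conf = probs.max(-1)                            # its confidence
        conf[tokens != MASK] = -np.inf                  # committed stay fixed
        # commit a growing fraction of the sequence each step
        target = int(np.ceil(seq_len * (step + 1) / steps))
        n_new = target - int((tokens != MASK).sum())
        for idx in np.argsort(-conf)[:max(n_new, 0)]:
            tokens[idx] = pred[idx]
    return tokens

# Toy stand-in "model": deterministic random logits over a 16-token vocab.
rng = np.random.default_rng(42)
toy = lambda toks: rng.normal(size=(toks.shape[0], 16))
out = discrete_diffusion_sample(toy, seq_len=12, steps=4)
```

With `steps=4`, the 12 positions are filled in 4 parallel rounds of 3 commits each; real models trade `steps` against quality.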
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 87.4 | 1455 |
| Multimodal Understanding | MMBench | Accuracy | 83.1 | 637 |
| Text-to-Image Generation | GenEval | Overall Score | 91 | 506 |
| Multimodal Understanding | MMMU | Accuracy | 58.6 | 437 |
| Text-to-Image Generation | GenEval | Overall Score | 87.83 | 391 |
| Text-to-Image Generation | GenEval | GenEval Score | 88 | 360 |
| Text-to-Image Generation | DPG-Bench | Overall Score | 86.04 | 265 |
| Diagram Understanding | AI2D | Accuracy | 43.2 | 247 |
| Text-to-Image Generation | GenEval | Overall Score | 88 | 218 |
| Multimodal Understanding | MME | -- | -- | 207 |