OmniCaptioner: One Captioner to Rule Them All
About
We propose OmniCaptioner, a versatile visual captioning framework for generating fine-grained textual descriptions across a wide variety of visual domains. Unlike prior methods limited to specific image types (e.g., natural images or geometric visuals), our framework provides a unified solution for captioning natural images, visual text (e.g., posters, UIs, textbooks), and structured visuals (e.g., documents, tables, charts). By converting low-level pixel information into semantically rich textual representations, our framework bridges the gap between visual and textual modalities. Our results highlight three key advantages: (i) Enhanced Visual Reasoning with LLMs, where long-context captions of visual modalities empower LLMs, particularly the DeepSeek-R1 series, to reason effectively in multimodal scenarios; (ii) Improved Image Generation, where detailed captions improve tasks like text-to-image generation and image transformation; and (iii) Efficient Supervised Fine-Tuning (SFT), which enables faster convergence with less data. We believe the versatility and adaptability of OmniCaptioner can offer a new perspective for bridging the gap between language and visual modalities.
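The caption-then-reason idea in advantage (i) can be sketched as a two-stage pipeline: a captioner turns pixels into rich text, and a text-only LLM reasons over that text. The functions below (`generate_caption`, `build_reasoning_prompt`, `caption_then_reason`) are hypothetical stubs for illustration only, not the project's actual API.

```python
def generate_caption(image_path: str) -> str:
    # Hypothetical stand-in for OmniCaptioner inference:
    # converts low-level pixel information into a detailed caption.
    return ("A bar chart comparing model accuracy across benchmarks, "
            "with the tallest bar labeled MathVista at 67.5.")

def build_reasoning_prompt(caption: str, question: str) -> str:
    # The text-only LLM never sees pixels; it reasons over the caption.
    return ("You are given a textual description of an image.\n"
            f"Image description: {caption}\n"
            f"Question: {question}\n"
            "Reason step by step, then answer.")

def caption_then_reason(image_path: str, question: str, llm) -> str:
    # Stage 1: visual modality -> text. Stage 2: text-only reasoning.
    caption = generate_caption(image_path)
    return llm(build_reasoning_prompt(caption, question))

# Usage with a trivial echo "LLM" standing in for a DeepSeek-R1-style model:
answer = caption_then_reason("chart.png",
                             "Which benchmark has the tallest bar?",
                             llm=lambda prompt: prompt)
```

Because the second stage consumes only text, any strong LLM can be plugged in without multimodal training.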
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Multimodal Reasoning | MathVista | Accuracy | 67.5 | 218 |
| Multimodal Reasoning | MMMU | Accuracy | 62.2 | 130 |
| Multimodal Reasoning | WeMath | Accuracy | 38.7 | 129 |
| Multimodal Reasoning | MathVision | Accuracy | 43.3 | 102 |
| Multimodal Reasoning | LogicVista | Accuracy | 56.2 | 99 |
| Multimodal Reasoning | MathVerse | Accuracy | 48.0 | 84 |
| Multimodal Reasoning | DynaMath | Accuracy | 30.5 | 58 |