Omni-Captioner: Data Pipeline, Models, and Benchmark for Omni Detailed Perception
About
Fine-grained perception of multimodal information is critical for advancing human-AI interaction. With recent progress in audio-visual technologies, Omni Language Models (OLMs), capable of processing audio and video signals in parallel, have emerged as a promising paradigm for achieving richer understanding and reasoning. However, their capacity to capture and describe fine-grained details remains underexplored. In this work, we present a systematic and comprehensive investigation of omni detailed perception from the perspectives of the data pipeline, models, and benchmark. We first identify an inherent "co-growth" between detail and hallucination in current OLMs. To address this, we propose Omni-Detective, an agentic data generation pipeline integrating tool-calling, to autonomously produce highly detailed yet minimally hallucinatory multimodal data. Based on the data generated with Omni-Detective, we train two captioning models: Audio-Captioner for audio-only detailed perception, and Omni-Captioner for audio-visual detailed perception. Under the cascade evaluation protocol, Audio-Captioner achieves the best performance on MMAU and MMAR among all open-source models, surpassing Gemini 2.5 Flash and delivering performance comparable to Gemini 2.5 Pro. On existing detailed captioning benchmarks, Omni-Captioner sets a new state of the art on VDC and achieves the best trade-off between detail and hallucination on the video-SALMONN 2 test set. Given the absence of a dedicated benchmark for omni detailed perception, we design Omni-Cloze, a novel cloze-style evaluation for detailed audio, visual, and audio-visual captioning that ensures stable, efficient, and reliable assessment. Experimental results and analysis demonstrate the effectiveness of Omni-Detective in generating high-quality detailed captions, as well as the superiority of Omni-Cloze in evaluating such detailed captions.
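As a rough illustration of the scoring idea behind a cloze-style evaluation: each blank in a caption is a multiple-choice item, and a model is scored by the fraction of blanks it fills correctly. The item format and function below are a hypothetical sketch, not the actual Omni-Cloze implementation or data schema.

```python
# Hypothetical sketch of cloze-style caption scoring. The real Omni-Cloze
# benchmark defines its own item format and protocol; this only shows the
# accuracy computation over multiple-choice blanks.

def cloze_accuracy(items, predictions):
    """Fraction of cloze blanks where the predicted option matches the gold one.

    items:       list of dicts, each with a "gold" key (correct option id)
    predictions: list of chosen option ids, aligned with items
    """
    assert len(items) == len(predictions), "one prediction per blank"
    correct = sum(1 for item, pred in zip(items, predictions)
                  if item["gold"] == pred)
    return correct / len(items) if items else 0.0

# Toy example with made-up items: three blanks, two answered correctly.
items = [{"gold": "A"}, {"gold": "C"}, {"gold": "B"}]
preds = ["A", "C", "D"]
print(round(cloze_accuracy(items, preds), 3))  # → 0.667
```

Because every blank is scored against a fixed gold option, this style of evaluation avoids the free-text judging variance that makes detailed-caption comparison unstable.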
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audiovisual Video Captioning | SALMONN 2 (test) | Miss Rate | 17.8 | 37 |
| Audio Question Answering | MMAR | Average Score | 59.8 | 35 |
| Audiovisual Video Captioning | UGC-VideoCap | Audio Score | 69 | 26 |
| Multimodal Cloze | Omni-Cloze Audio | Accuracy | 53.2 | 18 |
| Audio Question Answering | MMAU | Score | 70 | 18 |
| Multimodal Cloze | Omni-Cloze | Visual Score | 57 | 16 |
| Audiovisual Dialogue Description | DiaDemBench | REF | 43.9 | 15 |
| Detailed Captioning | VDC Detailed | Accuracy | 55 | 9 |
| Audio-Visual Question Answering | Daily-Omni | Score | 67.9 | 8 |
| Audio-Visual Question Answering | Video-MME | Score | 67.1 | 8 |