
TAC: Timestamped Audio Captioning

About

Large Audio Language Models struggle to disentangle overlapping events in complex acoustic scenes, yielding temporally inconsistent captions and frequent hallucinations. We introduce Timestamped Audio Captioner (TAC), a model that produces temporally grounded audio descriptions at varying degrees of detail and resolution. TAC is trained with a synthetic data pipeline that constructs challenging and dynamic mixtures from real-world audio sources, enabling robust learning under realistic polyphonic conditions. Across event detection and dense captioning, TAC outperforms all competing methods, with a low hallucination rate and accurate temporal grounding. We also introduce TAC-V, an audio-visual pipeline that generates semantically rich audio-visual descriptions. We then show that TAC and TAC-V serve as a "semantic bridge" for a text-only reasoner: simple TAC→LLM and TAC-V→LLM cascades achieve state-of-the-art scores on benchmarks for audio (MMAU-Pro, MMSU, MMAR) and audio-visual (Daily-Omni, Video-Holmes) understanding and reasoning, respectively.
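The cascade idea above can be illustrated with a minimal sketch: timestamped captions are serialized into a text prompt that a text-only reasoner LLM can consume. All names here (`TimedCaption`, `captions_to_prompt`, the example captions) are hypothetical illustrations, not the released TAC API.

```python
# Hypothetical sketch of a TAC -> LLM cascade: timestamped audio captions
# are rendered as plain text and handed to a text-only reasoner.
# Nothing here is the actual TAC interface; it only illustrates the idea.

from dataclasses import dataclass


@dataclass
class TimedCaption:
    start: float  # event start time, in seconds
    end: float    # event end time, in seconds
    text: str     # natural-language description of the event


def captions_to_prompt(captions: list[TimedCaption], question: str) -> str:
    """Serialize timestamped captions into a prompt for a text-only LLM."""
    lines = [f"[{c.start:.1f}s-{c.end:.1f}s] {c.text}" for c in captions]
    return "Audio events:\n" + "\n".join(lines) + f"\n\nQuestion: {question}"


# Example: two overlapping events, as TAC might describe a polyphonic scene.
captions = [
    TimedCaption(0.0, 2.3, "a dog barks twice"),
    TimedCaption(1.8, 4.0, "a car horn honks over the barking"),
]
prompt = captions_to_prompt(captions, "Which sound overlaps the barking?")
print(prompt)
```

Because the bridge is plain text, the reasoner needs no audio encoder; any downstream LLM call (omitted here) would simply receive `prompt`.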

Sonal Kumar, Prem Seetharaman, Ke Chen, Oriol Nieto, Jiaqi Su, Zhepei Wang, Rithesh Kumar, Dinesh Manocha, Nicholas J. Bryan, Zeyu Jin, Justin Salamon • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio-visual understanding | AVHBench | Overall Score | 81.7 | 8 |
| Audio-visual understanding & reasoning | Daily-Omni | Score | 77.9 | 6 |
| Audio-visual understanding & reasoning | World-Sense | Score | 58.6 | 5 |
| Audio-visual understanding & reasoning | Video-Holmes | Score | 59.2 | 4 |
| Audio-visual understanding & reasoning | AVHBench AVM | Score | 61.6 | 4 |
| Audio-visual understanding & reasoning | AVHBench AVC | Score | 22.6 | 4 |
| Audio understanding & reasoning | MMAU Sound | Score | 79.7 | 3 |
| Audio understanding & reasoning | MMAU Speech | Score | 79.3 | 3 |
| Audio understanding & reasoning | MMAR | Score | 71.9 | 3 |
| Audio understanding & reasoning | MMSU | Score | 0.724 | 3 |

Showing 10 of 13 rows.
