Listen First, Then Answer: Timestamp-Grounded Speech Reasoning
About
Large audio-language models (LALMs) can generate reasoning chains for their predictions, but it remains unclear whether these chains stay grounded in the input audio. In this paper, we propose an RL-based strategy that grounds the reasoning outputs of LALMs with explicit timestamp annotations referring to the relevant segments of the audio signal. Our analysis shows that timestamp grounding leads the model to attend more strongly to audio tokens while generating its reasoning. Experiments on four speech-based benchmark datasets demonstrate that our approach improves performance over both zero-shot reasoning and fine-tuning without timestamp grounding. Moreover, grounding amplifies desirable reasoning behaviors, such as region exploration, auditory verification, and consistency, underscoring the importance of grounding mechanisms for faithful multimodal reasoning.
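As a rough illustration of how an RL reward for timestamp-grounded reasoning might be shaped, consider the sketch below. This is not the paper's released code: the `[Xs - Ys]` span syntax, the format/accuracy decomposition, and the 0.5/0.5 weighting are all assumptions made for the example.

```python
import re

# Hypothetical sketch of a timestamp-grounding reward for RL fine-tuning.
# It checks (a) that the reasoning chain cites well-formed timestamp spans
# lying inside the audio, and (b) that the final answer is correct.
TIMESTAMP_RE = re.compile(r"\[(\d+(?:\.\d+)?)s\s*-\s*(\d+(?:\.\d+)?)s\]")

def grounding_reward(reasoning: str, answer: str,
                     gold_answer: str, audio_duration: float) -> float:
    """Return a scalar reward in [0, 1] (weights are illustrative)."""
    spans = [(float(a), float(b)) for a, b in TIMESTAMP_RE.findall(reasoning)]
    # Format term: at least one span, and every span in bounds with start < end.
    valid = [s for s in spans if 0.0 <= s[0] < s[1] <= audio_duration]
    format_r = 1.0 if spans and len(valid) == len(spans) else 0.0
    # Accuracy term: exact-match answer correctness.
    accuracy_r = 1.0 if answer.strip().lower() == gold_answer.strip().lower() else 0.0
    return 0.5 * format_r + 0.5 * accuracy_r

if __name__ == "__main__":
    chain = "The speaker's pitch rises sharply at [2.4s - 3.1s], suggesting surprise."
    print(grounding_reward(chain, "surprise", "surprise", audio_duration=8.0))  # 1.0
```

In practice such a reward would be combined with a policy-gradient method (e.g. GRPO or PPO) over sampled reasoning chains; the point here is only that timestamp citations are cheap to verify against the audio length, which makes them a natural grounding signal.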
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Emotion Recognition in Conversation | MELD | -- | -- | 16 |
| Speech Reasoning | MMAU Speech mini | Speech Score | 74.47 | 11 |
| Speech Reasoning | MMAR-Speech | Speech Accuracy | 64.63 | 11 |
| Speech Understanding | AIR-Bench | SER | 58.5 | 10 |