Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities

About

Recent Multimodal Large Language Models (MLLMs) achieve promising performance on visual and audio benchmarks independently. However, the ability of these models to process cross-modal information synchronously remains largely unexplored. We introduce Daily-Omni, a multiple-choice Audio-Visual QA benchmark featuring 684 real-world videos and 1,197 questions spanning 6 task families that explicitly require cross-modal temporal reasoning. To support scalable benchmark construction, we develop a semi-automatic pipeline for annotation, cross-modal consistency refinement, temporal alignment elicitation, and text-only leakage filtering, followed by human verification. We further provide a diagnostic evaluation suite and extensively evaluate 24 foundation models under 37 model–modality settings (Audio+Video / Audio-only / Video-only / Text-only). Finally, we include a training-free modular baseline that composes off-the-shelf unimodal models, serving as a diagnostic reference and illustrating how explicit temporal alignment signals affect performance. Results indicate that many end-to-end MLLMs still struggle on alignment-critical questions, suggesting that robust cross-modal temporal alignment remains an important open challenge.
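The text-only leakage filter mentioned in the pipeline can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the idea is to keep a multiple-choice question only if a text-only model (here stubbed by a callable) cannot answer it much better than the 4-option chance rate. The function names, threshold, and trial count are all illustrative assumptions.

```python
import random

CHANCE_RATE = 0.25  # 4-option multiple choice
MARGIN = 0.10       # illustrative tolerance above chance

def text_only_accuracy(answer_fn, question, correct, trials=200):
    """Fraction of trials on which a text-only answerer picks the correct option.

    `answer_fn` stands in for querying a text-only model; it sees only the
    question text, never the audio or video.
    """
    return sum(answer_fn(question) == correct for _ in range(trials)) / trials

def keep_question(answer_fn, question, correct, trials=200):
    """Keep the question only if text-only accuracy stays near chance,
    i.e. the answer is not recoverable from the text alone."""
    acc = text_only_accuracy(answer_fn, question, correct, trials)
    return acc <= CHANCE_RATE + MARGIN

# A uniform guesser cannot exploit textual leakage, so its questions survive:
random.seed(0)
guesser = lambda q: random.choice("ABCD")
print(keep_question(guesser, "Which sound follows the door closing?", "B", trials=1000))
```

In the actual pipeline a real LLM would play the role of `answer_fn`; questions it answers reliably without the video or audio are discarded as leaked.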

Ziwei Zhou, Rui Wang, Zuxuan Wu, Yu-Gang Jiang• 2025

Related benchmarks

Task                        | Dataset   | Result               | Rank
Audio-visual understanding  | DailyOmni | Average Score: 61.82 | 69
