
Crab$^{+}$: A Scalable and Unified Audio-Visual Scene Understanding Model with Explicit Cooperation

About

Developing Audio-Visual Large Language Models (AV-LLMs) for unified scene understanding is pivotal in multimodal intelligence. While instruction tuning equips pre-trained models with multi-task abilities, we observe that conventional multi-task unification methods often suffer from severe negative transfer, where nearly 55% of tasks degrade compared to single-task training. We attribute this phenomenon to audio-visual task heterogeneity, characterized by disparate task granularity and divergent capability demands, which lead to negative interference under joint training. To tackle this, we present Crab$^{+}$, a scalable and unified audio-visual scene understanding model that addresses task heterogeneity through explicit cooperation from both data and model perspectives. On the data side, we introduce AV-UIE v2, a comprehensive Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning processes. It contains approximately 222K samples spanning 17 datasets and 7 tasks, enabling the model to capture cross-task relationships at different levels of granularity. On the model side, we design a unified interface to align heterogeneous task formulations, and propose Interaction-aware LoRA (I-LoRA), which explicitly models inter-task relationships via dynamic routing to coordinate distinct audio-visual interaction patterns, mitigating parameter interference. Extensive experiments show Crab$^{+}$ covers broader tasks than existing unified models while outperforming specialized models on various benchmarks. We successfully reverse the negative transfer trend, achieving positive transfer where multi-task learning surpasses single-task baselines in nearly 88% of tasks. These results hold across diverse AV-LLM paradigms and are validated through in-depth visualization, positioning Crab$^{+}$ as a robust step towards holistic audio-visual scene understanding.
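The abstract does not publish the I-LoRA implementation, but the core idea it describes (multiple low-rank adapter branches combined by a dynamic router so tasks with similar audio-visual interaction patterns share parameters) can be sketched minimally. The sketch below is an illustrative NumPy toy, not the paper's code; the class name, expert count, and router design are assumptions.

```python
import numpy as np

class InteractionAwareLoRA:
    """Toy sketch of an I-LoRA-style layer (illustrative only).

    A frozen base weight W is augmented by several low-rank "expert"
    branches A_e @ B_e; a learned router produces per-input mixing
    weights, so different tasks can share or separate adaptation
    parameters instead of interfering in a single LoRA.
    """

    def __init__(self, d_in, d_out, rank=4, n_experts=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_in, d_out)) * 0.02   # frozen base weight
        # Per-expert low-rank factors; B is zero-initialized as in standard LoRA,
        # so the adapted layer starts identical to the frozen one.
        self.A = [rng.standard_normal((d_in, rank)) * 0.02 for _ in range(n_experts)]
        self.B = [np.zeros((rank, d_out)) for _ in range(n_experts)]
        self.router = rng.standard_normal((d_in, n_experts)) * 0.02

    def __call__(self, x):
        # x: (batch, d_in). Softmax over router logits -> (batch, n_experts).
        logits = x @ self.router
        gates = np.exp(logits - logits.max(axis=-1, keepdims=True))
        gates = gates / gates.sum(axis=-1, keepdims=True)
        out = x @ self.W
        for e, (A, B) in enumerate(zip(self.A, self.B)):
            # Each expert's low-rank update is scaled by its routing weight.
            out = out + gates[:, e:e + 1] * (x @ A @ B)
        return out
```

Because the `B` factors start at zero, the layer initially reproduces the frozen projection exactly; during fine-tuning, the router would learn which expert(s) each input activates, which is one way to realize the "dynamic routing to coordinate distinct audio-visual interaction patterns" the abstract mentions.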

Dongnuan Cai, Henghui Du, Chang Zhou, Xi Chen, Dan Guo, Hongyuan Zhang, Xuelong Li, Di Hu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Audio-Visual Event Localization | AVE | Accuracy 83.58 | 39 |
| Audio-Visual Question Answering | AVQA | Accuracy 92.16 | 37 |
| Emotion Recognition | CREMA-D | -- | 23 |
| Action Recognition | KS | Accuracy 91.12 | 15 |
| Emotion Recognition | MAFW | Accuracy 45.6 | 14 |
| Audio-Visual Question Answering | MUSIC-AVQA | Accuracy (Audio) 79.44 | 5 |
| Action Recognition | UCF 51 | Accuracy 94.04 | 4 |
| Temporal Localization | AVVP | Segment-level Score 59.47 | 4 |
| Emotion Recognition | DFEW | Accuracy 64.34 | 4 |
| Emotion Recognition | MELD | Accuracy 52.08 | 4 |