Crab$^{+}$: A Scalable and Unified Audio-Visual Scene Understanding Model with Explicit Cooperation
About
Developing Audio-Visual Large Language Models (AV-LLMs) for unified scene understanding is pivotal in multimodal intelligence. While instruction tuning equips pre-trained models with multi-task abilities, we observe that conventional multi-task unification methods often suffer from severe negative transfer: nearly 55% of tasks degrade compared to single-task training. We attribute this phenomenon to audio-visual task heterogeneity, characterized by disparate task granularity and divergent capability demands, which leads to negative interference under joint training.

To tackle this, we present Crab$^{+}$, a scalable and unified audio-visual scene understanding model that addresses task heterogeneity through explicit cooperation from both the data and the model perspective. On the data side, we introduce AV-UIE v2, a comprehensive Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning processes. It contains approximately 222K samples spanning 17 datasets and 7 tasks, enabling the model to capture cross-task relationships at different levels of granularity. On the model side, we design a unified interface to align heterogeneous task formulations, and we propose Interaction-aware LoRA (I-LoRA), which explicitly models inter-task relationships via dynamic routing to coordinate distinct audio-visual interaction patterns, mitigating parameter interference.

Extensive experiments show that Crab$^{+}$ covers a broader range of tasks than existing unified models while outperforming specialized models on various benchmarks. We successfully reverse the negative transfer trend, achieving positive transfer, where multi-task learning surpasses single-task baselines, in nearly 88% of tasks. These results hold across diverse AV-LLM paradigms and are validated through in-depth visualization, positioning Crab$^{+}$ as a robust step towards holistic audio-visual scene understanding.
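The core I-LoRA idea described above, several low-rank adapters coordinated by a learned router that mixes them per input, can be sketched in a few lines. The abstract does not specify the architecture, expert count, or routing function, so all names and dimensions below are illustrative assumptions; this is a minimal NumPy stand-in, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ILoRALinear:
    """Sketch of an Interaction-aware LoRA layer (hypothetical): a frozen
    base projection plus several low-rank "expert" adapters, each meant to
    capture one audio-visual interaction pattern, mixed per input by a
    dynamic router. Mitigates interference by keeping experts' parameters
    separate and letting the router pick a task-appropriate blend."""

    def __init__(self, d_in, d_out, rank=8, n_experts=3):
        self.W = rng.standard_normal((d_in, d_out)) * 0.02   # frozen base weight
        self.A = rng.standard_normal((n_experts, d_in, rank)) * 0.01
        self.B = np.zeros((n_experts, rank, d_out))          # zero-init: deltas start at 0
        self.Wr = rng.standard_normal((d_in, n_experts)) * 0.02  # router weights

    def __call__(self, x):                                   # x: (batch, d_in)
        gate = softmax(x @ self.Wr)                          # (batch, n_experts), rows sum to 1
        # per-expert low-rank updates x @ A_e @ B_e: (batch, n_experts, d_out)
        delta = np.einsum("bi,eir,ero->beo", x, self.A, self.B)
        # base output plus router-weighted sum of expert deltas
        return x @ self.W + np.einsum("be,beo->bo", gate, delta)

x = rng.standard_normal((4, 16))
layer = ILoRALinear(16, 32)
print(layer(x).shape)  # (4, 32)
```

With `B` zero-initialized (standard LoRA practice), the layer initially reproduces the frozen base projection, so adaptation starts from the pre-trained behavior and the router only gradually differentiates the experts.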
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio-Visual Event Localization | AVE | Accuracy | 83.58 | 39 |
| Audio-Visual Question Answering | AVQA | Accuracy | 92.16 | 37 |
| Emotion Recognition | CREMA-D | -- | -- | 23 |
| Action Recognition | KS | Accuracy | 91.12 | 15 |
| Emotion Recognition | MAFW | Accuracy | 45.6 | 14 |
| Audio-Visual Question Answering | MUSIC-AVQA | Accuracy (Audio) | 79.44 | 5 |
| Action Recognition | UCF 51 | Accuracy | 94.04 | 4 |
| Temporal Localization | AVVP | Segment-level Score | 59.47 | 4 |
| Emotion Recognition | DFEW | Accuracy | 64.34 | 4 |
| Emotion Recognition | MELD | Accuracy | 52.08 | 4 |