
Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation

About

In recent years, numerous tasks have been proposed to encourage models to develop specific capabilities in understanding audio-visual scenes, primarily categorized into temporal localization, spatial localization, spatio-temporal reasoning, and pixel-level understanding. In contrast, humans possess a unified understanding ability across such diverse tasks. Designing an audio-visual model with the general capability to unify these tasks is therefore of great value. However, simply training on all tasks jointly can lead to interference, due to the heterogeneity of audio-visual data and the complex relationships among tasks. We argue that this problem can be solved through explicit cooperation among tasks. To achieve this goal, we propose a unified learning method that realizes explicit inter-task cooperation from both the data and model perspectives. Specifically, since the labels of existing datasets are simple words, we carefully refine these datasets and construct an Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning process (AV-UIE), which clarifies the cooperative relationships among tasks. Then, to facilitate concrete cooperation during learning, we design an interaction-aware LoRA structure with multiple LoRA heads, each learning a different aspect of audio-visual data interaction. By unifying explicit cooperation across the data and model aspects, our method not only surpasses existing unified audio-visual models on multiple tasks, but also outperforms most specialized models on certain tasks. Furthermore, we visualize the process of explicit cooperation and, surprisingly, find that each LoRA head has a certain audio-visual understanding ability. Code and dataset: https://github.com/GeWu-Lab/Crab
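The "multiple LoRA heads" idea can be illustrated with a minimal sketch: a frozen base projection plus several independent low-rank adapter pairs whose updates are summed. This is an assumption-laden toy (the class name, head count, and simple summation are illustrative, not the paper's exact interaction-aware design):

```python
import numpy as np

class MultiHeadLoRALinear:
    """Hypothetical sketch: a frozen linear layer with several LoRA heads.

    In the paper's setting, different heads would specialize to different
    aspects of audio-visual interaction; here they are just summed.
    """

    def __init__(self, d_in, d_out, rank=4, num_heads=3, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen base weight (stands in for a pretrained projection).
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        # Each head is a low-rank pair (A_i, B_i); B_i is zero-initialized,
        # so the layer starts identical to the frozen base, as in standard LoRA.
        self.A = [rng.standard_normal((rank, d_in)) / np.sqrt(d_in)
                  for _ in range(num_heads)]
        self.B = [np.zeros((d_out, rank)) for _ in range(num_heads)]
        self.scale = 1.0 / rank

    def __call__(self, x):
        # x: (batch, d_in) -> (batch, d_out)
        y = x @ self.W.T
        for A, B in zip(self.A, self.B):
            # Each head adds its own low-rank update scale * B_i A_i x.
            y = y + self.scale * (x @ A.T) @ B.T
        return y

layer = MultiHeadLoRALinear(d_in=16, d_out=8)
x = np.ones((2, 16))
out = layer(x)
print(out.shape)  # (2, 8)
# With B zero-initialized, the heads contribute nothing yet:
print(np.allclose(out, x @ layer.W.T))  # True
```

During fine-tuning, only the `A`/`B` pairs would be trained while `W` stays frozen, which is what keeps the per-head parameter cost low.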

Henghui Du, Guangyao Li, Chang Zhou, Chunjie Zhang, Alan Zhao, Di Hu • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Audio-Visual Question Answering | MUSIC-AVQA (test) | Acc (Avg) | 78.94 | 59
Audio-Visual Event Localization | AVE (test) | Accuracy | 80.15 | 37
Audio-Visual Event Localization | AVE | Accuracy | 80.15 | 35
Audio-Visual Segmentation | AVSBench MS3 (test) | Jaccard Index (IoU) | 58.21 | 30
Audio-Visual Question Answering | AVQA | Accuracy | 78.94 | 14
Audio-Visual Segmentation | AVS-Bench S4 (test) | mIoU | 73.25 | 9
Referring Audio-Visual Segmentation | Ref-AVS Seen (test) | mIoU | 4.05e+3 | 5
Referring Audio-Visual Segmentation | Ref-AVS | Seen Score | 4.05e+3 | 5
Audio-Visual Segmentation | AVS-Bench AVSS (test) | mIoU | 26.59 | 5
Referring Audio-Visual Segmentation | Ref-AVS Unseen (test) | mIoU | 45.55 | 5

Showing 10 of 19 rows.
