
BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge

About

Given an audio-visual pair, audio-visual segmentation (AVS) aims to locate sounding sources by predicting pixel-wise maps. Previous methods assume that each sound component in an audio signal always has a visual counterpart in the image. However, this assumption overlooks the fact that off-screen sounds and background noise often contaminate audio recordings in real-world scenarios. Such contamination makes it difficult for AVS models to build a consistent semantic mapping between audio and visual signals, and thus impedes precise sound localization. In this work, we propose a two-stage bootstrapping audio-visual segmentation framework that incorporates multi-modal foundation knowledge. In a nutshell, our BAVS is designed to eliminate the interference of background noise and off-screen sounds in segmentation by establishing audio-visual correspondences explicitly. In the first stage, we employ a segmentation model to localize potential sounding objects from visual data, unaffected by contaminated audio signals. Meanwhile, we utilize a foundation audio classification model to discern audio semantics. Since the audio tags produced by the audio foundation model are noisy, associating object masks with audio tags is not trivial. Thus, in the second stage, we develop an audio-visual semantic integration strategy (AVIS) to localize the objects that are genuinely producing sound. Specifically, we construct an audio-visual tree based on the hierarchical correspondence between sounds and object categories. We then examine the label concurrency between the localized objects and the classified audio tags by tracing the audio-visual tree. With AVIS, we can effectively segment real-sounding objects. Extensive experiments demonstrate the superiority of our method on AVS datasets, particularly in scenarios involving background noise. Our project website is https://yenanliu.github.io/AVSS.github.io/.
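The tree-tracing step of AVIS can be sketched as follows. This is an illustrative toy example only: the category names, the tree itself, and the matching rule (an audio tag and a visual label are considered concurrent if their lineages in the tree share a non-root category) are assumptions for illustration, not the paper's actual taxonomy or algorithm.

```python
# Toy audio-visual tree as child -> parent links (hypothetical categories).
TREE = {
    "guitar": "musical_instrument",
    "piano": "musical_instrument",
    "dog_bark": "animal_sound",
    "musical_instrument": "sound",
    "animal_sound": "sound",
}

def ancestors(node):
    """Yield the node and all of its ancestors up to the root."""
    while node is not None:
        yield node
        node = TREE.get(node)

def is_concurrent(audio_tag, object_label):
    """Treat an audio tag and a localized object label as concurrent
    if their lineages intersect below the root 'sound' node."""
    a = set(ancestors(audio_tag)) - {"sound"}
    b = set(ancestors(object_label)) - {"sound"}
    return bool(a & b)

# A "guitar" audio tag is concurrent with a "piano" object label
# (both trace to "musical_instrument"), but not with "dog_bark".
```

Masks whose labels fail this concurrency check against every audio tag would be discarded as silent or off-screen.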

Chen Liu, Peike Li, Hu Zhang, Lincheng Li, Zi Huang, Dadong Wang, Xin Yu • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio-Visual Segmentation | AVSBench S4 v1 (test) | MJ | 82.7 | 55 |
| Audio-Visual Segmentation | AVSBench MS3 v1 (test) | Mean Jaccard | 59.6 | 37 |
| Audio-Visual Segmentation | AVSBench MS3 (test) | Jaccard Index (IoU) | 50.2 | 30 |
| Audio-Visual Semantic Segmentation | AVSBench AVSS v1 (test) | MJ | 33.6 | 29 |
| Sound Target Segmentation | AVSBench-object MS3 1.0 (test) | mIoU | 58.6 | 23 |
| Audio-Visual Segmentation | AVSBench AVS-Objects-S4 | J&F Score | 86.2 | 21 |
| Audio-Visual Segmentation | AVSBench AVS-Objects-MS3 | J&F Score | 62.8 | 21 |
| Audio-Visual Segmentation | AVS-Object S4 | J&Fm | 86.2 | 19 |
| Audio-Visual Segmentation | AVS-Object MS3 | J&Fm Combined Score | 62.8 | 19 |
| Audio-Visual Segmentation | AVSBench AVS-Semantic | J (Jaccard) | 33.6 | 13 |

Showing 10 of 13 rows.
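The Jaccard-style metrics in the table (MJ, mIoU, J) all reduce to intersection-over-union between a predicted mask and the ground-truth mask. A minimal sketch, using NumPy and toy 2x3 masks (not data from the benchmarks above):

```python
import numpy as np

def jaccard(pred, gt):
    """Jaccard index (IoU) between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks count as a perfect match.
    return inter / union if union else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])
# intersection = 2 pixels, union = 4 pixels -> IoU = 0.5
```

Benchmark scores such as "Mean Jaccard" average this quantity over all test samples (and, for semantic variants, over classes); the J&F scores additionally average in a boundary F-measure.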
