# DDAVS: Disentangled Audio Semantics and Delayed Bidirectional Alignment for Audio-Visual Segmentation

## About
Audio-Visual Segmentation (AVS) aims to localize sound-producing objects at the pixel level by jointly leveraging auditory and visual information. However, existing methods often suffer from multi-source entanglement and audio-visual misalignment, which lead to biases toward louder or larger objects while overlooking weaker, smaller, or co-occurring sources. To address these challenges, we propose DDAVS, a Disentangled Audio Semantics and Delayed Bidirectional Alignment framework. To mitigate multi-source entanglement, DDAVS employs learnable queries to extract audio semantics and anchor them within a structured semantic space derived from an audio prototype memory bank. This is further optimized through contrastive learning to enhance discriminability and robustness. To alleviate audio-visual misalignment, DDAVS introduces dual cross-attention with delayed modality interaction, improving the robustness of multimodal alignment. Extensive experiments on the AVS-Objects and VPO benchmarks demonstrate that DDAVS consistently outperforms existing approaches, exhibiting strong performance across single-source, multi-source, and multi-instance scenarios. These results validate the effectiveness and generalization ability of our framework under challenging real-world audio-visual segmentation conditions. Project page: https://trilarflagz.github.io/DDAVS-page/
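The two ideas above can be sketched in a few lines: learnable queries first attend to the audio stream to pull apart per-source semantics, and only afterwards do the two modalities interact through cross-attention in both directions. This is a minimal, illustrative NumPy sketch of that flow, not the authors' implementation; all shapes, names, and the single-head attention are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention: q attends over k, aggregates v.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
d = 16                                     # assumed feature dimension
audio_feats = rng.normal(size=(10, d))     # e.g. 10 audio time frames
visual_feats = rng.normal(size=(50, d))    # e.g. 50 visual patch tokens
queries = rng.normal(size=(4, d))          # learnable audio-semantic queries

# Disentangling: each query attends to the audio features and extracts
# one candidate sound-source semantic (prototype anchoring and the
# contrastive objective would act on these vectors).
audio_sem = attention(queries, audio_feats, audio_feats)      # (4, d)

# Delayed bidirectional alignment: cross-attention runs in both
# directions only after the unimodal representations are formed.
vis_aligned = attention(visual_feats, audio_sem, audio_sem)   # visual <- audio
aud_aligned = attention(audio_sem, visual_feats, visual_feats)  # audio <- visual
```

In a real model each `attention` call would be a learned multi-head layer with projections and residual connections; the sketch only shows the order of interactions, which is the point of the "delayed" design.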
## Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio-Visual Segmentation | AVSBench AVS-Objects-S4 | J&F | 92.4 | 21 |
| Audio-Visual Segmentation | AVSBench AVS-Objects-MS3 | J&F | 75.1 | 21 |
| Audio-Visual Segmentation | VPO-SS 1.0 (test) | J&Fβ | 74.8 | 16 |
| Audio-Visual Segmentation | AVSBench AVS-Semantic | J (Jaccard) | 49.7 | 13 |
| Audio-Visual Segmentation | VPO-MS | J&F | 76.11 | 8 |
| Audio-Visual Segmentation | VPO-MSMI | J&F | 72.84 | 8 |