Generate, Analyze, and Refine: Training-Free Sound Source Localization via MLLM Meta-Reasoning

About

Sound source localization (SSL) aims to identify the locations of sound-emitting objects by leveraging correlations between the audio and visual modalities. Most existing SSL methods rely on contrastive learning-based feature matching but lack explicit reasoning and verification, limiting their effectiveness in complex acoustic scenes. Inspired by human meta-cognitive processes, we propose a training-free SSL framework that exploits the intrinsic reasoning capabilities of Multimodal Large Language Models (MLLMs). Our Generation-Analysis-Refinement (GAR) pipeline consists of three stages: Generation produces initial bounding boxes and audio classifications; Analysis quantifies Audio-Visual Consistency via open-set role tagging and anchor voting; and Refinement applies adaptive gating to prevent unnecessary adjustments. Extensive experiments on single-source and multi-source benchmarks demonstrate competitive performance. The source code is available at https://github.com/VisualAIKHU/GAR-SSL.
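The three-stage flow described above can be sketched in a toy form. Everything below is an illustrative assumption: the class, function names, anchor-voting rule, and gating threshold are stand-ins, not the paper's actual MLLM prompts or scoring.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    box: tuple        # (x1, y1, x2, y2) bounding box from the Generation stage
    audio_label: str  # audio classification from the Generation stage
    role_tags: list   # open-set role tags assigned during Analysis

def analyze(cand, anchors):
    """Analysis stage (toy): anchor voting -- the fraction of anchor
    labels agreeing with the candidate's audio classification serves
    as a stand-in audio-visual consistency score."""
    votes = sum(1 for a in anchors if a == cand.audio_label)
    return votes / max(len(anchors), 1)

def refine(cand, avc_score, gate=0.5):
    """Refinement stage (toy): adaptive gating -- when consistency is
    already high the box is left untouched (no unnecessary adjustment);
    otherwise it is adjusted (here, shrunk toward its center as a
    placeholder for re-localization)."""
    if avc_score >= gate:
        return cand.box  # gate closed: keep the initial box
    x1, y1, x2, y2 = cand.box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w4, h4 = (x2 - x1) / 4, (y2 - y1) / 4
    return (cx - w4, cy - h4, cx + w4, cy + h4)

# Generation stage output (mocked): one candidate plus anchor labels
cand = Candidate(box=(10, 10, 50, 50), audio_label="guitar",
                 role_tags=["sound source", "instrument"])
high = analyze(cand, ["guitar", "guitar", "violin"])  # 2/3 agree
low = analyze(cand, ["dog", "violin", "siren"])       # 0/3 agree

print(refine(cand, high))  # consistent: box kept as-is
print(refine(cand, low))   # inconsistent: box adjusted
```

The gating step mirrors the stated goal of Refinement: adjustments happen only when the consistency check fails, so confident initial predictions pass through unchanged.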

Subin Park, Jung Uk Kim • 2026

Related benchmarks

Task                               Dataset                         Metric    Result  Rank
Single-source sound localization   VGGSound single-source (test)   IoU@0.5   60.2    39
Multi-sound source localization    MUSIC-Duet (test)               CIoU@0.3  82.7    37
Multi-sound source localization    VGGSound-Duet (test)            CIoU@0.3  77.6    37
Single-source sound localization   MUSIC-Solo (test)               IoU@0.5   98.5    26
