JAEGER: Joint 3D Audio-Visual Grounding and Reasoning in Simulated Physical Environments
About
Current audio-visual large language models (AV-LLMs) are predominantly restricted to 2D perception, relying on RGB video and monaural audio. This design choice introduces a fundamental dimensionality mismatch that precludes reliable source localization and spatial reasoning in complex 3D environments. We address this limitation with JAEGER, a framework that extends AV-LLMs to 3D space, enabling joint spatial grounding and reasoning by integrating RGB-D observations with multi-channel first-order ambisonics. A core contribution of our work is the neural intensity vector (Neural IV), a learned spatial audio representation that encodes robust directional cues to improve direction-of-arrival (DoA) estimation, even in adverse acoustic scenarios with overlapping sources. To facilitate large-scale training and systematic evaluation, we propose SpatialSceneQA, a benchmark of 61k instruction-tuning samples curated from simulated physical environments. Extensive experiments demonstrate that our approach consistently surpasses 2D-centric baselines across diverse spatial perception and reasoning tasks, underscoring the necessity of explicit 3D modelling for advancing AI in physical environments. Our source code, pre-trained model checkpoints and datasets will be released upon acceptance.
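The Neural IV builds on the classical (non-learned) pseudo-intensity vector, a standard DoA baseline for first-order ambisonics: the time-averaged product of the omnidirectional W channel with the dipole X, Y, Z channels points toward the dominant source. The sketch below is this classical baseline only, not the paper's learned representation; the function name, frame length, and B-format channel layout (ACN order, W/X/Y/Z) are illustrative assumptions.

```python
import numpy as np

def pseudo_intensity_doa(foa, frame_len=1024):
    """Classical pseudo-intensity DoA baseline for first-order ambisonics.

    foa: array of shape (4, n_samples) holding the W, X, Y, Z channels.
    Returns per-frame (azimuth, elevation) estimates in degrees.
    NOTE: hypothetical helper for illustration; not the paper's Neural IV.
    """
    w, x, y, z = foa
    n_frames = foa.shape[1] // frame_len
    az, el = [], []
    for i in range(n_frames):
        s = slice(i * frame_len, (i + 1) * frame_len)
        # Time-averaged pseudo-intensity vector: I ∝ E[w · (x, y, z)].
        # For a single plane wave, I is parallel to the source direction.
        ix = float((w[s] * x[s]).mean())
        iy = float((w[s] * y[s]).mean())
        iz = float((w[s] * z[s]).mean())
        az.append(np.degrees(np.arctan2(iy, ix)))
        el.append(np.degrees(np.arctan2(iz, np.hypot(ix, iy))))
    return np.array(az), np.array(el)
```

This baseline degrades with reverberation and overlapping sources, since the averaged intensity vector then mixes contributions from several directions; that failure mode is precisely what a learned representation such as Neural IV is meant to address.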
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Visual Grounding | Simulated Physical Environments Task C | 3D IoU | 0.32 | 6 |
| Overlap Audio DoA | Simulated Physical Environments Task B | MAE (degrees) | 13.13 | 4 |
| Reasoning 1-speaker | Simulated Physical Environments Task D | Accuracy (%) | 99.5 | 4 |
| Reasoning 2-speaker | Simulated Physical Environments Task E | Accuracy (%) | 99.2 | 4 |
| Audio DoA | Simulated Physical Environments Task A | MAE (degrees) | 2.21 | 4 |
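The MAE entries for Tasks A and B are mean absolute angular errors in degrees. When computing such a metric, the wraparound at 360° must be handled so that, for example, predictions of 350° against a ground truth of 10° score an error of 20°, not 340°. A minimal sketch of this standard computation (the function name is illustrative, not from the benchmark's released code):

```python
import numpy as np

def angular_mae(pred_deg, true_deg):
    """Mean absolute angular error in degrees with 360° wraparound.

    Illustrative helper: the benchmark's exact evaluation script may differ.
    """
    diff = np.abs(np.asarray(pred_deg, dtype=float)
                  - np.asarray(true_deg, dtype=float)) % 360.0
    # Take the shorter arc between the two angles.
    return float(np.minimum(diff, 360.0 - diff).mean())
```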