
Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection

About

Vision-Language Models (VLMs) excel at visual question answering (VQA) but remain limited to snapshot vision, reasoning from static images. In contrast, embodied agents require ambulatory vision: actively moving to obtain more informative views. We introduce Visually Grounded Active View Selection (VG-AVS), the task of selecting the most informative next viewpoint using only the visual information in the current image, without relying on scene memory or external knowledge. To support this task, we construct a synthetic dataset with automatically generated paired query-target views and question-answer prompts. We also propose a framework that fine-tunes pretrained VLMs through supervised fine-tuning (SFT) followed by RL-based policy optimization. Our approach achieves strong question-answering performance through its viewpoint selection and generalizes robustly to unseen synthetic and real scenes. Furthermore, incorporating the learned VG-AVS framework into existing scene-exploration-based embodied question answering (EQA) systems improves downstream question-answering accuracy.
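The two-stage recipe (SFT on the paired query-target views, then RL-based policy optimization) can be pictured with the toy sketch below. Everything in it is a hypothetical placeholder assumed for illustration: the ToyViewPolicy stand-in for a pretrained VLM, the NUM_VIEWS and FEATURE_DIM sizes, and the QA-success reward are not the paper's actual model, data pipeline, or RL algorithm.

```python
# Illustrative sketch only: a toy two-stage loop (SFT, then a REINFORCE-style
# policy update) for next-view selection. All names below are hypothetical
# stand-ins, not the paper's implementation.
import torch
import torch.nn as nn
from torch.distributions import Categorical

NUM_VIEWS = 8        # assumed: number of candidate next viewpoints per step
FEATURE_DIM = 512    # assumed: dimensionality of the image/question encoding

class ToyViewPolicy(nn.Module):
    """Stand-in for a pretrained VLM head that scores candidate next views."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(FEATURE_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_VIEWS)
        )

    def forward(self, obs):      # obs: (batch, FEATURE_DIM)
        return self.head(obs)    # logits over candidate viewpoints

policy = ToyViewPolicy()
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)

# Stage 1: supervised fine-tuning on automatically generated query/target pairs.
def sft_step(obs, target_view):
    """Cross-entropy toward the labeled 'most informative next view'."""
    loss = nn.functional.cross_entropy(policy(obs), target_view)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Stage 2: RL-based policy optimization.
def rl_step(obs, reward_fn):
    """REINFORCE update: reward views that let the QA model answer correctly."""
    dist = Categorical(logits=policy(obs))
    action = dist.sample()                 # sampled next viewpoint
    reward = reward_fn(action)             # e.g. 1.0 if QA succeeds, else 0.0
    loss = -(dist.log_prob(action) * reward).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

if __name__ == "__main__":
    obs = torch.randn(4, FEATURE_DIM)                 # dummy batch of encodings
    tgt = torch.randint(0, NUM_VIEWS, (4,))           # dummy SFT labels
    print("SFT loss:", sft_step(obs, tgt))
    dummy_reward = lambda a: (a == tgt).float()       # dummy QA-success reward
    print("RL loss:", rl_step(obs, dummy_reward))
```

The REINFORCE estimator here is only one simple way to realize "RL-based policy optimization"; the abstract does not specify which RL algorithm the framework actually uses.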

Juil Koo, Daehyeon Choi, Sangwoo Youn, Phillip Y. Lee, Minhyuk Sung • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visually-Grounded Active View Selection | AVS-ProcTHOR (val) | Existence Score | 91.47 | 11
Visually-Grounded Active View Selection | AVS-HM3D (val) | Existence | 81.25 | 11
Active Visual Search | SAT (synthetic) | Accuracy | 69.33 | 4
Active Visual Search | SAT (real) | Accuracy | 77.33 | 4
Embodied Question Answering | Open-EQA (test) | Object Recognition | 52.8 | 4
Embodied Question Answering | Fine-EQA | LLM-Match (Attr) | 59.08 | 4

Other info

GitHub
