
Seeing is Believing? Enhancing Vision-Language Navigation using Visual Perturbations

About

Autonomous navigation guided by natural language instructions in embodied environments remains a challenge for vision-language navigation (VLN) agents. Although recent advances in learning diverse and fine-grained visual environmental representations have shown promise, the fragile performance gains cannot be conclusively attributed to enhanced visual grounding, a limitation also observed in related vision-language tasks. In this work, we conduct a preliminary investigation into whether advanced VLN models genuinely comprehend the visual content of their environments by introducing varying levels of visual perturbation. These perturbations include ground-truth depth images, perturbed views, and random noise. Surprisingly, we find experimentally that simple branch expansion, even with noisy visual inputs, paradoxically improves navigation performance. Inspired by these insights, we further present a versatile Multi-Branch Architecture (MBA) designed to examine the impact of both branch quantity and visual quality. The proposed MBA extends a base agent into a multi-branch variant, where each branch processes a different visual input. This approach is embarrassingly simple yet agnostic to topology-based VLN agents. Extensive experiments on three VLN benchmarks (R2R, REVERIE, SOON) demonstrate that our method, with optimal visual perturbations, matches or even surpasses state-of-the-art results. The source code is available here.
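The multi-branch idea described above can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's actual implementation: the branch names, the linear scoring heads, and the averaging fusion are all hypothetical stand-ins for the real agent. Each branch receives a different visual input (original RGB features, depth features, or random noise) and scores the same set of candidate viewpoints; the fused scores pick the next action.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_scores(features, weights):
    """Score candidate viewpoints with one branch's (hypothetical) linear head."""
    return features @ weights

# Hypothetical setup: 4 candidate viewpoints, 8-dim visual features per view.
num_candidates, feat_dim = 4, 8
rgb   = rng.normal(size=(num_candidates, feat_dim))  # original view features
depth = rng.normal(size=(num_candidates, feat_dim))  # ground-truth depth features
noise = rng.normal(size=(num_candidates, feat_dim))  # pure random-noise input

# Each branch pairs one visual input with its own copy of the scoring head.
branches = [
    (rgb,   rng.normal(size=feat_dim)),
    (depth, rng.normal(size=feat_dim)),
    (noise, rng.normal(size=feat_dim)),
]

# Fuse by averaging branch logits, then select the next viewpoint greedily.
logits = np.mean([branch_scores(f, w) for f, w in branches], axis=0)
action = int(np.argmax(logits))
print(action)
```

The point of the sketch is structural: adding branches changes only how many visual streams feed the fusion step, so the base agent's topology-based policy is left untouched, which is why the approach is agnostic to the underlying VLN agent.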

Xuesong Zhang, Jia Li, Yunbo Xu, Zhenzhen Hu, Richang Hong • 2024

Related benchmarks

Task                              Dataset              Metric  Result  Rank
Vision-and-Language Navigation    SOON (val unseen)    SPL     29.6    25
Vision-and-Language Navigation    SOON Unseen (test)   SR      38.8    9
