VLD: Visual Language Goal Distance for Reinforcement Learning Navigation
About
Training end-to-end policies from image data to directly predict navigation actions for robotic systems has proven inherently difficult. Existing approaches often suffer either from the sim-to-real gap during policy transfer or from a limited amount of training data with action labels. To address this problem, we introduce Vision-Language Distance (VLD) learning, a scalable framework for goal-conditioned navigation that decouples perception learning from policy learning. Instead of relying on raw sensory inputs during policy training, we first train a self-supervised distance-to-goal predictor on internet-scale video data. This predictor generalizes across both image- and text-based goals, providing a distance signal that a reinforcement learning (RL) policy can minimize. The RL policy is trained entirely in simulation using privileged geometric distance signals, with injected noise to mimic the uncertainty of the trained distance predictor. At deployment, the policy consumes VLD predictions, inheriting semantic goal information ("where to go") from large-scale visual training while retaining the robust low-level navigation behaviors learned in simulation. We propose using ordinal consistency to assess distance functions directly and demonstrate that VLD outperforms prior temporal distance approaches such as ViNT and VIP. Experiments show that our decoupled design achieves competitive navigation performance in simulation while supporting flexible goal modalities, providing an alternative and, most importantly, scalable path toward reliable, multimodal navigation policies.
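Ordinal consistency asks only whether a distance predictor ranks states along a trajectory in the same order as the ground-truth distance-to-goal, which Kendall's tau measures directly. As a minimal sketch (not the paper's evaluation code), the snippet below computes Kendall's tau between hypothetical predicted and ground-truth distances in pure Python; the example values are illustrative, not results from the paper:

```python
from itertools import combinations

def kendall_tau(pred, true):
    """Kendall's tau rank correlation between predicted and true distances.

    Counts concordant vs. discordant pairs over all index pairs:
    +1 means the predictor preserves the ground-truth ordering perfectly,
    -1 means it reverses it. Tied pairs count toward neither.
    """
    assert len(pred) == len(true) and len(pred) >= 2
    concordant = discordant = 0
    for i, j in combinations(range(len(pred)), 2):
        s = (pred[i] - pred[j]) * (true[i] - true[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(pred) * (len(pred) - 1) // 2
    return (concordant - discordant) / n_pairs

# Ground-truth distance-to-goal along a trajectory approaching the goal
true = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
# Noisy distances from a hypothetical learned predictor (one local inversion)
pred = [9.5, 9.1, 7.2, 7.8, 5.9, 5.1, 4.3, 2.7, 2.2, 0.8]
print(round(kendall_tau(pred, true), 2))  # → 0.96 (44 concordant, 1 discordant of 45 pairs)
```

A perfectly monotone predictor scores 1.0 regardless of the absolute scale of its outputs, which is why ordinal consistency evaluates a distance function as a reward signal without requiring metric calibration.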
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| ObjectNav | Gibson (val) | Success Rate | 73.14 | 18 |
| Ordinal Consistency | HM3D | Kendall's τ (20 steps) | 0.81 | 8 |
| Ordinal Consistency | In-the-wild 50 steps horizon v1 (test) | Kendall's τ | 0.69 | 8 |
| Ordinal Consistency | In-the-wild 100 steps horizon v1 (test) | Kendall's τ | 0.61 | 8 |
| Ordinal Consistency | Embodiment 50 steps horizon v1 (test) | Kendall's τ | 0.73 | 8 |
| Ordinal Consistency | Embodiment 100 steps horizon v1 (test) | Kendall's τ | 0.63 | 8 |
| Ordinal Consistency Evaluation | HM3D Habitat (val) | Kendall's τ (20 steps) | 0.82 | 8 |
| Ordinal Consistency Evaluation | Gibson Habitat (val) | Kendall's τ (20 steps) | 0.84 | 8 |