NaVILA: Legged Robot Vision-Language-Action Model for Navigation
About
This paper tackles Vision-and-Language Navigation with legged robots, which not only gives humans a flexible way to issue commands but also lets the robot navigate more challenging, cluttered scenes. Translating human language instructions all the way down to low-level leg-joint actions, however, is non-trivial. We propose NaVILA, a two-level framework that unifies a Vision-Language-Action model (VLA) with locomotion skills. Instead of directly predicting low-level actions with the VLA, NaVILA first generates mid-level actions with spatial information expressed in language (e.g., "moving forward 75cm"), which serves as input to a visual locomotion RL policy for execution. NaVILA substantially outperforms previous approaches on existing benchmarks. The same advantages are demonstrated in our newly developed IsaacLab benchmarks, featuring more realistic scenes and low-level control, and in real-world robot experiments. More results are available at https://navila-bot.github.io/
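The two-level interface can be made concrete with a small sketch. The snippet below is a minimal illustration, not the paper's implementation: `MidLevelAction`, `parse_vla_output`, the regex action vocabulary, and the conversion to velocity targets are all hypothetical names and choices invented here, standing in for whatever command format the actual VLA and RL locomotion policy exchange.

```python
import re
from dataclasses import dataclass

@dataclass
class MidLevelAction:
    """Structured form of a language action such as "moving forward 75cm"."""
    kind: str         # "forward", "turn_left", "turn_right", or "stop"
    magnitude: float  # metres for forward motion, degrees for turns

# Hypothetical patterns; the real NaVILA action vocabulary may differ.
_PATTERNS = [
    (re.compile(r"moving forward (\d+(?:\.\d+)?)\s*cm"), "forward", 0.01),
    (re.compile(r"turning left (\d+(?:\.\d+)?)\s*degrees?"), "turn_left", 1.0),
    (re.compile(r"turning right (\d+(?:\.\d+)?)\s*degrees?"), "turn_right", 1.0),
    (re.compile(r"stop"), "stop", 0.0),
]

def parse_vla_output(text: str) -> MidLevelAction:
    """Translate the VLA's free-form language action into a structured command."""
    for pattern, kind, scale in _PATTERNS:
        match = pattern.search(text.lower())
        if match:
            value = float(match.group(1)) if match.groups() else 0.0
            return MidLevelAction(kind, value * scale)
    return MidLevelAction("stop", 0.0)  # fall back to a safe no-op

def to_velocity_command(action: MidLevelAction, horizon_s: float = 1.0):
    """Convert a mid-level command into (forward m/s, yaw deg/s) targets
    that a low-level RL locomotion policy could track over horizon_s seconds."""
    if action.kind == "forward":
        return (action.magnitude / horizon_s, 0.0)
    if action.kind in ("turn_left", "turn_right"):
        sign = 1.0 if action.kind == "turn_left" else -1.0
        return (0.0, sign * action.magnitude / horizon_s)
    return (0.0, 0.0)

if __name__ == "__main__":
    cmd = parse_vla_output("The robot should be moving forward 75cm.")
    print(cmd)                      # MidLevelAction(kind='forward', magnitude=0.75)
    print(to_velocity_command(cmd)) # (0.75, 0.0)
```

The key design point this sketch captures is that the VLA and the locomotion policy communicate through language-expressed, spatially grounded commands rather than raw joint targets, so each level can be trained and swapped independently.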
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Vision-and-Language Navigation | R2R-CE (val-unseen) | Success Rate (SR) | 54 | 266 |
| Vision-and-Language Navigation | R2R (val-unseen) | Success Rate (SR) | 37 | 260 |
| Vision-and-Language Navigation | RxR-CE (val-unseen) | Success Rate (SR) | 49.3 | 172 |
| Vision-and-Language Navigation | R2R-CE (val-seen) | Success Rate (SR) | 58 | 49 |
| Embodied Navigation | R2R-CE | Navigation Error (NE) | 5.22 | 19 |
| Vision-and-Language Navigation | R2R-CE v1.0 (val-unseen) | Navigation Error (NE) | 5.22 | 19 |
| 3D Question Answering | ScanQA v1.0 (val) | BLEU-4 | 15.2 | 13 |
| Robot Navigation | DynaNav | Navigation Error (NE) | 17.2 | 9 |
| Robot Navigation | Real-world Navigation Tasks v1 (test) | Success Rate (SR) | 35 | 6 |
| Vision-and-Language Navigation | EgoActor Virtual Benchmark VLNCE unseen (test) | Success Rate (SR, < 0.5 m) | 8.3 | 5 |
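For reference, the two metrics that dominate the table are conventionally defined in VLN-CE work as follows: Navigation Error (NE) is the geodesic distance from the agent's stopping position to the goal, and Success Rate (SR) is the fraction of episodes whose NE falls below a threshold (3 m in standard R2R-CE; the last row uses 0.5 m). A minimal sketch, with Euclidean distance standing in for the geodesic distance only to keep it self-contained:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) position in metres

def navigation_error(stop: Point, goal: Point) -> float:
    """NE: distance from where the agent stopped to the goal.
    Benchmarks use geodesic distance on the scene's navigation mesh;
    Euclidean distance is an assumption made here for simplicity."""
    return math.dist(stop, goal)

def success_rate(episodes: List[Tuple[Point, Point]], threshold: float = 3.0) -> float:
    """SR (in %): fraction of episodes with NE below the threshold.
    3.0 m is the standard R2R-CE cutoff; some evaluations use 0.5 m."""
    successes = sum(navigation_error(stop, goal) < threshold for stop, goal in episodes)
    return 100.0 * successes / len(episodes)

if __name__ == "__main__":
    eps = [((0.0, 0.0), (1.0, 2.0)), ((0.0, 0.0), (4.0, 4.0))]
    print(success_rate(eps))  # 50.0: only the first episode stops within 3 m
```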