
No RL, No Simulation: Learning to Navigate without Navigating

About

Most prior methods for learning navigation policies require access to simulation environments, as they need online policy interaction and rely on ground-truth maps for rewards. However, building simulators is expensive (it requires manual effort for each and every scene) and creates challenges in transferring learned policies to real-world robotic platforms, due to the sim-to-real domain gap. In this paper, we pose a simple question: do we really need active interaction, ground-truth maps, or even reinforcement learning (RL) to solve the image-goal navigation task? We propose a self-supervised approach that learns to navigate from only passive videos of roaming. Our approach, No RL, No Simulator (NRNS), is simple and scalable, yet highly effective, and outperforms RL-based formulations by a significant margin. We present NRNS as a strong baseline for future image-based navigation methods that use RL or simulation.
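To make the image-goal navigation task concrete, the sketch below shows a minimal evaluation loop: an agent receives a goal image, acts for a fixed step budget, and the episode counts as a success if it ends within a success radius of the goal. This is an illustrative sketch of the generic task protocol, not NRNS code; the names (`evaluate`, `make_oracle`), the 1.0 m radius, and the step budget are assumptions for the example.

```python
# Hypothetical sketch of the image-goal navigation evaluation protocol.
# Positions are 2-D (x, y) points; a real agent would perceive RGB-D
# observations and infer direction from the goal image alone.

SUCCESS_RADIUS = 1.0  # meters; a common success threshold (assumed here)
STEP_BUDGET = 500     # maximum actions per episode (assumed here)

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def evaluate(policy, episodes):
    """Return the success rate of `policy` over a list of episodes.

    Each episode is (start_pos, goal_pos, goal_image); the policy maps
    (current_pos, goal_image) -> next_pos and runs for STEP_BUDGET steps.
    """
    successes = 0
    for start, goal, goal_image in episodes:
        pos = start
        for _ in range(STEP_BUDGET):
            pos = policy(pos, goal_image)
        if distance(pos, goal) <= SUCCESS_RADIUS:
            successes += 1
    return successes / len(episodes)

def make_oracle(goal, step_size=0.25):
    """A toy 'oracle' that walks straight toward a known goal location.

    It stands in for a learned policy purely so the loop is runnable.
    """
    def policy(pos, goal_image):
        d = max(distance(pos, goal), 1e-9)
        step = min(step_size, d)
        return (pos[0] + step * (goal[0] - pos[0]) / d,
                pos[1] + step * (goal[1] - pos[1]) / d)
    return policy
```

With an oracle per episode, `evaluate(make_oracle((3, 4)), [((0, 0), (3, 4), None)])` returns 1.0, while a policy that never moves scores 0.0 on the same episode.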

Meera Hahn, Devendra Chaplot, Shubham Tulsiani, Mustafa Mukadam, James M. Rehg, Abhinav Gupta • 2021

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image-Goal Navigation | MP3D (test) | Success Rate | 9.3 | 19 |
| Image-Goal Navigation | Gibson Curved trajectories (unseen) | Succ (Easy) | 35.5 | 12 |
| Image-Goal Navigation | HM3D (test) | Success Rate | 6.6 | 10 |
| Image-Goal Navigation | Gibson Straight trajectories (unseen) | Success Rate (Easy) | 68 | 10 |
| Image-Goal Navigation | MP3D Straight Easy | Succ | 64.7 | 7 |
| Image-Goal Navigation | MP3D Medium Straight | Succ | 39.7 | 7 |
| Image-Goal Navigation | MP3D Straight (Hard) | Success Rate | 2.41e+3 | 7 |
| Image-Goal Navigation | MP3D Easy Curved | Succ | 23.7 | 7 |
| Image-Goal Navigation | MP3D Curved (Medium) | Success Rate | 16.2 | 7 |
| Image-Goal Navigation | MP3D Curved (Hard) | Succ | 10 | 7 |

(10 of 17 rows shown)
