
Deep Reinforcement Learning with Successor Features for Navigation across Similar Environments

About

In this paper we consider the problem of robot navigation in simple maze-like environments, where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping, or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria, we frame the problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods, including classical planning-based navigation.

Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, Wolfram Burgard• 2016
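The core idea behind successor features is that the action-value function factorizes as Q(s, a) = ψ(s, a) · w, where the successor features ψ accumulate expected discounted future state features φ under a policy, and w encodes the task reward. Because ψ captures the environment dynamics and w the goal, adapting to a new reward only requires a new w, not relearning ψ. A minimal tabular sketch of this decomposition (illustrative only, not the paper's deep-network implementation; the chain MDP, learning rate, and variable names are assumptions):

```python
import numpy as np

# Successor-feature sketch: Q(s) = psi(s) . w, learned by TD on a
# 5-state chain 0 -> 1 -> ... -> 4 (rightmost state absorbing),
# under the fixed policy "always move right".
n_states, gamma, lr = 5, 0.9, 0.5
phi = np.eye(n_states)                 # one-hot state features
psi = np.zeros((n_states, n_states))   # successor features under the policy

def step(s):
    return min(s + 1, n_states - 1)

# TD update: psi(s) <- phi(s) + gamma * psi(s'), with no bootstrap
# from the absorbing terminal state.
for _ in range(500):
    s = 0
    for _ in range(n_states):
        s_next = step(s)
        target = phi[s] + gamma * psi[s_next] * (s != n_states - 1)
        psi[s] += lr * (target - psi[s])
        s = s_next

# Transfer: a new task is just a new reward-weight vector w; state
# values follow from a dot product, with no further psi learning.
w_goal_right = phi[4]                  # reward 1 at the rightmost state
values = psi @ w_goal_right
print(np.round(values, 2))
```

With one-hot features, ψ(s) is exactly the discounted expected visitation of each state, so `values` comes out as γ^(distance to goal): [0.9⁴, 0.9³, 0.9², 0.9, 1]. Swapping in a different `w` (e.g., rewarding a different state) re-prices the same ψ instantly, which is the mechanism the paper exploits for fast adaptation across similar environments.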

Related benchmarks

Task                             | Dataset                                    | Metric                        | Result  | Rank
---------------------------------|--------------------------------------------|-------------------------------|---------|-----
Generalization to Unseen Objects | MiniGrid Case 1: Unseen Objects v1 (test)  | Target Generalization Score   | 9       | 6
Combined Generalization          | MiniGrid (Case 2)                          | TGT Score                     | 4.6     | 6
Language Conditioned Transfer    | MiniGrid Reverse Task Case 3 (target)      | Target Success Count (Case 3) | 12      | 6
Combined Generalization          | MiniWorld (Case 2)                         | TGT Score (Case 2)            | 0.00e+0 | 6
Generalization to Unseen Objects | MiniWorld Case 1: Unseen Objects v1 (test) | Target Score                  | 0.00e+0 | 6
Language Conditioned Transfer    | MiniWorld Reverse Task Case 3 (target)     | Target Success Count (Case 3) | 0.00e+0 | 6
