
Universal Successor Features Approximators

About

The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; one of its most common forms is the universal value function approximator (UVFA). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (SFs). Our proposed universal successor features approximators (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA and its generalisation properties, and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate a three-dimensional environment from a first-person perspective.
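To make the GPI-plus-SFs mechanism described above concrete, here is a minimal sketch assuming the usual linear reward structure r(s, a, s') = φ(s, a, s')·w from the successor-features literature. The array shapes, variable names, and random stand-in values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of generalised policy improvement (GPI) with successor
# features (SFs), assuming rewards are linear in features: r = phi . w.
import numpy as np

num_policies = 5  # hypothetical number of previously learned policies
num_actions = 4   # hypothetical action space size
feat_dim = 3      # hypothetical dimensionality of the features phi

# Successor features psi^{pi_i}(s, a) for policies pi_1..pi_n, evaluated
# at one state s. Shape: (n, A, d). Random stand-ins for learned values.
rng = np.random.default_rng(0)
psi = rng.normal(size=(num_policies, num_actions, feat_dim))

def gpi_action(psi_at_s: np.ndarray, w: np.ndarray) -> int:
    """GPI action selection at one state for a new task vector w.

    Because Q^{pi_i}(s, a; w) = psi^{pi_i}(s, a) . w, every old policy is
    evaluated on the new task instantly; GPI then acts greedily over the
    best of them: argmax_a max_i psi^{pi_i}(s, a) . w.
    """
    q = psi_at_s @ w  # shape (n, A): each old policy's value on task w
    return int(q.max(axis=0).argmax())

w_new = np.array([1.0, -0.5, 0.2])  # an unseen task's reward weights
print(gpi_action(psi, w_new))
```

A USFA replaces the finite collection of SF approximators with a single network ψ(s, a, z) conditioned on a policy embedding z, so the maximisation over old policies in GPI becomes a maximisation over a chosen set of candidate embeddings.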

Diana Borsa, André Barreto, John Quan, Daniel Mankowitz, Rémi Munos, Hado van Hasselt, David Silver, Tom Schaul • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline multitask Reinforcement Learning | Franka Kitchen kitchen-mixed | Average Episodic Return | 10 | 23 |
| Offline multitask Reinforcement Learning | Franka Kitchen kitchen-partial | Average Episodic Return | 0.00e+0 | 13 |
| Reinforcement Learning | Hopper (forward) | Average Episodic Return | 746 | 12 |
| Offline multitask Reinforcement Learning | Hopper backward | Average Episodic Return | 261 | 12 |
| Reinforcement Learning | AntMaze large-play D4RL | Average Episodic Return | 250 | 8 |
| Reinforcement Learning | AntMaze medium-diverse D4RL | Average Episodic Return | 394 | 8 |
| Reinforcement Learning | AntMaze medium-play D4RL | Average Episodic Return | 370 | 8 |
| Reinforcement Learning | AntMaze large-diverse D4RL | Average Episodic Return | 215 | 8 |
| Reinforcement Learning | AntMaze umaze D4RL | Average Episodic Return | 462 | 8 |
| Reinforcement Learning | AntMaze umaze-diverse D4RL | Average Episodic Return | 447 | 8 |

Showing 10 of 20 rows.
