
Synthetic Experience Replay

About

A key theme in the past decade has been that when large neural networks and large datasets combine they can produce remarkable results. In deep reinforcement learning (RL), this paradigm is commonly made possible through experience replay, whereby a dataset of past experiences is used to train a policy or value function. However, unlike in supervised or self-supervised learning, an RL agent has to collect its own data, which is often limited. Thus, it is challenging to reap the benefits of deep learning, and even small neural networks can overfit at the start of training. In this work, we leverage the tremendous recent progress in generative modeling and propose Synthetic Experience Replay (SynthER), a diffusion-based approach to flexibly upsample an agent's collected experience. We show that SynthER is an effective method for training RL agents across offline and online settings, in both proprioceptive and pixel-based environments. In offline settings, we observe drastic improvements when upsampling small offline datasets and see that additional synthetic data also allows us to effectively train larger networks. Furthermore, SynthER enables online agents to train with a much higher update-to-data ratio than before, leading to a significant increase in sample efficiency, without any algorithmic changes. We believe that synthetic training data could open the door to realizing the full potential of deep learning for replay-based RL algorithms from limited data. Finally, we open-source our code at https://github.com/conglu1997/SynthER.
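The upsampling pipeline the abstract describes can be sketched in a few lines. Note the hedge: SynthER trains a denoising diffusion model over flattened transition vectors, but to keep this sketch short and dependency-free, a fitted Gaussian stands in for the generative model; the buffer sizes and dimensions are arbitrary illustrative values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy replay buffer: each row is one transition (state, action, reward,
# next state) flattened into a single vector, as SynthER also flattens
# transitions before generative modeling. Sizes here are illustrative.
real = rng.normal(size=(512, 10))  # 512 real transitions, dim 10

# Stand-in generative model: fit a Gaussian to the real transitions.
# SynthER itself trains a denoising diffusion model at this step; the
# Gaussian is used purely to keep the sketch self-contained.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

def sample_synthetic(n):
    """Draw n synthetic transitions from the fitted generative model."""
    return rng.multivariate_normal(mu, cov, size=n)

# Upsample: mix synthetic transitions into the buffer so the agent can
# take many more gradient updates per real environment step.
synthetic = sample_synthetic(4096)
buffer = np.concatenate([real, synthetic], axis=0)

def sample_batch(batch_size=256):
    """Uniformly sample a training batch from the mixed buffer."""
    idx = rng.integers(0, len(buffer), size=batch_size)
    return buffer[idx]

batch = sample_batch()
print(buffer.shape, batch.shape)  # (4608, 10) (256, 10)
```

Any off-the-shelf RL algorithm can then draw batches from the mixed buffer unchanged, which is how the paper achieves higher update-to-data ratios without algorithmic modifications.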

Cong Lu, Philip J. Ball, Yee Whye Teh, Jack Parker-Holder • 2023

Related benchmarks

Task | Dataset | Result | Rank
Offline Reinforcement Learning | D4RL AntMaze | - | 65
Offline Reinforcement Learning | D4RL Locomotion (medium, medium-replay, medium-expert, v2) | Score (HalfCheetah, Medium): 63.57 | 34
Offline Reinforcement Learning | OGBench Manipulation Play | Scene-v0 Score: 92 | 8
Offline Reinforcement Learning | VD4RL Cheetah-run pixel-based (medium-replay) | Normalized Score: 44.8 | 8
Offline Reinforcement Learning | OGBench Maze Stitch | ant-large-v0: 31.1 | 8
Navigation | D4RL Maze Tasks v2 (umaze, medium, large, diverse, play) | Maze2d UMaze Score: 39 | 4
Offline Reinforcement Learning | VD4RL Cheetah-run pixel-based (medium) | Normalized Score: 53.3 | 3
Offline Reinforcement Learning | VD4RL Walker-walk pixel-based (medium) | Normalized Score: 40.1 | 3
Offline Reinforcement Learning | VD4RL Cheetah-run pixel-based (medium-expert) | Normalized Score: 50.6 | 3
Offline Reinforcement Learning | VD4RL Cheetah-run pixel-based (expert) | Normalized Score: 34.5 | 3

(Showing 10 of 13 rows.)
