Solving Continuous Control with Episodic Memory
About
Episodic memory lets reinforcement learning algorithms remember and exploit promising experience from the past to improve agent performance. Previous work on memory mechanisms has shown that episodic-based data structures improve sample efficiency on discrete-action problems. Applying episodic memory to continuous control with a large action space, however, is not trivial. Our study aims to answer the question: can episodic memory be used to improve an agent's performance in continuous control? Our proposed algorithm combines episodic memory with the Actor-Critic architecture by modifying the critic's objective. We further improve performance by introducing episodic-based replay buffer prioritization. We evaluate our algorithm on OpenAI Gym domains and show greater sample efficiency compared with state-of-the-art model-free off-policy algorithms.
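To make the two modifications concrete, below is a minimal PyTorch-style sketch, not the authors' implementation. It assumes the episodic target enters the critic loss as an auxiliary MSE term and that replay priority grows with how far the stored episodic return exceeds the current Q estimate; the names `episodic_return`, `alpha`, and `eps`, and the exact form of both terms, are illustrative assumptions, since the abstract only states that the critic's objective is modified and that prioritization is episodic-based.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """Q-network mapping (state, action) pairs to scalar value estimates."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def critic_loss(critic, state, action, td_target, episodic_return, alpha=0.1):
    """TD loss plus an auxiliary term toward the episodic-memory return.

    `td_target` is the usual one-step bootstrap target (r + gamma * Q');
    `episodic_return` is the discounted return observed from (state, action)
    in a past episode, looked up from episodic memory. `alpha` (assumed)
    trades off bootstrapping against the memory signal.
    """
    q = critic(state, action)
    td_loss = F.mse_loss(q, td_target)
    mem_loss = F.mse_loss(q, episodic_return)  # pull Q toward stored returns
    return td_loss + alpha * mem_loss

def episodic_priority(q, episodic_return, eps=1e-3):
    """Hypothetical episodic-based replay priority: transitions whose stored
    return exceeds the current Q estimate get sampled more often."""
    advantage = (episodic_return - q).clamp(min=0.0)
    return advantage.squeeze(-1) + eps  # eps keeps every priority positive

# Usage with dummy data (batch of 32, 17-dim states, 6-dim actions):
critic = Critic(state_dim=17, action_dim=6)
s, a = torch.randn(32, 17), torch.randn(32, 6)
td_target, episodic_return = torch.randn(32, 1), torch.randn(32, 1)
loss = critic_loss(critic, s, a, td_target, episodic_return)
loss.backward()
```

The sketch treats `episodic_return` as a given input; a full implementation would also need the memory store that produces it (e.g., a table keyed by state), which is deliberately elided here.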
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continuous Control | MuJoCo Ant | Average Reward | 833 | 26 |
| Continuous Control | MuJoCo Walker2d | Max Return | 4.54e+3 | 13 |
| Continuous Control | MuJoCo Humanoid | Average Reward | 3.81e+3 | 13 |
| Continuous Control | MuJoCo Hopper | Maximum Average Return | 2.78e+3 | 13 |