
Learning From Failures: Efficient Reinforcement Learning Control with Episodic Memory

About

Reinforcement learning has achieved remarkable success in robot learning. However, under challenging exploration and contact-rich dynamics, early-stage training is frequently dominated by premature terminations such as collisions and falls. As a result, learning is overwhelmed by short-horizon, low-return trajectories, which hinder convergence and limit long-horizon exploration. To alleviate this issue, we propose a technique called Failure Episodic Memory Alert (FEMA). FEMA explicitly stores short-horizon failure experiences in an episodic memory module. During interaction, it retrieves similar failure experiences and prevents the robot from repeatedly relapsing into unstable states, guiding the policy toward long-horizon trajectories with greater long-term value. FEMA can be easily combined with model-free reinforcement learning algorithms, and yields a substantial sample-efficiency improvement of 33.11% on MuJoCo tasks across several classical RL algorithms. Furthermore, integrating FEMA into a parallelized PPO training pipeline demonstrates its effectiveness on a real-world bipedal robot task.
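The core idea — store states from failed episodes and alert the agent when it approaches a remembered failure — can be sketched as follows. This is an illustrative toy implementation, not the paper's actual method: the memory capacity, FIFO eviction, Euclidean nearest-neighbor retrieval, the `radius` threshold, and the reward-shaping penalty are all assumptions introduced here for clarity.

```python
import numpy as np

class FailureMemory:
    """Toy episodic memory of failure states (hypothetical sketch;
    FEMA's actual retrieval and alert mechanism may differ)."""

    def __init__(self, capacity=1000, radius=0.5):
        self.capacity = capacity  # max number of stored failure states
        self.radius = radius      # similarity threshold for raising an alert
        self.states = []          # stored failure states

    def store(self, state):
        """Record a state that preceded a premature termination (fall, collision)."""
        if len(self.states) >= self.capacity:
            self.states.pop(0)    # simple FIFO eviction when memory is full
        self.states.append(np.asarray(state, dtype=float))

    def alert(self, state):
        """Return True if the current state lies near any remembered failure."""
        if not self.states:
            return False
        dists = np.linalg.norm(np.stack(self.states) - np.asarray(state), axis=1)
        return bool(dists.min() < self.radius)

    def shaped_reward(self, state, reward, penalty=1.0):
        """Subtract a penalty when close to a remembered failure state,
        nudging the policy away from states that previously ended episodes."""
        return reward - penalty if self.alert(state) else reward
```

In a training loop, one would call `store` on the final state of each prematurely terminated episode and `shaped_reward` at every step; the alert signal steers the policy away from short-horizon failures toward longer trajectories.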

Chenyang Miao • 2026

Related benchmarks

Task               | Dataset          | Metric                 | Result  | Rank
Continuous Control | MuJoCo Ant       | Average Reward         | 5.08e+3 | 26
Continuous Control | MuJoCo Humanoid  | Average Reward         | 6.97e+3 | 13
Continuous Control | MuJoCo Walker2d  | Max Return             | 5.03e+3 | 13
Continuous Control | MuJoCo Hopper    | Maximum Average Return | 3.48e+3 | 13
