
Generalized Proximal Policy Optimization with Sample Reuse

About

In real-world decision making tasks, it is critical for data-driven reinforcement learning methods to be both stable and sample efficient. On-policy methods typically generate reliable policy improvement throughout training, while off-policy methods make more efficient use of data through sample reuse. In this work, we combine the theoretically supported stability benefits of on-policy algorithms with the sample efficiency of off-policy algorithms. We develop policy improvement guarantees that are suitable for the off-policy setting, and connect these bounds to the clipping mechanism used in Proximal Policy Optimization. This motivates an off-policy version of the popular algorithm that we call Generalized Proximal Policy Optimization with Sample Reuse. We demonstrate both theoretically and empirically that our algorithm delivers improved performance by effectively balancing the competing goals of stability and sample efficiency.
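As a rough illustration of the clipping idea the abstract refers to, the sketch below applies a PPO-style clipped surrogate objective to samples collected by earlier (behavior) policies, weighting them for reuse. This is a hypothetical, simplified rendering, not the paper's exact objective: the function name, the fixed clipping radius `eps`, and the `behavior_weights` argument are all assumptions for illustration.

```python
import numpy as np

def clipped_surrogate_with_reuse(ratios, advantages, behavior_weights, eps=0.2):
    """Sketch of a clipped surrogate objective over reused off-policy samples.

    ratios:           pi_new(a|s) / pi_behavior(a|s) for samples drawn from
                      prior (behavior) policies, i.e. off-policy sample reuse.
    advantages:       advantage estimates for those samples.
    behavior_weights: per-sample weights reflecting which behavior policy
                      generated each sample (hypothetical; the paper derives
                      its own weighting and clipping from policy improvement
                      bounds).
    eps:              clipping radius limiting how far the new policy may
                      move from the behavior policy.
    """
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)
    # Pessimistic (lower-bound) surrogate: take the worse of the unclipped
    # and clipped terms, as in standard PPO.
    surrogate = np.minimum(ratios * advantages, clipped * advantages)
    return float(np.mean(behavior_weights * surrogate))
```

For samples whose ratios stay inside the clipping band, this reduces to a weighted importance-sampled policy gradient objective; clipping only bites when the new policy drifts too far from the policy that generated the data, which is the stability mechanism the abstract describes.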

James Queeney, Ioannis Ch. Paschalidis, Christos G. Cassandras • 2021

Related benchmarks

Task                          | Dataset                     | Result                            | Rank
Continuous control locomotion | MuJoCo Swimmer v3 (train)   | Average Performance: 161          | 2
Continuous control locomotion | MuJoCo Hopper v3 (train)    | Avg Performance (1M Steps): 2.54e+3 | 2
Continuous control locomotion | MuJoCo HalfCheetah v3 (train) | Avg Performance (1M Steps): 2.44e+3 | 2
Continuous control locomotion | MuJoCo Walker2d v3 (train)  | Avg Return (1M Steps): 2.20e+3    | 2
Continuous control locomotion | MuJoCo Ant v3 (train)       | Avg Performance (1M Steps): 762   | 2
Continuous control locomotion | MuJoCo Humanoid v3 (train)  | Avg Performance (1M Steps): 665   | 2
