
Gradient Informed Proximal Policy Optimization

About

We introduce a novel policy learning method that integrates analytical gradients from differentiable environments with the Proximal Policy Optimization (PPO) algorithm. To incorporate analytical gradients into the PPO framework, we introduce the concept of an α-policy that stands as a locally superior policy. By adaptively modifying the α value, we can effectively manage the influence of analytical policy gradients during learning. To this end, we propose metrics for assessing the variance and bias of analytical gradients, reducing dependence on these gradients when high variance or bias is detected. Our approach outperforms baseline algorithms in various scenarios, such as function optimization, physics simulations, and traffic control environments. Our code is available online: https://github.com/SonSang/gippo.
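The core idea above, down-weighting analytical gradients when they look high-variance, can be illustrated with a minimal sketch. This is not the paper's implementation: the `adaptive_alpha` heuristic and `blended_update` helper below are hypothetical names, and the paper's actual variance/bias metrics and α-policy construction differ (see the linked repository for the real method).

```python
import numpy as np

def adaptive_alpha(analytic_grads, max_alpha=1.0, var_scale=1.0):
    """Shrink alpha when per-sample analytical gradients disagree.

    analytic_grads: (batch, dim) array of analytical policy gradients,
    one row per environment sample. High variance across the batch
    suggests the gradients are unreliable, so alpha is reduced.
    Illustrative heuristic only, not the paper's metric.
    """
    var = float(np.mean(np.var(analytic_grads, axis=0)))
    return max_alpha / (1.0 + var_scale * var)

def blended_update(analytic_grads, ppo_grad, alpha):
    """Interpolate between the mean analytical gradient and the PPO gradient."""
    return alpha * analytic_grads.mean(axis=0) + (1.0 - alpha) * ppo_grad

# When all analytical gradients agree, alpha stays near its maximum;
# noisy gradients push alpha toward zero, so the PPO gradient dominates.
consistent = np.ones((8, 3))
noisy = np.random.default_rng(0).normal(scale=10.0, size=(8, 3))
print(adaptive_alpha(consistent), adaptive_alpha(noisy))
```

The interpolation weight plays the role the abstract assigns to α: it controls how much the analytical policy gradients influence each update.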

Sanghyun Son, Laura Yu Zheng, Ryan Sullivan, Yi-Ling Qiao, Ming C. Lin • 2023

Related benchmarks

Task                   Dataset    Metric          Result     Rank
Function Optimization  Ackley     Avg Max Reward  -5.00e-4   12
Function Optimization  Dejong     Avg Max Reward  -3.84e-10  5
Function Optimization  Ackley 64  Avg Max Reward  -0.0036    5
Function Optimization  Dejong 64  Avg Max Reward  -1.04e-6   5
