Imitation-Projected Programmatic Reinforcement Learning

About

We study the problem of programmatic reinforcement learning, in which policies are represented as short programs in a symbolic language. Programmatic policies can be more interpretable, generalizable, and amenable to formal verification than neural policies; however, designing rigorous learning approaches for such policies remains a challenge. Our approach to this challenge -- a meta-algorithm called PROPEL -- is based on three insights. First, we view our learning task as optimization in policy space, modulo the constraint that the desired policy has a programmatic representation, and solve this optimization problem using a form of mirror descent that takes a gradient step into the unconstrained policy space and then projects back onto the constrained space. Second, we view the unconstrained policy space as mixing neural and programmatic representations, which enables employing state-of-the-art deep policy gradient approaches. Third, we cast the projection step as program synthesis via imitation learning, and exploit contemporary combinatorial methods for this task. We present theoretical convergence results for PROPEL and empirically evaluate the approach in three continuous control domains. The experiments show that PROPEL can significantly outperform state-of-the-art approaches for learning programmatic policies.
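To make the loop concrete, here is a minimal, self-contained Python sketch of the update-then-project structure described in the abstract. Everything in it is a stand-in assumption rather than the paper's implementation: the policy is a plain weight vector, the toy return J(w) = -||w - W_STAR||^2 replaces a deep policy-gradient estimate, and the programmatic class is approximated by k-sparse linear policies, so "program synthesis via imitation" collapses to keeping the k largest-magnitude weights.

```python
import numpy as np

W_STAR = np.array([1.0, -2.0, 0.0, 0.5])  # toy optimum (assumption)

def policy_gradient_step(w, lr=0.1):
    """One unconstrained policy-update step.

    The toy return J(w) = -||w - W_STAR||^2 has a closed-form gradient;
    in PROPEL this step would instead be a deep policy-gradient update
    in the mixed neural/programmatic space.
    """
    grad = -2.0 * (w - W_STAR)   # exact gradient of the toy return
    return w + lr * grad         # gradient ascent in policy space

def project_to_programs(w, k=2):
    """Projection onto a toy 'programmatic' class.

    The class here is linear policies with at most k nonzero weights,
    so projecting just keeps the k largest-magnitude entries. This is
    a crude surrogate for program synthesis via imitation learning.
    """
    h = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-k:]  # indices of the k largest weights
    h[top] = w[top]
    return h

# PROPEL-style mirror-descent loop: step into the unconstrained
# policy space, then project back onto the programmatic class.
h = np.zeros(4)
for _ in range(50):
    f = policy_gradient_step(h)    # unconstrained gradient step
    h = project_to_programs(f)     # projection back onto programs
print("programmatic policy weights:", h)
```

The sketch preserves only the shape of the mirror-descent loop; in the paper the unconstrained space mixes neural and programmatic representations, and the projection is carried out by imitation-learning-based program synthesis, which is where the convergence analysis does its work.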

Abhinav Verma, Hoang M. Le, Yisong Yue, Swarat Chaudhuri • 2019

Related benchmarks

Task               Dataset            Lap Time (s)   Rank
Autonomous Racing  TORCS E-ROAD       79.39          6
Autonomous Racing  TORCS G-Track      78.33          6
Autonomous Racing  TORCS (AALBORG)    109.8          6
