
Offline Behavior Distillation

About

Massive reinforcement learning (RL) datasets are typically collected to train policies offline, without further environment interaction, but the sheer data volume makes training inefficient. To tackle this issue, we formulate offline behavior distillation (OBD), which synthesizes a small set of expert behavioral data from sub-optimal RL data, enabling rapid policy learning. We propose two naive OBD objectives, DBC and PBC, which measure distillation performance via the decision difference between policies trained on the distilled data and either the offline data or a near-expert policy. Because the underlying bi-level optimization is intractable, the OBD objective is difficult to drive to small values, which weakens PBC: its distillation performance guarantee carries a quadratic discount complexity $\mathcal{O}(1/(1-\gamma)^2)$. We theoretically establish the equivalence between policy performance and the action-value weighted decision difference, and introduce action-value weighted PBC (Av-PBC) as a more effective OBD objective. By optimizing the weighted decision difference, Av-PBC achieves a superior distillation guarantee with linear discount complexity $\mathcal{O}(1/(1-\gamma))$. Extensive experiments on multiple D4RL datasets show that Av-PBC delivers significant gains in OBD performance, faster distillation convergence, and robust cross-architecture/optimizer generalization.
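The contrast between PBC and Av-PBC can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the function names, the use of a squared decision difference, and the normalized Q-value weighting are assumptions made for exposition.

```python
import numpy as np

def pbc_loss(policy_actions, expert_actions):
    # Plain PBC: average decision difference between the policy trained
    # on the distilled data and the near-expert policy, uniform over states.
    return np.mean((policy_actions - expert_actions) ** 2)

def av_pbc_loss(policy_actions, expert_actions, q_values):
    # Av-PBC: the same decision difference, but each state's term is
    # weighted by its action value Q(s, a), so states where decisions
    # matter more for return dominate the objective.
    weights = q_values / q_values.sum()                      # normalize weights
    per_state = np.sum((policy_actions - expert_actions) ** 2, axis=-1)
    return np.sum(weights * per_state)
```

With equal Q-values the two losses coincide; as Q-values diverge, Av-PBC concentrates the objective on high-value states, which is the mechanism behind its tighter $\mathcal{O}(1/(1-\gamma))$ guarantee.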

Shiye Lei, Sen Zhang, Dacheng Tao · 2024

Related benchmarks

Task                            Dataset                          Metric              Result  Rank
Offline Reinforcement Learning  D4RL (various)                   HalfCheetah-Medium    36.9    16
Offline Behavior Distillation   D4RL halfcheetah-medium          Normalized Return     36.9     8
Offline Behavior Distillation   D4RL hopper-medium-expert        Normalized Return     38.7     8
Offline Behavior Distillation   D4RL walker2d-medium             Normalized Return     39.5     8
Offline Behavior Distillation   D4RL walker2d-medium-expert      Normalized Return     42.1     8
Offline Behavior Distillation   D4RL halfcheetah-medium-expert   Normalized Return     22.0     8
Offline Behavior Distillation   D4RL hopper-medium               Normalized Return     32.5     8

Other info

Code
