
DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning

About

Offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. However, such a formulation is inevitably offline-data-hungry, and in practice collecting a large offline dataset for one specific task in one specific environment is costly and laborious. In this paper, we therefore 1) formulate offline dynamics adaptation, which uses (source) offline data collected under different dynamics to relax the requirement for extensive (target) offline data, 2) characterize the dynamics shift problem, on which prior offline methods do not scale well, and 3) derive a simple dynamics-aware reward augmentation (DARA) framework for both model-free and model-based offline settings. Specifically, DARA emphasizes learning from source transition pairs that are adaptive to the target environment, and mitigates the offline dynamics shift by characterizing state-action-next-state pairs instead of the state-action distributions considered by prior offline RL methods. The experimental evaluation demonstrates that DARA, by augmenting rewards in the source offline dataset, can acquire an adaptive policy for the target environment while significantly reducing the amount of target offline data required. With only modest amounts of target offline data, our method consistently outperforms prior offline RL methods in both simulated and real-world tasks.
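The core mechanism can be sketched in a few lines. In this minimal sketch (not the authors' implementation), the dynamics gap for a transition (s, a, s') is estimated from two binary domain classifiers in the DARC style (one on full transitions, one on state-action pairs), and the source reward is relabeled with a penalty weight eta; the exact sign convention and classifier parameterization here are assumptions for illustration.

```python
import numpy as np

def dynamics_gap(p_tar_sas, p_tar_sa):
    """Estimate the dynamics gap Delta r(s, a, s') from two binary
    domain classifiers (assumed DARC-style parameterization):
      p_tar_sas: P(target | s, a, s') from a classifier on full transitions
      p_tar_sa:  P(target | s, a)    from a classifier on state-action pairs
    By Bayes' rule the difference of log-odds recovers
    log p_target(s'|s,a) - log p_source(s'|s,a).
    """
    p_tar_sas = np.asarray(p_tar_sas, dtype=float)
    p_tar_sa = np.asarray(p_tar_sa, dtype=float)
    return (np.log(p_tar_sas) - np.log(1.0 - p_tar_sas)
            - np.log(p_tar_sa) + np.log(1.0 - p_tar_sa))

def augment_rewards(rewards, gap, eta=1.0):
    """Relabel source-domain rewards with the estimated dynamics gap:
    source transitions that are unlikely under the target dynamics
    (negative gap) are penalized; adaptive ones are left near-unchanged."""
    return np.asarray(rewards, dtype=float) + eta * np.asarray(gap)
```

Any off-the-shelf offline RL algorithm can then be trained on the relabeled source dataset together with the small target dataset; when the two dynamics agree on a transition, the classifiers are uninformative (probabilities near 0.5) and the gap vanishes, leaving the source reward intact.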

Jinxin Liu, Hongyin Zhang, Donglin Wang • 2022

Related benchmarks

All rows below are for the Offline Reinforcement Learning task.

Dataset                              Metric             Score   Rank
D4RL halfcheetah-medium-expert       Normalized Score   59.2    117
D4RL hopper-medium-expert            Normalized Score   38.2    115
D4RL Medium-Replay Hopper            Normalized Score   53.5    72
D4RL Walker2d Medium v2              Normalized Return  43.4    67
D4RL Medium HalfCheetah              Normalized Score   45.6    59
D4RL Medium-Replay HalfCheetah       Normalized Score   28.9    59
D4RL halfcheetah v2 (medium-replay)  Normalized Score   21.6    58
D4RL Medium Walker2d                 Normalized Score   25.0    58
D4RL walker2d-expert v2              Normalized Score   85.5    56
D4RL hopper-expert v2                Normalized Score   59.1    56

Showing 10 of 97 rows.
