
Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

About

Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets. However, depending on the quality of the trained agents and the application being considered, it is often desirable to fine-tune such agents via further online interactions. In this paper, we observe that state-action distribution shift may lead to severe bootstrap error during fine-tuning, which destroys the good initial policy obtained via offline RL. To address this issue, we first propose a balanced replay scheme that prioritizes samples encountered online while also encouraging the use of near-on-policy samples from the offline dataset. Furthermore, we leverage multiple Q-functions trained pessimistically offline, thereby preventing overoptimism concerning unfamiliar actions at novel states during the initial training phase. We show that the proposed method improves sample-efficiency and final performance of the fine-tuned robotic agents on various locomotion and manipulation tasks. Our code is available at: https://github.com/shlee94/Off2OnRL.
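The two ideas in the abstract can be illustrated with a minimal sketch. Note the assumptions: the paper's actual balanced replay prioritizes transitions by a learned online-ness density ratio, which is simplified here to a fixed online fraction (`online_frac`), and the min-over-ensemble target is one common way to realize pessimism; both function names are hypothetical, not from the authors' codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_balanced(online_buf, offline_buf, batch_size, online_frac=0.75):
    """Simplified balanced-replay sketch: draw a fixed fraction of each batch
    from the online buffer and the rest from the offline dataset, so fresh
    online experience dominates while offline data is still reused.
    (The paper instead learns per-sample priorities; this is an assumption.)"""
    n_online = int(batch_size * online_frac)
    n_offline = batch_size - n_online
    online_idx = rng.integers(0, len(online_buf), size=n_online)
    offline_idx = rng.integers(0, len(offline_buf), size=n_offline)
    return [online_buf[i] for i in online_idx] + [offline_buf[i] for i in offline_idx]

def pessimistic_value(q_values):
    """Pessimistic ensemble estimate: take the minimum over N Q-function
    outputs (shape [N, batch]), discouraging over-optimism about unfamiliar
    actions at novel states early in fine-tuning."""
    return np.min(q_values, axis=0)
```

For example, with a batch size of 8 and `online_frac=0.75`, each batch mixes 6 online transitions with 2 offline ones, and the value used for bootstrapping is the elementwise minimum across the ensemble's Q-estimates.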

Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin • 2021

Related benchmarks

| Task                | Dataset                          | Result                               | Rank |
|---------------------|----------------------------------|--------------------------------------|------|
| Hopper locomotion   | D4RL hopper medium-replay        | Normalized Score: 103.6              | 56   |
| Walker2d locomotion | D4RL walker2d medium-replay      | --                                   | 53   |
| Locomotion          | D4RL walker2d-medium-expert      | Normalized Score: 118                | 47   |
| Locomotion          | D4RL Halfcheetah medium          | --                                   | 44   |
| Locomotion          | D4RL Walker2d medium             | --                                   | 44   |
| Locomotion          | D4RL halfcheetah-medium-expert   | Normalized Score: 98.33              | 37   |
| Locomotion          | D4RL HalfCheetah Medium-Replay   | Normalized Score: 0.8874             | 33   |
| Locomotion          | D4RL hopper-medium-expert        | Normalized Score (100k steps): 99.47 | 18   |
| Locomotion          | D4RL Hopper medium               | Normalized Score: 90.34              | 14   |
| Locomotion          | D4RL Halfcheetah-expert          | Normalized Score (100k steps): 101   | 3    |

Showing 10 of 12 rows.
