
Doubly Robust Off-policy Value Evaluation for Reinforcement Learning

About

We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL in real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.
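To make the idea concrete, here is a minimal sketch of the per-trajectory doubly robust recursion the paper proposes, V_DR = V̂(s) + ρ·(r + γ·V_DR′ − Q̂(s, a)), where ρ is the importance ratio π_e(a|s)/π_b(a|s). The function signatures, the callable interfaces for the policies and Q̂, and the discrete action set are illustrative assumptions, not the authors' code.

```python
def doubly_robust(trajectory, pi_e, pi_b, q_hat, actions, gamma=0.99):
    """Doubly robust value estimate for one logged trajectory.

    trajectory : list of (state, action, reward) tuples, in time order.
    pi_e, pi_b : callables (state, action) -> probability under the
                 evaluation / behavior policy (interfaces assumed here).
    q_hat      : callable (state, action) -> approximate Q-value of pi_e.
    actions    : iterable of possible actions (assumed discrete).
    """
    v_dr = 0.0
    # Walk the trajectory backwards, applying the recursion
    # V_DR = V_hat(s) + rho * (r + gamma * V_DR' - Q_hat(s, a)).
    for state, action, reward in reversed(trajectory):
        rho = pi_e(state, action) / pi_b(state, action)
        # Implied state value: expectation of Q_hat under pi_e.
        v_hat = sum(pi_e(state, a) * q_hat(state, a) for a in actions)
        v_dr = v_hat + rho * (reward + gamma * v_dr - q_hat(state, action))
    return v_dr
```

Note the two limiting cases that motivate the estimator: with Q̂ ≡ 0 it reduces to step-wise importance sampling, and with an accurate Q̂ the control-variate term cancels most of the variance of the importance weights while the estimate stays unbiased.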

Nan Jiang, Lihong Li • 2015

Related benchmarks

Task                       Dataset                          Metric  Result  Rank
Offline Policy Evaluation  D4RL Walker2d Medium-Replay      RMSE    155.3   7
Offline Policy Evaluation  D4RL HalfCheetah Medium-Replay   RMSE    119.5   7
Offline Policy Evaluation  D4RL HalfCheetah Medium          RMSE    145.2   7
Offline Policy Evaluation  D4RL Walker2d Medium             RMSE    232.1   7
Offline Policy Evaluation  D4RL Hopper Medium               RMSE    16.5    7
Offline Policy Evaluation  D4RL Hopper Medium-Replay        RMSE    112.7   7
