
Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning

About

In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang and Li, 2015), and a new way of mixing model-based estimates with importance-sampling-based estimates.
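The two ingredients can be sketched in a few lines. Below is a minimal Python sketch of a per-decision doubly robust estimator plus a simple convex blend of a model-based and an importance-sampling estimate. The trajectory format and the callables `pi_e`, `pi_b`, `q_hat`, and `v_hat` are illustrative assumptions, not the paper's API, and the fixed weight `alpha` stands in for the paper's more careful data-driven mixing scheme.

```python
import numpy as np

def doubly_robust_estimate(trajectories, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    """Per-decision doubly robust (DR) estimate of the evaluation policy's
    expected return, averaged over logged trajectories.

    trajectories: list of episodes, each a list of (state, action, reward).
    pi_e(a, s), pi_b(a, s): action probabilities under the evaluation and
        behavior policies (hypothetical callables).
    q_hat(s, a), v_hat(s): action-value and state-value estimates from an
        approximate model (hypothetical callables).
    """
    returns = []
    for episode in trajectories:
        total, rho = 0.0, 1.0  # rho accumulates the importance weight
        for t, (s, a, r) in enumerate(episode):
            rho_prev = rho
            rho *= pi_e(a, s) / pi_b(a, s)  # per-decision importance ratio
            # The model's value estimates act as a control variate: the
            # importance-weighted residual (r - q_hat) corrects the model.
            total += (gamma ** t) * (rho * (r - q_hat(s, a)) + rho_prev * v_hat(s))
        returns.append(total)
    return float(np.mean(returns))

def blended_estimate(model_estimate, is_estimate, alpha):
    """Convex combination of a purely model-based estimate and an
    importance-sampling estimate; alpha would be chosen to trade the
    model's bias against the sampler's variance (sketch only)."""
    return alpha * model_estimate + (1.0 - alpha) * is_estimate
```

Because the model terms enter only as a control variate, a DR estimate of this form remains unbiased when the behavior policy is known, even if `q_hat` and `v_hat` are inaccurate; a better model then serves to reduce variance rather than to guarantee correctness.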

Philip S. Thomas, Emma Brunskill • 2016

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Offline Policy Selection | Sepsis Simulator simulated (test) | AE: 0.995 | 18 |
