
Off-Policy Evaluation via the Regularized Lagrangian

About

The recently proposed distribution correction estimation (DICE) family of estimators has advanced the state of the art in off-policy evaluation from behavior-agnostic data. While these estimators all perform some form of stationary distribution correction, they arise from different derivations and objective functions. In this paper, we unify these estimators as regularized Lagrangians of the same linear program. The unification allows us to expand the space of DICE estimators to new alternatives that demonstrate improved performance. More importantly, by analyzing the expanded space of estimators both mathematically and empirically we find that dual solutions offer greater flexibility in navigating the tradeoff between optimization stability and estimation bias, and generally provide superior estimates in practice.
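As a sketch of the unification described above, using standard off-policy-evaluation notation that is assumed here rather than taken from this page ($\gamma$ the discount, $\mu_0$ the initial state distribution, $\pi$ the target policy, $d^D$ the data distribution, $\zeta = d^\pi / d^D$ the stationary distribution correction): the policy value can be written as a linear program over state-action occupancies, whose Lagrangian recovers the familiar DICE form.

```latex
% Primal linear program over the occupancy d (value of policy pi):
\max_{d \ge 0} \; \sum_{s,a} d(s,a)\, r(s,a)
\quad \text{s.t.} \quad
d(s,a) = (1-\gamma)\,\mu_0(s)\,\pi(a|s)
       + \gamma \sum_{s',a'} P(s|s',a')\,\pi(a|s)\, d(s',a').

% Introducing multipliers Q(s,a) for the flow constraints and
% reparameterizing d(s,a) = zeta(s,a) d^D(s,a) gives the Lagrangian:
L(\zeta, Q) = (1-\gamma)\,\mathbb{E}_{s \sim \mu_0,\, a \sim \pi}[Q(s,a)]
  + \mathbb{E}_{(s,a) \sim d^D}\big[\zeta(s,a)\big(r(s,a)
  + \gamma\,\mathbb{E}_{s' \sim P,\, a' \sim \pi}[Q(s',a')] - Q(s,a)\big)\big].
```

Different DICE estimators then correspond to adding different convex regularizers to this saddle-point objective and solving for either the primal ($\zeta$) or dual ($Q$) variable.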

Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, Dale Schuurmans • 2020

Related benchmarks

Task                        Dataset                          RMSE      Rank
Offline Policy Evaluation   D4RL HalfCheetah medium-replay   567.9     7
Offline Policy Evaluation   D4RL Hopper medium-replay        1.57e+3   7
Offline Policy Evaluation   D4RL Hopper medium               368.6     7
Offline Policy Evaluation   D4RL HalfCheetah medium          3.45e+3   7
Offline Policy Evaluation   D4RL Walker2d medium-replay      2.12e+3   7
Offline Policy Evaluation   D4RL Walker2d medium             1.76e+3   7
