
Batch Value-function Approximation with Only Realizability

About

We make progress in a long-standing problem of batch reinforcement learning (RL): learning $Q^\star$ from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen and Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection among other extensions and open problems.
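To make the tournament idea concrete, here is a minimal tabular sketch of the procedure the abstract describes: each pairwise comparison builds a state-action partition from the joint values of the two compared functions, projects the Bellman backup onto functions constant on each cell, and measures the resulting error; the tournament then keeps the candidate whose worst pairwise loss is smallest. All names, the dictionary-based Q representation, and the discretization scheme are illustrative assumptions, not the paper's notation or exact estimator.

```python
from collections import defaultdict
from math import sqrt

def bvft_loss(f, g, data, actions, gamma, resolution=0.1):
    """Sketch of the BVFT loss of candidate f in the pairwise comparison with g.

    f, g: dicts mapping (state, action) -> Q-value (tabular stand-ins for
    members of the function class; hypothetical representation).
    data: transitions (s, a, r, s_next) from the exploratory batch dataset.
    """
    # State-action partition: discretize the joint values (f(s,a), g(s,a)).
    cells = defaultdict(list)
    for (s, a, r, sn) in data:
        key = (round(f[s, a] / resolution), round(g[s, a] / resolution))
        cells[key].append((s, a, r, sn))
    # Project the Bellman backup of f onto functions constant on each cell,
    # then measure the distance between f and that projection over the data.
    sq_err, n = 0.0, len(data)
    for cell in cells.values():
        backup = [r + gamma * max(f[sn, b] for b in actions)
                  for (_, _, r, sn) in cell]
        td_mean = (sum(f[s, a] for (s, a, _, _) in cell) - sum(backup)) / len(cell)
        sq_err += len(cell) * td_mean ** 2
    return sqrt(sq_err / n)

def bvft_select(candidates, data, actions, gamma, resolution=0.1):
    """Tournament: keep the candidate whose worst pairwise loss is smallest."""
    worst = [max(bvft_loss(f, g, data, actions, gamma, resolution)
                 for g in candidates)
             for f in candidates]
    return min(range(len(candidates)), key=worst.__getitem__)
```

On a deterministic two-state chain where the true Q-star has zero temporal-difference error on every transition, its worst pairwise loss is zero, so the tournament selects it over any candidate with nonzero projected Bellman error.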

Tengyang Xie, Nan Jiang • 2020

Related benchmarks

Task                           | Dataset                        | Metric              | Result  | Rank
Offline Policy Evaluation     | D4RL Hopper medium             | RMSE                | 16.4    | 7
Offline Policy Evaluation     | D4RL Hopper medium-replay      | RMSE                | 61.2    | 7
Offline Policy Evaluation     | D4RL HalfCheetah medium-replay | RMSE                | 140.2   | 7
Offline Policy Evaluation     | D4RL HalfCheetah medium        | RMSE                | 166.6   | 7
Offline Policy Evaluation     | D4RL Walker2d medium           | RMSE                | 264.1   | 7
Offline Policy Evaluation     | D4RL Walker2d medium-replay    | RMSE                | 221.5   | 7
Offline Reinforcement Learning | D4RL HalfCheetah v2 (random)          | Average True Return | 1.11e+3 | 7
Offline Reinforcement Learning | D4RL HalfCheetah medium-expert v2     | Average True Return | 8.80e+3 | 7
