Batch Value-function Approximation with Only Realizability
About
We make progress on a long-standing problem in batch reinforcement learning (RL): learning $Q^\star$ from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. To date, all existing algorithms have demanded function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen and Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection, among other extensions and open problems.
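The tournament idea can be illustrated with a minimal sketch. The code below is an assumed, simplified rendering of the pairwise-comparison scheme described above, not the paper's exact algorithm: for each pair of candidate Q-functions $(f, g)$, it discretizes the state-action space by the joint level sets of $f$ and $g$, projects $f$'s Bellman targets onto that partition, and scores $f$ by its worst-case loss over all opponents $g$. The `resolution` parameter and the tuple-based dataset format are illustrative choices.

```python
import numpy as np

def pairwise_loss(f, g, dataset, gamma, resolution):
    """Loss of candidate f against opponent g on a partition induced by (f, g).

    dataset: list of (s, a, r, s_next, available_actions) transitions.
    """
    fv = np.array([f(s, a) for (s, a, r, s2, acts) in dataset])
    gv = np.array([g(s, a) for (s, a, r, s2, acts) in dataset])
    # Greedy Bellman targets under f itself.
    targets = np.array([r + gamma * max(f(s2, a2) for a2 in acts)
                        for (s, a, r, s2, acts) in dataset])
    # Partition cells: regions where both f and g are roughly constant.
    keys = list(zip(np.floor(fv / resolution), np.floor(gv / resolution)))
    # Project the targets onto the partition (cell-wise mean).
    proj = np.empty_like(targets)
    for key in set(keys):
        idx = [k for k, kk in enumerate(keys) if kk == key]
        proj[idx] = targets[idx].mean()
    return float(np.sqrt(np.mean((fv - proj) ** 2)))

def bvft_select(candidates, dataset, gamma=0.99, resolution=0.5):
    """Return the index of the candidate with the smallest worst-case
    pairwise loss (the tournament winner)."""
    n = len(candidates)
    losses = np.zeros((n, n))
    for i, f in enumerate(candidates):
        for j, g in enumerate(candidates):
            losses[i, j] = pairwise_loss(f, g, dataset, gamma, resolution)
    return int(np.argmin(losses.max(axis=1)))
```

The key design point is that the partition depends on both functions being compared, so a candidate close to $Q^\star$ incurs a small projected Bellman error against every opponent, while a poor candidate is exposed by at least one pairing.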
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Policy Evaluation | D4RL Hopper (medium) | RMSE | 16.4 | 7 |
| Offline Policy Evaluation | D4RL Hopper (medium-replay) | RMSE | 61.2 | 7 |
| Offline Policy Evaluation | D4RL HalfCheetah (medium) | RMSE | 166.6 | 7 |
| Offline Policy Evaluation | D4RL HalfCheetah (medium-replay) | RMSE | 140.2 | 7 |
| Offline Policy Evaluation | D4RL Walker2d (medium) | RMSE | 264.1 | 7 |
| Offline Policy Evaluation | D4RL Walker2d (medium-replay) | RMSE | 221.5 | 7 |
| Offline Reinforcement Learning | D4RL HalfCheetah-v2 (random) | Average True Return | 1.11e+3 | 7 |
| Offline Reinforcement Learning | D4RL HalfCheetah-v2 (medium-expert) | Average True Return | 8.80e+3 | 7 |