
Q-learning with Nearest Neighbors

About

We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system under an arbitrary policy is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}\big(1/\varepsilon^d\big)$, so the sample complexity scales as $\tilde{O}\big(1/\varepsilon^{d+3}\big)$. Indeed, we establish a lower bound showing that a dependence of $\tilde{\Omega}\big(1/\varepsilon^{d+2}\big)$ is necessary.
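To make the idea concrete, here is a minimal sketch of a nearest-neighbor Q-learning update over a single sample path. It is an illustration, not the paper's exact algorithm: Q-values are maintained only at a fixed set of anchor states, and each observed state is mapped to its single nearest anchor (hard 1-NN), a simplification of the nearest neighbor regression used by NNQL. All names (`nnql`, `anchors`, `transitions`) are illustrative.

```python
import numpy as np

def nnql(transitions, anchors, num_actions, gamma=0.9, alpha=0.1):
    """Sketch of nearest-neighbor Q-learning on a continuous state space.

    transitions: a single sample path, a list of (state, action, reward,
                 next_state) tuples with array-like states.
    anchors:     array of shape (k, d); Q-values are stored only at these
                 k anchor states (a simplification of the paper's scheme).
    """
    Q = np.zeros((len(anchors), num_actions))

    def nearest(s):
        # Index of the anchor closest to state s (Euclidean distance).
        return int(np.argmin(np.linalg.norm(anchors - np.asarray(s), axis=1)))

    for s, a, r, s_next in transitions:
        i, j = nearest(s), nearest(s_next)
        target = r + gamma * Q[j].max()        # Bellman backup at next state
        Q[i, a] += alpha * (target - Q[i, a])  # tabular-style update at anchor i
    return Q
```

With a covering sample path (one that visits the neighborhood of every anchor often enough), repeated updates of this form drive the anchor Q-values toward the optimal Q-function, which is the regime the paper's finite-sample analysis quantifies.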

Devavrat Shah, Qiaomin Xie · 2018

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Continuous Control | LunarLanderContinuous offline trajectories v2 | Episodic Cumulative Reward: -3.04 | 35 |
| Continuous Control | BipedalWalker v3 | Episodic Cumulative Reward: -109.7 | 8 |
| Offline Control | Heterogeneous Pendulum Low-Data (100,000 transition steps) | Cumulative Reward: -557.1 | 7 |
| Offline Control | Heterogeneous Pendulum Mid-Data (300,000 transition steps) | Cumulative Reward: -670.5 | 7 |
| Offline Control | Heterogeneous Pendulum Rich-Data (600,000 transition steps) | Cumulative Reward: -512.7 | 7 |
