DESPOT: Online POMDP Planning with Regularization
About
The partially observable Markov decision process (POMDP) provides a principled general framework for planning under uncertainty, but solving POMDPs optimally is computationally intractable, due to the "curse of dimensionality" and the "curse of history". To overcome these challenges, we introduce the Determinized Sparse Partially Observable Tree (DESPOT), a sparse approximation of the standard belief tree, for online planning under uncertainty. A DESPOT focuses online planning on a set of randomly sampled scenarios and compactly captures the "execution" of all policies under these scenarios.

We show that the best policy obtained from a DESPOT is near-optimal, with a regret bound that depends on the representation size of the optimal policy. Leveraging this result, we give an anytime online planning algorithm, which searches a DESPOT for a policy that optimizes a regularized objective function. Regularization balances the estimated value of a policy under the sampled scenarios against the policy size, thus avoiding overfitting.

The algorithm demonstrates strong experimental results, compared with some of the best online POMDP algorithms available. It has also been incorporated into an autonomous driving system for real-time vehicle control. The source code for the algorithm is available online.
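The two key ideas above — determinized scenarios and a size-penalized objective — can be sketched in a few lines. The toy two-state problem, the function names, and the choice of policy size as the number of observation-to-action rules are all illustrative assumptions, not the authors' implementation:

```python
import random

# Illustrative sketch of DESPOT's two core ideas, NOT the released source code.
# A "scenario" determinizes the problem: it fixes the hidden start state and a
# stream of random numbers, so every policy executes deterministically under it.

def sample_scenarios(k, seed=42):
    """Sample k scenarios: (hidden state, stream of uniform random numbers)."""
    rng = random.Random(seed)
    return [(rng.choice([0, 1]), [rng.random() for _ in range(10)])
            for _ in range(k)]

def simulate(policy, scenario, gamma=0.95):
    """Deterministic rollout of a policy (a dict: observation -> action)
    under one scenario of a toy two-state guessing problem."""
    state, stream = scenario
    total, discount, obs = 0.0, 1.0, "init"
    for u in stream:
        action = policy.get(obs, "listen")
        if action == "open":  # terminal guess: reward if the hidden state is 1
            total += discount * (10.0 if state == 1 else -10.0)
            break
        # noisy observation of the hidden state (85% accurate)
        obs = "hint1" if (u < 0.85) == (state == 1) else "hint0"
        total -= discount * 1.0  # listening cost
        discount *= gamma
    return total

def regularized_value(policy, scenarios, lam=0.1):
    """Regularized objective: empirical value averaged over the sampled
    scenarios, minus lambda times the policy size (here, number of rules)."""
    mean = sum(simulate(policy, s) for s in scenarios) / len(scenarios)
    return mean - lam * len(policy)
```

In the actual algorithm the search compares partial policies inside the DESPOT by this regularized score, so a large policy that fits the K sampled scenarios well but generalizes poorly loses to a smaller one of similar empirical value.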
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| POMDP Planning | RockSample (15, 15) | Expected Return | 18.83 | 19 |
| Multi-Agent Rock Sample (POMDP) | MARS (20, 20) | Average Discounted Reward | 27.9 | 18 |
| Robot Navigation | Navigation | Average Total Discounted Reward | 8.7 | 16 |
| POMDP Planning | LightDark 10 | Return | 0.73 | 15 |
| POMDP Planning | RockSample (20, 20) | Expected Return | 0.00e+0 | 10 |
| POMDP Planning | Matterport3D Object Search (MOS) (5, 3) | Return | 6.4 | 6 |
| POMDP Planning | Rearrange (5, 2) | Return | 3.4 | 6 |
| POMDP Planning | RockSample (25, 25) | Return | 0.00e+0 | 6 |
| POMDP Planning | MOS (6, 4) | Return | 4.8 | 6 |
| POMDP Planning | MOS (7, 5) | Return | 3.2 | 6 |