Search and Explore: Symbiotic Policy Synthesis in POMDPs
About
This paper marries two state-of-the-art controller synthesis methods for partially observable Markov decision processes (POMDPs), a prominent model in sequential decision making under uncertainty. A central issue is to find a POMDP controller - one that decides based solely on the observations seen so far - that achieves a total expected reward objective. As finding optimal controllers is undecidable, we concentrate on synthesising good finite-state controllers (FSCs). We do so by tightly integrating two modern, orthogonal methods for POMDP controller synthesis: a belief-based and an inductive approach. The former obtains an FSC from a finite fragment of the so-called belief MDP, an MDP whose states track the probability of being in each of the equally observable POMDP states. The latter is an inductive search technique over a set of FSCs, e.g., controllers with a fixed memory size. The key result of this paper is a symbiotic anytime algorithm that tightly integrates both approaches such that each profits from the controllers constructed by the other. Experimental results indicate a substantial improvement in the value of the controllers while significantly reducing the synthesis time and memory footprint.
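To make the belief MDP concrete: its states are beliefs, i.e., probability distributions over POMDP states that share the same observation history, and each transition is a Bayes update on an action-observation pair. The following is a minimal sketch of that update on a hypothetical two-state model; the names (`T`, `O`, `update_belief`) are illustrative and not taken from the paper's tooling.

```python
# Sketch of a single belief-MDP transition: a Bayesian belief update.
# b'(s') is proportional to O[s'][z] * sum_s b(s) * T[s][a][s'].
# T: state -> action -> dict of successor probabilities
# O: state -> dict of observation probabilities

def update_belief(belief, action, observation, T, O):
    """Return the updated belief after taking `action` and seeing `observation`."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        p = O[s2].get(observation, 0.0) * sum(
            belief[s] * T[s][action].get(s2, 0.0) for s in states
        )
        if p > 0:
            new_belief[s2] = p
    norm = sum(new_belief.values())
    if norm == 0:
        raise ValueError("observation has zero probability under this belief")
    return {s: p / norm for s, p in new_belief.items()}

# Illustrative model: the agent stays put; a noisy sensor reports the
# true state with probability 0.8.
T = {"left": {"stay": {"left": 1.0}}, "right": {"stay": {"right": 1.0}}}
O = {"left": {"see_left": 0.8, "see_right": 0.2},
     "right": {"see_left": 0.2, "see_right": 0.8}}
b0 = {"left": 0.5, "right": 0.5}
b1 = update_belief(b0, "stay", "see_left", T, O)
# From the uniform belief, observing "see_left" shifts mass to "left":
# b1 == {"left": 0.8, "right": 0.2}
```

Since beliefs form a continuous space, the belief MDP is in general infinite; the belief-based method in the paper works with a finite fragment of it, which is why the resulting controllers are finite-state.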
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| POMDP Planning | maze-10 POMDP PRISM format (original/enlarged) | Value (IQM) 8.59 | 4 |
| POMDP Planning | rocks-16 POMDP PRISM format (original/enlarged) | Value (IQM) -36.91 | 4 |
| POMDP Planning | network-3-8-20 POMDP PRISM format (original/enlarged) | Value (IQM) -10.45 | 4 |
| POMDP Planning | network-5-10-8 POMDP PRISM format (original/enlarged) | Value (IQM) -16.12 | 4 |
| POMDP Planning | intercept-16 POMDP PRISM format (original/enlarged) | Value (IQM) 0.8 | 4 |
| POMDP Planning | evade-n17 POMDP PRISM format (original/enlarged) | Value (IQM) 0.58 | 4 |
| POMDP Planning | drone-2-8-1 POMDP PRISM format (original/enlarged) | Value (IQM) 0.4 | 4 |