Sound Value Iteration for Simple Stochastic Games
About
Algorithmic analysis of Markov decision processes (MDP) and stochastic games (SG) in practice relies on value-iteration (VI) algorithms. Since the basic version of VI does not provide guarantees on the precision of the result, variants of VI have been proposed that offer such guarantees. In particular, sound value iteration (SVI) not only provides precise lower and upper bounds on the result, but also converges faster in the presence of probabilistic cycles. Unfortunately, it is applicable neither to SG nor to MDP with end components. In this paper, we extend SVI to cover both cases. The technical challenge lies mainly in the proper treatment of end components, which require different handling than in the literature. Moreover, we provide several optimizations of SVI. Finally, we evaluate our prototype implementation experimentally to confirm its advantages on systems with probabilistic cycles.
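To illustrate the lower/upper-bound idea behind sound VI, here is a minimal sketch of bounded (interval) value iteration for maximum reachability probability on a toy MDP. This is not the paper's SVI algorithm, and the example MDP, function names, and the qualitative preprocessing step (setting the upper bound to 0 for states that cannot reach the target, which sidesteps the end-component problem the paper addresses) are illustrative assumptions.

```python
def can_reach(mdp, target):
    """Graph reachability: states with some path to the target set."""
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for s in mdp:
            if s not in reach and any(
                any(t in reach for _, t in a) for a in mdp[s]
            ):
                reach.add(s)
                changed = True
    return reach

def bounded_vi(mdp, target, eps=1e-6):
    """Iterate lower and upper bounds on Pr_max(reach target).

    mdp: state -> list of actions; each action is a list of (prob, succ).
    Returns (lo, hi) with hi[s] - lo[s] <= eps for all states s.
    """
    reach = can_reach(mdp, target)
    lo = {s: 1.0 if s in target else 0.0 for s in mdp}
    # Qualitative preprocessing: states that cannot reach the target
    # get upper bound 0, so the upper bound can actually converge.
    hi = {s: 1.0 if s in reach else 0.0 for s in mdp}
    while max(hi[s] - lo[s] for s in mdp) > eps:
        for v in (lo, hi):
            new = {}
            for s in mdp:
                if s in target:
                    new[s] = 1.0
                else:
                    new[s] = max(sum(p * v[t] for p, t in a) for a in mdp[s])
            v.update(new)
    return lo, hi

# Toy MDP: from s0 the single action loops back with prob 0.5 or reaches
# the target s1 with prob 0.5; s2 is a losing sink. Pr_max from s0 is 1,
# but plain VI from below only approaches it through the probabilistic
# cycle on s0; the bounds certify the precision of the approximation.
mdp = {
    "s0": [[(0.5, "s0"), (0.5, "s1")]],
    "s1": [[(1.0, "s1")]],
    "s2": [[(1.0, "s2")]],
}
lo, hi = bounded_vi(mdp, {"s1"})
```

With an end component such as the self-loop on s2, a naive upper bound initialized to 1 would never decrease; the reachability preprocessing above handles this simple case, whereas general SG and MDP end components require the more involved treatment developed in the paper.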
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Value Iteration | consensus c2 | Iterations | 8.72e+3 | 2 |
| Value Iteration | consensus disagree | Iterations | 8.75e+3 | 2 |
| Value Iteration | wlan | Iterations | 271 | 2 |
| Value Iteration | zeroconf correct max | Iterations | 37 | 2 |
| Value Iteration | zeroconf correct min | Iterations | 22 | 2 |
| Value Iteration | zeroconf dl deadline mn | Iterations | 28 | 2 |
| Value Iteration | zeroconf dl deadline mx | Iterations | 1 | 2 |
| Value Iteration | csma all before max | Iterations | 16 | 2 |
| Value Iteration | csma all before min | Iterations | 16 | 2 |
| Value Iteration | csma some before | Iterations | 6 | 2 |