
Online algorithms for POMDPs with continuous state, action, and observation spaces

About

Online solvers for partially observable Markov decision processes have been applied to problems with large discrete state spaces, but continuous state, action, and observation spaces remain a challenge. This paper begins by investigating double progressive widening (DPW) as a solution to this challenge. However, we prove that this modification alone is not sufficient because the belief representations in the search tree collapse to a single particle, causing the algorithm to converge to a suboptimal policy regardless of the computation time. This paper proposes and evaluates two new algorithms, POMCPOW and PFT-DPW, that overcome this deficiency by using weighted particle filtering. Simulation results show that these modifications allow the algorithms to succeed where previous approaches fail.
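The key fix the abstract describes is replacing unweighted particle beliefs with weighted particle filtering, so that a received observation reweights the belief instead of collapsing it. A minimal sketch of one such weighted update step, assuming hypothetical generative-model callbacks `transition_sample` and `obs_likelihood` (not part of the paper's code):

```python
import random

def weighted_particle_update(particles, action, observation,
                             transition_sample, obs_likelihood, rng=random):
    """One weighted particle-filter step: propagate each state particle
    through a (hypothetical) generative model, weight each propagated
    particle by the likelihood of the received observation, then
    resample to keep a fixed-size particle collection."""
    propagated = [transition_sample(s, action, rng) for s in particles]
    weights = [obs_likelihood(observation, action, sp) for sp in propagated]
    total = sum(weights)
    if total == 0.0:
        # Observation impossible under every particle: fall back to the
        # unweighted propagated set rather than dividing by zero.
        return propagated
    # Resample proportional to weight; high-likelihood particles are
    # duplicated, low-likelihood ones die out.
    return rng.choices(propagated, weights=weights, k=len(particles))
```

This is only an illustrative sketch of the general technique; the paper's POMCPOW and PFT-DPW algorithms interleave such weighted updates with progressive widening inside the search tree.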

Zachary Sunberg, Mykel Kochenderfer • 2017

Related benchmarks

Task | Dataset | Result | Rank
POMDP Planning | RockSample (15, 15) | Expected Return: 11.01 | 19
POMDP Planning | LightDark 10 | Return: 1.08 | 15
POMDP Planning | RockSample (20, 20) | Expected Return: 9.92 | 10
POMDP Planning | Matterport3D Object Search (MOS) (5, 3) | Return: 7.5 | 6
POMDP Planning | Rearrange (5, 2) | Return: 4.3 | 6
POMDP Planning | RockSample (25, 25) | Return: 2.1 | 6
POMDP Planning | MOS (6, 4) | Return: 5.5 | 6
POMDP Planning | MOS (7, 5) | Return: 3.8 | 6
POMDP Planning | MOS (8, 6) | Return: 0.00e+0 | 6
POMDP Planning | Rearrange (6, 4) | Return: 3 | 6

Showing 10 of 18 rows.
