Denoising Particle Filters: Learning State Estimation with Single-Step Objectives
About
Learning-based methods commonly treat state estimation in robotics as a sequence modeling problem. While this paradigm can be effective at maximizing end-to-end performance, the resulting models are often difficult to interpret and expensive to train, since training requires unrolling sequences of predictions in time. As an alternative to end-to-end trained state estimation, we propose a novel particle filtering algorithm in which models are trained from individual state transitions, fully exploiting the Markov property of robotic systems. In this framework, measurement models are learned implicitly by minimizing a denoising score matching objective. At inference, the learned denoiser is used alongside a (learned) dynamics model to approximately solve the Bayesian filtering equation at each time step, effectively guiding predicted states toward the data manifold informed by measurements. We evaluate the proposed method on challenging robotic state estimation tasks in simulation, demonstrating competitive performance compared to tuned end-to-end trained baselines. Importantly, our method offers the desirable composability of classical filtering algorithms, allowing prior information and external sensor models to be incorporated without retraining.
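The filtering step described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the dynamics model is a hand-written linear stand-in, and the learned denoiser's score is replaced by the analytic score of a Gaussian measurement model. All function names (`dynamics`, `score`, `filter_step`) and parameter values are hypothetical; in the proposed method the score would come from a network trained with a denoising score matching objective.

```python
import numpy as np

def dynamics(x, rng):
    # Hypothetical stand-in for a (learned) dynamics model:
    # propagate particles with simple noisy linear dynamics.
    return 0.9 * x + rng.normal(scale=0.1, size=x.shape)

def score(x, y, sigma=0.5):
    # Score (gradient of log-density) of a Gaussian measurement model
    # N(y, sigma^2). In the paper's framework this gradient would be
    # supplied by the learned denoiser instead of a closed form.
    return (y - x) / sigma**2

def filter_step(particles, y, rng, n_langevin=50, step=0.01):
    # 1. Predict: push every particle through the dynamics model.
    particles = dynamics(particles, rng)
    # 2. Update: a few Langevin steps guide the predicted particles
    #    toward states consistent with the current measurement y.
    for _ in range(n_langevin):
        noise = rng.normal(size=particles.shape)
        particles = particles + step * score(particles, y) \
            + np.sqrt(2.0 * step) * noise
    return particles

rng = np.random.default_rng(0)
particles = rng.normal(size=128)        # initial particle set
for y in [0.5, 0.4, 0.45]:              # a short stream of scalar measurements
    particles = filter_step(particles, y, rng)
estimate = particles.mean()             # point estimate from the particle set
```

Because each `filter_step` consumes only the previous particle set and the current measurement, the sketch mirrors the single-step (Markov) structure the method exploits: nothing is unrolled through time at training or inference.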
Related benchmarks
| Task | Dataset | Result (M_IQM) | Rank |
|---|---|---|---|
| State estimation | Manipulator Spin (test) | -1.1 | 5 |
| State estimation | Multi-fingered Manipulation (test) | 3.1 | 5 |
| State estimation | Cluttered Push ID (test) | 0.3 | 5 |
| State estimation | Cluttered Push OOD (test) | 2.8 | 5 |