
Denoising Particle Filters: Learning State Estimation with Single-Step Objectives

About

Learning-based methods commonly treat state estimation in robotics as a sequence modeling problem. While this paradigm can be effective at maximizing end-to-end performance, models are often difficult to interpret and expensive to train, since training requires unrolling sequences of predictions in time. As an alternative to end-to-end trained state estimation, we propose a novel particle filtering algorithm in which models are trained from individual state transitions, fully exploiting the Markov property in robotic systems. In this framework, measurement models are learned implicitly by minimizing a denoising score matching objective. At inference, the learned denoiser is used alongside a (learned) dynamics model to approximately solve the Bayesian filtering equation at each time step, effectively guiding predicted states toward the data manifold informed by measurements. We evaluate the proposed method on challenging robotic state estimation tasks in simulation, demonstrating competitive performance compared to tuned end-to-end trained baselines. Importantly, our method offers the desirable composability of classical filtering algorithms, allowing prior information and external sensor models to be incorporated without retraining.
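To make the filtering loop concrete, here is a minimal toy sketch of the predict-then-guide structure the abstract describes: particles are propagated through a dynamics model, then nudged along a score toward states consistent with the current measurement. All names (`dynamics`, `score`, `filter_step`) and the closed-form Gaussian score are illustrative assumptions; in the actual method the score is learned implicitly via denoising score matching, not given analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(x, noise_scale=0.05):
    # Toy (assumed) dynamics model: slow drift plus process noise.
    return 0.99 * x + noise_scale * rng.standard_normal(x.shape)

def score(x, z, sigma=0.5):
    # Stand-in for the learned denoiser: gradient of a Gaussian
    # measurement log-likelihood, pulling particles toward z.
    # The paper learns this quantity with a denoising score
    # matching objective instead of assuming a closed form.
    return (z - x) / sigma**2

def filter_step(particles, z, step_size=0.05, n_steps=10):
    # 1) Predict: propagate each particle through the dynamics model.
    particles = dynamics(particles)
    # 2) Update: guide particles toward the measurement-informed
    #    manifold by taking a few steps along the score.
    for _ in range(n_steps):
        particles = particles + step_size * score(particles, z)
    return particles

particles = rng.standard_normal((100, 2))  # 100 particles, 2-D state
z = np.array([1.0, -1.0])                  # current measurement
for _ in range(20):                        # run several filtering steps
    particles = filter_step(particles, z)
estimate = particles.mean(axis=0)          # ends up near z
```

Because each `filter_step` only needs the current particles and measurement, training data for such a model can consist of single state transitions rather than full unrolled sequences, which is the composability and efficiency argument made above.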

Lennart Röstel, Berthold Bäuml • 2026

Related benchmarks

Task                Dataset                              Metric   Result   Rank
State estimation    Manipulator Spin (test)              M_IQM    -1.1     5
State estimation    Multi-fingered Manipulation (test)   M_IQM    3.1      5
State estimation    Cluttered Push ID (test)             M_IQM    0.3      5
State estimation    Cluttered Push OOD (test)            M_IQM    2.8      5
