
Q-Guided Stein Variational Model Predictive Control via RL-informed Policy Prior

About

Model Predictive Control (MPC) enables reliable trajectory optimization under dynamics constraints, but often depends on accurate dynamics models and carefully hand-designed cost functions. Recent learning-based MPC methods aim to reduce these modeling and cost-design burdens by learning dynamics, priors, or value-related guidance signals. Yet many existing approaches still rely on deterministic gradient-based solvers (e.g., differentiable MPC) or parametric sampling-based updates (e.g., CEM/MPPI), which can lead to mode collapse and convergence to a single dominant solution. We propose Q-SVMPC, a Q-guided Stein variational MPC method with an RL-informed policy prior, which casts learning-based MPC as trajectory-level posterior inference and refines trajectory particles via SVGD under learned soft Q-value guidance to explicitly preserve diverse solutions. Experiments on navigation, robotic manipulation, and a real-world fruit-picking task show improved sample efficiency, stability, and robustness over MPC, model-free RL, and learning-based MPC baselines.
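To make the core mechanism concrete: the abstract's "refines trajectory particles via SVGD" refers to Stein Variational Gradient Descent, which moves a set of particles along a kernel-smoothed score plus a repulsive term that keeps the particles spread over multiple modes (the stated alternative to CEM/MPPI-style mode collapse). The sketch below is not the paper's implementation; it is a generic SVGD update step, assuming an RBF kernel and a user-supplied score function `grad_logp` (in Q-SVMPC this score would come from the learned soft Q-value guidance over trajectories).

```python
import numpy as np

def svgd_step(particles, grad_logp, bandwidth=1.0, step_size=0.1):
    """One SVGD update over a set of particles.

    particles : (n, d) array of particle positions (e.g. flattened trajectories)
    grad_logp : function mapping (n, d) particles to (n, d) scores
                (gradient of the log target density at each particle)
    """
    n = len(particles)
    # pairwise differences x_j - x_i, shape (n, n, d)
    diff = particles[:, None, :] - particles[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)            # (n, n)
    K = np.exp(-sq_dist / bandwidth)                # RBF kernel matrix

    scores = grad_logp(particles)                   # (n, d)
    # attractive term: kernel-weighted scores pull particles toward high density
    drive = K @ scores
    # repulsive term: gradient of the kernel pushes particles apart,
    # which is what preserves diverse solutions instead of collapsing to one mode
    repulse = -2.0 / bandwidth * np.sum(K[:, :, None] * diff, axis=0)

    return particles + step_size * (drive + repulse) / n
```

Running repeated `svgd_step` calls against, say, a Gaussian score `lambda X: -X` drives the particle cloud toward the mode while the repulsive term maintains spread; in the MPC setting described above, each particle would be a candidate control trajectory rather than a point.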

Shizhe Cai, Zeya Yin, Jayadeep Jacob, Fabio Ramos • 2025

Related benchmarks

Task               | Dataset                   | Metric                       | Result | Rank
Reach              | Reach Obstacles           | Collision Rate (%)           | 0.0242 | 7
Navigation         | 2D Navigation             | Collision Rate               | 5.49   | 7
Reach              | Kinova manipulation suite | SR @ 100%                    | 89.3   | 7
Reach (Obstacles)  | Kinova manipulation suite | SR @ 100%                    | 82.6   | 5
Obstacle Avoidance | Kinova Real-World         | Success Rate                 | 93.3   | 3
Pick-&-Place       | Kinova manipulation suite | Success Rate @ 75% Threshold | 91.4   | 3
Target Reaching    | Kinova Real-World         | Success Rate                 | 80     | 3
