Improving Pre-Trained Vision-Language-Action Policies with Model-Based Search
About
Pre-trained vision-language-action (VLA) models offer a promising foundation for generalist robot policies, but often produce brittle behaviors or unsafe failures when deployed zero-shot in out-of-distribution scenarios. We present Vision-Language-Action Planning & Search (VLAPS) -- a novel framework and accompanying algorithms that embed model-based search into the inference procedure of pre-trained VLA policies to improve their performance on robotic tasks. Specifically, our method biases a modified Monte Carlo Tree Search (MCTS) algorithm -- run using a model of the target environment -- using action priors defined by the VLA policy. By using VLA-derived abstractions and priors in model-based search, VLAPS efficiently explores language-conditioned robotics tasks whose search spaces would otherwise be intractably large. Conversely, by integrating model-based search with the VLA policy's inference procedure, VLAPS yields behaviors that are more performant than those obtained by directly following the VLA policy's action predictions. VLAPS offers a principled framework to: i) control test-time compute in VLA models, ii) leverage a priori knowledge of the robotic environment, and iii) integrate established planning and reinforcement learning techniques into the VLA inference process. Across all experiments, VLAPS significantly outperforms VLA-only baselines on language-specified tasks that would otherwise be intractable for uninformed search algorithms, increasing success rates by as much as 67 percentage points.
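The core mechanism described above, biasing MCTS with action priors from a policy, can be illustrated with an AlphaZero-style PUCT selection rule. The sketch below is a minimal, generic illustration, not the VLAPS implementation: the environment, the `prior_fn` stand-in for the VLA policy, and all function names are assumptions for demonstration.

```python
import math

def puct_score(child_value, child_visits, prior, parent_visits, c_puct=1.5):
    """PUCT rule: mean value (exploitation) + prior-weighted exploration.
    The prior term is where a policy (here, a stand-in for the VLA model)
    biases the search toward actions it considers likely."""
    q = child_value / child_visits if child_visits > 0 else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

class Node:
    def __init__(self, state, prior=1.0):
        self.state = state
        self.prior = prior      # policy's probability of reaching this node
        self.visits = 0
        self.value = 0.0        # sum of backed-up returns
        self.children = {}      # action -> Node

def mcts(root_state, step_fn, actions_fn, prior_fn, value_fn, n_sims=200):
    """Monte Carlo Tree Search with policy priors over a known model
    (step_fn). Returns the most-visited root action."""
    root = Node(root_state)
    for _ in range(n_sims):
        node, path = root, [root]
        # Selection: descend the tree via PUCT until an unexpanded node.
        while node.children:
            parent = node
            action, node = max(
                parent.children.items(),
                key=lambda kv: puct_score(kv[1].value, kv[1].visits,
                                          kv[1].prior, parent.visits))
            path.append(node)
        # Expansion: create children weighted by the policy's priors.
        priors = prior_fn(node.state)
        for a in actions_fn(node.state):
            node.children[a] = Node(step_fn(node.state, a),
                                    priors.get(a, 0.0))
        # Evaluation + backup (value_fn stands in for a rollout/critic).
        ret = value_fn(node.state)
        for n in path:
            n.visits += 1
            n.value += ret
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy chain environment: move right from 0 toward goal state 5.
step = lambda s, a: min(s + 1, 5) if a == 'right' else max(s - 1, 0)
actions = lambda s: ['left', 'right'] if s < 5 else []
priors = lambda s: {'right': 0.8, 'left': 0.2}  # policy favors 'right'
value = lambda s: s / 5.0
best = mcts(0, step, actions, priors, value, n_sims=100)
```

With a prior favoring `'right'` and a value function increasing toward the goal, the search concentrates visits on the rightward branch and returns `'right'`; the same structure applies when the priors come from a pre-trained policy and the model is a simulator of the robot environment.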
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Planning | LIBERO Spatial Suite | Average MCTS Simulations | 24.81 | 33 |
| Planning | LIBERO Object Suite | Average MCTS Simulations | 28.99 | 33 |
| Robot Manipulation | LIBERO Spatial Suite (test) | Task 0 Success Rate | 100% | 4 |
| Robot Manipulation | LIBERO Object Suite (test) | Task 0 Success Rate | 96% | 4 |