
ACG: Action Coherence Guidance for Flow-based Vision-Language-Action models

About

Diffusion and flow matching models have emerged as powerful robot policies, enabling Vision-Language-Action (VLA) models to generalize across diverse scenes and instructions. Yet, when trained via imitation learning, their high generative capacity makes them sensitive to noise in human demonstrations: jerks, pauses, and jitter, which reduce action coherence. Reduced action coherence causes instability and trajectory drift during deployment, failures that are catastrophic in fine-grained manipulation where precision is crucial. In this paper, we present Action Coherence Guidance (ACG) for VLA models, a training-free test-time guidance algorithm that improves action coherence and thereby yields performance gains. Evaluated on RoboCasa, DexMimicGen, and real-world SO-101 tasks, ACG consistently improves action coherence and boosts success rates across diverse manipulation tasks. Code and project page are available at https://github.com/DAVIAN-Robotics/ACG and https://DAVIAN-Robotics.github.io/ACG, respectively.
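The abstract does not spell out the ACG update rule, so the following is only a generic sketch of how training-free test-time guidance can steer a flow-matching policy: two velocity predictions are combined at each integration step, extrapolating the main prediction away from a weaker reference. The function names, the CFG-style combination, and the guidance weight `w` are illustrative assumptions, not ACG's actual formulation.

```python
import numpy as np

def guided_velocity(v_main, v_ref, w=2.0):
    """Guidance-style combination of two velocity predictions.
    NOTE: illustrative only; ACG's actual guidance term is not
    given in this abstract. Extrapolates the main prediction
    away from a weaker reference prediction by weight w."""
    return v_ref + w * (v_main - v_ref)

def euler_flow_sample(predict_main, predict_ref, x0, steps=10, w=2.0):
    """Integrate the guided velocity field from t=0 to t=1 with
    plain Euler steps, turning initial noise x0 into an action
    chunk. predict_* are callables (x, t) -> velocity array."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        v = guided_velocity(predict_main(x, t), predict_ref(x, t), w)
        x = x + dt * v
    return x
```

With `w = 1` the guidance is a no-op (the main prediction is returned unchanged); larger `w` pushes the sampled actions further along the direction that distinguishes the main prediction from the reference, which is the general mechanism a test-time guidance scheme exploits.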

Minho Park, Kinam Kim, Junha Hyung, Hyojin Jang, Hoiyeong Jin, Jooyeol Yun, Hojoon Lee, Jaegul Choo• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Robotic Manipulation | RoboCasa | - | - | 28 |
| Robot Manipulation | DexMG | Success Rate | 44 | 8 |
| Robot Manipulation | Three Strawberries SO-101 | Success Rate | 74.4 | 8 |
| Robot Manipulation | Tic-Tac-Toe SO-101 | Success Rate | 56.7 | 8 |
| Robot Manipulation | Average Across Simulation and Real-world | Success Rate | 53.6 | 8 |
