IVRA: Improving Visual-Token Relations for Robot Action Policy with Training-Free Hint-Based Guidance

About

Many Vision-Language-Action (VLA) models flatten image patches into a 1D token sequence, weakening the 2D spatial cues needed for precise manipulation. We introduce IVRA, a lightweight, training-free method that improves spatial understanding by exploiting affinity hints already available in the model's built-in vision encoder, without requiring any external encoder or retraining. IVRA selectively injects these affinity signals into a language-model layer in which instance-level features reside. This inference-time intervention realigns visual-token interactions and better preserves geometric structure while keeping all model parameters fixed. We demonstrate the generality of IVRA by applying it to diverse VLA architectures (LLaRA, OpenVLA, and FLOWER) across simulated benchmarks spanning both 2D and 3D manipulation (VIMA and LIBERO) and on various real-robot tasks. On 2D VIMA, IVRA improves average success by +4.2% over the baseline LLaRA in a low-data regime. On 3D LIBERO, it yields consistent gains over the OpenVLA and FLOWER baselines, including improvements when baseline accuracy is near saturation (96.3% to 97.1%). All code and models will be released publicly. Visualizations are available at: jongwoopark7978.github.io/IVRA
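The abstract describes IVRA's core mechanism at a high level: compute affinity hints among visual patch tokens using features from the model's own vision encoder, then inject those hints into the attention of one language-model layer at inference time, with all weights frozen. The sketch below is purely illustrative and not the paper's actual implementation; the function names, the cosine-similarity affinity, the additive-bias injection, and the choice of layer are all assumptions for exposition.

```python
import numpy as np

def affinity_hint_bias(vision_feats, scale=1.0):
    """Cosine-similarity affinity among visual patch tokens.

    vision_feats: (P, D) patch features taken from the model's
    built-in vision encoder (no external encoder, per the paper).
    Returns a (P, P) matrix of affinity hints.
    """
    norm = vision_feats / np.linalg.norm(vision_feats, axis=-1, keepdims=True)
    return scale * (norm @ norm.T)

def inject_affinity(attn_logits, affinity, visual_idx):
    """Add affinity hints to one LM layer's attention logits,
    touching only visual-token-to-visual-token interactions.

    attn_logits: (T, T) pre-softmax attention scores for the full
    token sequence; visual_idx: positions of the visual tokens.
    All model parameters stay fixed -- this is a pure
    inference-time intervention on the attention scores.
    """
    out = attn_logits.copy()
    ix = np.ix_(visual_idx, visual_idx)
    out[ix] = out[ix] + affinity
    return out
```

In this sketch the affinity bias re-emphasizes geometric relations between patches that 1D flattening weakens; text-to-text and text-to-visual attention entries are left untouched.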

Jongwoo Park, Kanchana Ranasinghe, Jinhyeok Jang, Cristina Mata, Yoo Sung Jang, Michael S. Ryoo • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robot Manipulation | LIBERO | Goal Achievement | 97.6 | 494
Robot manipulation generalization | VIMA-Bench | Novel Task | 37.5 | 5
Cluttered Localization (T3) | Real-world robot experiments | Success Rate | 75 | 4
Color Match (T2) | Real-world robot experiments | Success Rate | 60 | 4
Relative Height (T4) | Real-world robot experiments | Success Rate | 7.00e+3 | 4
Target Object (T1) | Real-world robot experiments | Success Rate | 6.00e+3 | 4
