
AVA-VLA: Improving Vision-Language-Action models with Active Visual Attention

About

Vision-Language-Action (VLA) models have shown remarkable progress in embodied tasks recently, but most methods process visual observations independently at each timestep. This history-agnostic design treats robot manipulation as a Markov Decision Process, even though real-world robotic control is inherently partially observable and requires reasoning over past interactions. To address this mismatch, we reformulate VLA policy learning from a Partially Observable Markov Decision Process perspective and propose AVA-VLA, a framework that conditions action generation on a recurrent state that serves as a neural approximation to the agent's belief over task history. Built on this recurrent state, we introduce Active Visual Attention (AVA), which dynamically reweights visual tokens in the current observation to focus on regions most relevant given both the instruction and execution history. Extensive experiments show that AVA-VLA achieves state-of-the-art performance on standard robotic benchmarks, including LIBERO and CALVIN, and transfers effectively to real-world dual-arm manipulation tasks. These results demonstrate the effectiveness of temporally grounded active visual processing for improving VLA performance in robotic sequential decision-making. The project page is available at https://liauto-dsr.github.io/AVA-VLA-Page.
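The core ideas in the abstract — a recurrent state approximating the agent's belief over task history, and attention weights over visual tokens conditioned on both that state and the instruction — can be illustrated with a minimal sketch. This is not the paper's implementation; all dimensions, weight matrices, and the tanh/softmax choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16        # feature dimension (illustrative)
n_tokens = 8  # visual tokens per observation (illustrative)
T = 5         # rollout length

# Hypothetical parameters; a real model would learn these.
W_h = rng.normal(0, 0.1, (d, d))   # recurrent (belief-update) weights
W_o = rng.normal(0, 0.1, (d, d))   # observation weights
W_q = rng.normal(0, 0.1, (d, d))   # maps belief + instruction to an attention query

instruction = rng.normal(size=d)   # stand-in for a language-instruction embedding

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

h = np.zeros(d)  # recurrent "belief" state over task history
for t in range(T):
    tokens = rng.normal(size=(n_tokens, d))      # visual tokens at step t
    # Active Visual Attention: score current tokens against belief + instruction,
    # so the same scene is weighted differently depending on execution history.
    query = W_q @ np.tanh(h + instruction)
    weights = softmax(tokens @ query / np.sqrt(d))
    attended = weights @ tokens                  # history-aware visual summary
    # Belief update: condition on prior state and the attended observation.
    h = np.tanh(W_h @ h + W_o @ attended)
```

The key design point the sketch captures is that `weights` depends on `h`, so visual processing is active (history-conditioned) rather than treating each timestep as an independent Markov observation.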

Lei Xiao, Jifeng Li, Juntao Gao, Feiyang Ye, Yan Jin, Jingjing Qian, Jing Zhang, Yong Wu, Xiaoyuan Yu · 2025

Related benchmarks

Task                                                   | Dataset       | Metric                | Result | Rank
-------------------------------------------------------|---------------|-----------------------|--------|-----
Robotic Manipulation                                   | LIBERO (test) | Object Success Rate   | 99.6   | 45
Robot Manipulation Success Rate                        | LIBERO-Plus   | Success Rate (Camera) | 69.4   | 11
Language-conditioned long-horizon robotic manipulation | Calvin ABC->D | Success Rate (1 Task) | 99.6   | 8
