Intent at a Glance: Gaze-Guided Robotic Manipulation via Foundation Models
About
Designing intuitive interfaces for robotic control remains a central challenge in enabling effective human-robot interaction, particularly in assistive care settings. Eye gaze offers a fast, non-intrusive, and intent-rich input modality, making it an attractive channel for conveying user goals. In this work, we present GAMMA (Gaze Assisted Manipulation for Modular Autonomy), a system that leverages egocentric gaze tracking and a vision-language model to infer user intent and autonomously execute robotic manipulation tasks. By contextualizing gaze fixations within the scene, the system maps visual attention to high-level semantic understanding, enabling skill selection and parameterization without task-specific training. We evaluate GAMMA on a range of tabletop manipulation tasks and compare it against a baseline gaze-based controller without semantic reasoning. Results demonstrate that GAMMA provides robust, intuitive, and generalizable control, highlighting the potential of combining foundation models and gaze for natural and scalable robot autonomy. Project website: https://gamma0.vercel.app/
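The abstract describes a gaze-to-action pipeline: detect a gaze fixation, ground it in the scene with a vision-language model, and emit a parameterized skill. The sketch below illustrates one plausible shape for that loop. The paper does not publish an API, so every name here (`GazeFixation`, `SkillCall`, `query_vlm`, the skill vocabulary) is a hypothetical assumption, and the VLM call is stubbed so the example runs without model access.

```python
from dataclasses import dataclass, field

@dataclass
class GazeFixation:
    """A gaze fixation in the egocentric camera frame (hypothetical type)."""
    x: int            # pixel coordinates of the fixation
    y: int
    duration_s: float # dwell time, used as an intent signal

@dataclass
class SkillCall:
    """A selected, parameterized manipulation skill (hypothetical type)."""
    skill: str                 # e.g. "pick", "place", "hand_over"
    target: str                # object label inferred from the gaze context
    params: dict = field(default_factory=dict)

def describe_gaze_context(frame, fixation: GazeFixation) -> str:
    """Build a text prompt that grounds the fixation in the scene.

    A real system would crop the image around (x, y) and pass it to the
    vision-language model; here we only format the prompt.
    """
    return (
        f"The user fixated pixel ({fixation.x}, {fixation.y}) for "
        f"{fixation.duration_s:.1f}s. Which object are they attending to, and "
        "which skill (pick, place, hand_over) applies?"
    )

def query_vlm(frame, prompt: str) -> SkillCall:
    """Stand-in for a vision-language model call.

    Returns a structured skill selection; hard-coded here so the sketch
    is runnable without an actual model.
    """
    return SkillCall(skill="pick", target="mug", params={"grasp": "top-down"})

def gamma_step(frame, fixation: GazeFixation) -> SkillCall:
    """One gaze-to-action cycle: contextualize the fixation, query the
    VLM, and return a parameterized skill for the robot executor."""
    prompt = describe_gaze_context(frame, fixation)
    return query_vlm(frame, prompt)

if __name__ == "__main__":
    call = gamma_step(frame=None, fixation=GazeFixation(x=412, y=288, duration_s=1.2))
    print(f"Execute skill '{call.skill}' on '{call.target}' with {call.params}")
```

The key design point this sketch tries to capture is that gaze alone only yields a pixel location; the VLM supplies the semantic step from "where the user is looking" to "which object and which skill", which is what lets the approach generalize without task-specific training.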
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Intent Recognition | Scenario Static S1 | Selection Accuracy | 84 | 6 |
| Intent Recognition | Scenario 1 Dynamic | Tracking Rate | 13 | 6 |
| Robot Task Execution | Robot Task Scenarios Scenario S3 | Command Duration (s) | 7.6 | 5 |
| Robot Task Execution | Robot Task Scenarios Scenario S4 | Command Duration (s) | 2.4 | 5 |
| User Study | User Study | NASA-TLX Score | 51.05 | 5 |