Do You See What I Am Pointing At? Gesture-Based Egocentric Video Question Answering

About

Understanding and answering questions grounded in a user's pointing gesture is essential for next-generation egocentric AI assistants. However, current Multimodal Large Language Models (MLLMs) struggle with such tasks due to the lack of gesture-rich data and their limited ability to infer fine-grained pointing intent from egocentric video. To address this, we introduce EgoPointVQA, a dataset and benchmark for gesture-grounded egocentric question answering, comprising 4,000 synthetic and 400 real-world videos across multiple deictic reasoning tasks. Building on it, we further propose Hand Intent Tokens (HINT), which derives tokens from 3D hand keypoints reconstructed by an off-the-shelf model and interleaves them with the model input, providing explicit spatial and temporal context for interpreting pointing intent. We show that our model outperforms alternatives across different backbones and model sizes. In particular, HINT-14B achieves 68.1% accuracy on average over the six tasks, surpassing the state of the art, InternVL3-14B, by 6.6%. To further facilitate open research, we will release the code, model, and dataset. Project page: https://yuuraa.github.io/papers/choi2026egovqa

Yura Choi, Roy Miles, Rolandos Alexandros Potamias, Ismail Elezi, Jiankang Deng, Stefanos Zafeiriou • 2026
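
The abstract does not spell out how HINT's tokens are built or injected, so the snippet below is only a minimal sketch of the idea as described: per-frame 3D hand keypoints from an off-the-shelf reconstruction model are projected into the LLM embedding space and interleaved with each frame's visual tokens. The names (HandIntentTokenizer, interleave), the 21-joint keypoint layout, the embedding size, and the one-token-per-frame scheme are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the Hand Intent Token (HINT) idea, in PyTorch.
# Shapes, names, and the interleaving scheme are assumptions for illustration.
import torch
import torch.nn as nn


class HandIntentTokenizer(nn.Module):
    def __init__(self, num_joints: int = 21, embed_dim: int = 1024):
        super().__init__()
        # Project the flattened (x, y, z) keypoints of one frame to one token.
        self.proj = nn.Linear(num_joints * 3, embed_dim)

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (T, num_joints, 3) -> hand tokens: (T, 1, embed_dim)
        flat = keypoints.flatten(start_dim=1)   # (T, num_joints * 3)
        return self.proj(flat).unsqueeze(1)     # one intent token per frame


def interleave(visual_tokens: torch.Tensor, hand_tokens: torch.Tensor) -> torch.Tensor:
    """Append each frame's hand token after that frame's visual tokens.

    visual_tokens: (T, N, D) -- N visual tokens per frame
    hand_tokens:   (T, 1, D) -- one hand-intent token per frame
    returns:       (T * (N + 1), D) flattened interleaved sequence
    """
    t, n, d = visual_tokens.shape
    merged = torch.cat([visual_tokens, hand_tokens], dim=1)  # (T, N + 1, D)
    return merged.reshape(t * (n + 1), d)


# Toy usage: 8 frames, 196 visual tokens per frame, 1024-dim embeddings.
tokenizer = HandIntentTokenizer()
kps = torch.randn(8, 21, 3)              # reconstructed 3D hand keypoints
vis = torch.randn(8, 196, 1024)          # visual tokens from a vision encoder
seq = interleave(vis, tokenizer(kps))    # fed to the LLM alongside text tokens
print(seq.shape)                         # torch.Size([1576, 1024])
```

Interleaving, rather than appending all hand tokens at the end, keeps each intent token adjacent to the frame it describes, which is one plausible way to give the LLM the explicit spatial and temporal context the abstract mentions.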

Related benchmarks

Task                                 | Dataset            | Metric             | Result | Rank
Video Question Answering             | EgoSchema          | Accuracy           | 67.1   | 161
Video Question Answering             | MVBench            | Accuracy           | 73.2   | 42
Egocentric Visual Question Answering | EgoPointVQA (test) | Reference Accuracy | 75     | 19
Video Question Answering             | Video-MME          | Accuracy           | 64.6   | 14
Video Question Answering             | EgoBlind           | Accuracy           | 57.5   | 2

Other info

GitHub
