
Gaze-LLE: Gaze Target Estimation via Large-Scale Learned Encoders

About

We address the problem of gaze target estimation, which aims to predict where a person is looking in a scene. Predicting a person's gaze target requires reasoning both about the person's appearance and the contents of the scene. Prior works have developed increasingly complex, hand-crafted pipelines for gaze target estimation that carefully fuse features from separate scene encoders, head encoders, and auxiliary models for signals like depth and pose. Motivated by the success of general-purpose feature extractors on a variety of visual tasks, we propose Gaze-LLE, a novel transformer framework that streamlines gaze target estimation by leveraging features from a frozen DINOv2 encoder. We extract a single feature representation for the scene, and apply a person-specific positional prompt to decode gaze with a lightweight module. We demonstrate state-of-the-art performance across several gaze benchmarks and provide extensive analysis to validate our design choices. Our code is available at: http://github.com/fkryan/gazelle.
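The core idea in the abstract — one frozen scene feature map, a person-specific positional prompt added at the head location, and a lightweight decoder producing a gaze heatmap — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, shapes, the random stand-ins for the learned prompt and decoder weights, and the grid size are all assumptions.

```python
import numpy as np

def gaze_lle_sketch(scene_feats, head_bbox, grid=32):
    """Sketch of the Gaze-LLE decoding idea (shapes and names assumed).

    scene_feats: (grid, grid, d) feature map from a frozen encoder.
    head_bbox:   (x0, y0, x1, y1) head box in [0, 1] image coordinates.
    Returns a normalized gaze heatmap and the argmax gaze point.
    """
    d = scene_feats.shape[-1]
    rng = np.random.default_rng(0)

    # Person-specific positional prompt: a learned d-dim vector (random
    # stand-in here) added to the tokens inside the person's head box.
    prompt = rng.standard_normal(d)
    x0, y0, x1, y1 = head_bbox
    c0, r0 = int(x0 * grid), int(y0 * grid)
    c1 = max(c0 + 1, int(np.ceil(x1 * grid)))
    r1 = max(r0 + 1, int(np.ceil(y1 * grid)))
    prompted = scene_feats.copy()
    prompted[r0:r1, c0:c1] += prompt

    # Lightweight decoder stand-in: project each token to a scalar logit
    # and softmax-normalize into a heatmap over the scene grid. (The
    # paper uses a small transformer module; a linear head keeps the
    # sketch self-contained.)
    w = rng.standard_normal(d) / np.sqrt(d)
    logits = prompted @ w
    heat = np.exp(logits - logits.max())
    heat /= heat.sum()

    # Predicted gaze target = argmax cell center, in [0, 1] coordinates.
    r, c = np.unravel_index(np.argmax(heat), heat.shape)
    return heat, ((c + 0.5) / grid, (r + 0.5) / grid)
```

Because the encoder is frozen, only the prompt and the decoder would be trained; the same scene features can be reused to decode gaze for every person in the image by swapping in a different head box.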

Fiona Ryan, Ajay Bati, Sangmin Lee, Daniel Bolya, Judy Hoffman, James M. Rehg • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Gaze target estimation | GazeFollow | AUC | 0.958 | 45 |
| Gaze target estimation | VideoAttentionTarget | L2 Distance | 0.103 | 39 |
| Gaze Following | VideoAttentionTarget (test) | AUC | 0.937 | 20 |
| Gaze target estimation | ChildPlay (test) | AUC | 95.1 | 11 |
| Gaze target estimation | GazeFollow360 | Spherical Distance | 0.759 | 10 |
| Gaze target estimation | ChildPlay | L2 Distance | 0.101 | 5 |
| Gaze target estimation | EYEDIAP | AUC | 61.7 | 5 |
| Gaze target estimation | GOO-Real (test) | AUC | 90.1 | 4 |
| Joint Shared Attention Estimation and Group Detection | VideoCoAtt (test) | GroupAP (theta_IoU=0.5, theta_Dist=0.05) | 15.6 | 4 |
| Shared Attention Estimation | ChildPlay | GroupAP (IoU=0.5, Dist=0.05) | 5.8 | 4 |

(Showing 10 of 12 rows.)

Other info

Code: http://github.com/fkryan/gazelle