GazeFormer-MoE: Context-Aware Gaze Estimation via CLIP and MoE Transformer

About

We present a semantics-modulated, multi-scale Transformer for 3D gaze estimation. Our model conditions CLIP global features with learnable prototype banks (illumination, head pose, background, direction), fuses these prototype-enriched global vectors with CLIP patch tokens and high-resolution CNN tokens in a unified attention space, and replaces several FFN blocks with routed/shared Mixture-of-Experts layers to increase conditional capacity. Evaluated on MPIIFaceGaze, EYEDIAP, Gaze360, and ETH-XGaze, our model achieves new state-of-the-art angular errors of 2.49°, 3.22°, 10.16°, and 1.44°, demonstrating up to a 64% relative improvement over previously reported results. Ablations attribute the gains to prototype conditioning, cross-scale fusion, MoE, and hyperparameter choices. Our code is publicly available at https://github.com/AIPMLab/Gazeformer.
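To make the three ingredients above concrete, here is a minimal PyTorch sketch of prototype conditioning, cross-scale token fusion, and an MoE FFN. This is not the authors' implementation: all module names (PrototypeConditioner, MoEFFN, FusionBlock), the dimensions, the soft prototype mixing, and the top-1 routing scheme are illustrative assumptions.

```python
# Illustrative sketch only; see https://github.com/AIPMLab/Gazeformer for the real code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeConditioner(nn.Module):
    """Modulates a CLIP global feature with learnable prototype banks
    (e.g., illumination, head pose, background, direction)."""

    def __init__(self, dim: int, banks: int = 4, protos_per_bank: int = 8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(banks, protos_per_bank, dim))

    def forward(self, g: torch.Tensor) -> torch.Tensor:  # g: (B, dim)
        out = g
        for bank in self.prototypes:                      # bank: (P, dim)
            attn = F.softmax(g @ bank.t(), dim=-1)        # similarity to prototypes
            out = out + attn @ bank                       # soft prototype mixture
        return out


class MoEFFN(nn.Module):
    """FFN replaced by a shared expert plus top-1 routed experts (assumed routing)."""

    def __init__(self, dim: int, hidden: int, n_experts: int = 4):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                    nn.Linear(hidden, dim))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                          nn.Linear(hidden, dim))
            for _ in range(n_experts))
        self.router = nn.Linear(dim, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, dim)
        gates = F.softmax(self.router(x), dim=-1)         # (B, N, E)
        top_w, top_i = gates.max(dim=-1)                  # top-1 expert per token
        routed = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e
            if mask.any():
                routed[mask] = expert(x[mask])
        return self.shared(x) + top_w.unsqueeze(-1) * routed


class FusionBlock(nn.Module):
    """Self-attention over the concatenation of the prototype-enriched global
    token, CLIP patch tokens, and high-resolution CNN tokens."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = MoEFFN(dim, 4 * dim)

    def forward(self, g, clip_tokens, cnn_tokens):
        x = torch.cat([g.unsqueeze(1), clip_tokens, cnn_tokens], dim=1)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.ffn(self.norm2(x))
        return x[:, 0]                                    # fused global token


# Usage: predict a gaze direction (pitch, yaw) from dummy features.
if __name__ == "__main__":
    B, dim = 2, 512
    cond = PrototypeConditioner(dim)
    block = FusionBlock(dim)
    head = nn.Linear(dim, 2)                              # pitch/yaw regression head
    g = cond(torch.randn(B, dim))                         # CLIP global feature
    gaze = head(block(g, torch.randn(B, 196, dim), torch.randn(B, 49, dim)))
    print(gaze.shape)                                     # torch.Size([2, 2])
```

The full model presumably stacks several such blocks and swaps only some FFNs for MoE layers, as stated in the abstract; the single-block, top-1-routing version here is just the smallest runnable form of the idea.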

Xinyuan Zhao, Xianrui Chen, Ahmad Chaddad • 2026

Related benchmarks

Task              Dataset               Metric                      Result   Rank
Gaze Estimation   MPIIFaceGaze (test)   Gaze Error (degrees)        2.49     15
Gaze Estimation   EYEDIAP (test)        Mean Gaze Error (degrees)   3.22     15
Gaze Estimation   Gaze360 (test)        Angular Error (degrees)     10.16    10
Gaze Estimation   ETH-XGaze (test)      Gaze Error (degrees)        1.44     5
