GazeFormer-MoE: Context-Aware Gaze Estimation via CLIP and MoE Transformer
About
We present a semantics-modulated, multi-scale Transformer for 3D gaze estimation. Our model conditions CLIP global features with learnable prototype banks (illumination, head pose, background, direction), fuses these prototype-enriched global vectors with CLIP patch tokens and high-resolution CNN tokens in a unified attention space, and replaces several FFN blocks with routed/shared Mixture-of-Experts layers to increase conditional capacity. Evaluated on MPIIFaceGaze, EYEDIAP, Gaze360, and ETH-XGaze, our model achieves new state-of-the-art angular errors of 2.49°, 3.22°, 10.16°, and 1.44°, up to a 64% relative improvement over previously reported results. Ablations attribute the gains to prototype conditioning, cross-scale fusion, the MoE layers, and hyperparameter choices. Our code is publicly available at https://github.com/AIPMLab/Gazeformer.
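The routed/shared Mixture-of-Experts FFN described above can be sketched at a small scale. This is not the repository's implementation; all names (`moe_ffn`, `gate_W`, the expert closures) are hypothetical, and the sketch assumes a standard top-k router whose selected gate weights are renormalized, combined with one shared expert that processes every token:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_ffn(tokens, gate_W, experts, shared_expert, top_k=2):
    """Route each token to its top-k experts and add a shared expert.

    tokens:        (n_tokens, d) input token features
    gate_W:        (d, n_experts) router projection
    experts:       list of callables, each mapping (n, d) -> (n, d)
    shared_expert: callable applied to every token
    """
    probs = softmax(tokens @ gate_W)               # (n_tokens, n_experts)
    topk = np.argsort(-probs, axis=-1)[:, :top_k]  # top-k expert indices per token
    out = shared_expert(tokens)                    # shared expert sees all tokens
    for t in range(tokens.shape[0]):
        sel = topk[t]
        w = probs[t, sel] / probs[t, sel].sum()    # renormalize selected gates
        for e_idx, weight in zip(sel, w):
            out[t] += weight * experts[e_idx](tokens[t:t+1])[0]
    return out

# Toy usage with random linear-ReLU experts (illustrative sizes only).
rng = np.random.default_rng(0)
d, n_experts = 8, 4
tokens = rng.normal(size=(5, d))
gate_W = rng.normal(size=(d, n_experts))
experts = [(lambda x, W=rng.normal(size=(d, d)): np.maximum(x @ W, 0))
           for _ in range(n_experts)]
shared_W = rng.normal(size=(d, d))
y = moe_ffn(tokens, gate_W, experts, lambda x: np.maximum(x @ shared_W, 0))
print(y.shape)  # → (5, 8)
```

Because only `top_k` experts run per token, the layer adds parameters (conditional capacity) without a proportional increase in per-token compute, which is the usual motivation for MoE FFN blocks.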
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Gaze Estimation | MPIIFaceGaze (test) | Gaze Error (degrees) | 2.49 | 15 |
| Gaze Estimation | EYEDIAP (test) | Mean Gaze Error (degrees) | 3.22 | 15 |
| Gaze Estimation | Gaze360 (test) | Angular Error (degrees) | 10.16 | 10 |
| Gaze Estimation | ETH-XGaze (test) | Gaze Error (degrees) | 1.44 | 5 |