Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos
About
We introduce an approach for pre-training egocentric video models using large-scale third-person video datasets. Learning from purely egocentric data is limited by low dataset scale and diversity, while using purely exocentric (third-person) data introduces a large domain mismatch. Our idea is to discover latent signals in third-person video that are predictive of key egocentric-specific properties. Incorporating these signals as knowledge distillation losses during pre-training results in models that benefit both from the scale and diversity of third-person video data and from representations that capture salient egocentric properties. Our experiments show that our Ego-Exo framework can be seamlessly integrated into standard video models; it outperforms all baselines when fine-tuned for egocentric activity recognition, achieving state-of-the-art results on Charades-Ego and EPIC-Kitchens-100.
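A minimal sketch of how such auxiliary distillation losses could be combined with the standard supervised objective during third-person pre-training, assuming a PyTorch setup. The class name, the `lambda_distill` weight, the single egocentric-property head, and the KL-divergence form are illustrative assumptions, not the paper's exact heads or loss formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EgoExoPretrainLoss(nn.Module):
    """Combines the supervised action-classification loss on third-person
    clips with an auxiliary distillation loss that matches the student's
    egocentric-property head to soft pseudo-labels produced by a
    pre-trained teacher on the same clips (hypothetical formulation)."""

    def __init__(self, lambda_distill: float = 1.0):
        super().__init__()
        self.lambda_distill = lambda_distill

    def forward(self, action_logits, action_labels, ego_preds, ego_pseudo_labels):
        # Standard supervised objective on the exocentric dataset.
        cls_loss = F.cross_entropy(action_logits, action_labels)
        # Distillation: align the student's egocentric-property predictions
        # with the teacher's soft pseudo-labels (KL over softened outputs).
        distill_loss = F.kl_div(
            F.log_softmax(ego_preds, dim=-1),
            F.softmax(ego_pseudo_labels, dim=-1),
            reduction="batchmean",
        )
        return cls_loss + self.lambda_distill * distill_loss

# Example usage with random tensors (batch of 8, 400 action classes,
# 10-dim egocentric pseudo-label space):
loss_fn = EgoExoPretrainLoss(lambda_distill=0.5)
loss = loss_fn(
    torch.randn(8, 400), torch.randint(0, 400, (8,)),
    torch.randn(8, 10), torch.randn(8, 10),
)
```

In this sketch the distillation term is simply added to the classification loss with a scalar weight; the actual framework may use multiple egocentric cues, each with its own head and weight.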
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Classification | EPIC-Kitchens-100 | Top-1 Verb Accuracy | 67 | 22 |
| Action Recognition | Charades-Ego first-person (test) | mAP | 0.301 | 21 |
| Fine-grained Keystep Recognition | EgoExo4D v2 (val) | Ego Accuracy | 37.17 | 11 |
| Fine-grained Keystep Recognition | EgoExo4D v1 (val) | Ego Accuracy | 36.71 | 11 |