Anticipating Visual Representations from Unlabeled Video
About
Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is readily available unlabeled video. We present a framework that capitalizes on the temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels, yet are automatic to compute. We then apply recognition algorithms to our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.
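The pipeline above has two stages: a self-supervised regressor that maps the representation of the current frame to the representation of a future frame, followed by an ordinary recognizer applied to the *predicted* representation. The sketch below illustrates this with stand-ins everywhere: synthetic features in place of real network activations, a linear least-squares map in place of the deep regression network the paper trains, and a hypothetical nearest-centroid classifier in place of a trained action recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # feature dimensionality; illustrative stand-in for e.g. fc7 features

# --- "Unlabeled video": pairs (representation now, representation 1s later). ---
# We simulate a fixed temporal dynamic A_true purely to have data to fit.
A_true = rng.normal(size=(D, D)) / np.sqrt(D)
X_now = rng.normal(size=(500, D))
X_future = X_now @ A_true.T + 0.01 * rng.normal(size=(500, D))

# Stage 1: regress future representations from current ones.
# (The paper trains a deep network; linear least squares stands in here.)
W, *_ = np.linalg.lstsq(X_now, X_future, rcond=None)

def anticipate(feat_now: np.ndarray) -> np.ndarray:
    """Predict the visual representation one second in the future."""
    return feat_now @ W

# Stage 2: run recognition on the predicted representation.
# Hypothetical class centroids stand in for a trained action classifier.
centroids = {"open_door": rng.normal(size=D), "shake_hands": rng.normal(size=D)}

def classify(feat: np.ndarray) -> str:
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

frame_feat = rng.normal(size=D)       # representation of the current frame
future_feat = anticipate(frame_feat)  # anticipated representation at t + 1s
action = classify(future_feat)        # anticipated action label
```

The design point the sketch preserves is that recognition never sees future pixels: only the predicted representation is classified, which is what makes higher-level features a convenient prediction target compared with generating future frames.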
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Anticipation | Epic-Kitchen 55 (val) | -- | -- | 33 |
| Action Anticipation | EGTEA Gaze+ (val) | Top-5 Action Accuracy | 55.7 | 27 |
| Egocentric Action Anticipation | EPIC-KITCHENS (val) | Top-5 Action Accuracy @ 1.0s | 16.9 | 17 |
| Egocentric Action Anticipation | EPIC-KITCHENS (test) | Top-5 Action Accuracy @ 1s | 16.86 | 11 |
| Next Action Anticipation | Breakfast (test) | Accuracy | 8.1 | 11 |
| Action Anticipation | ActivityNet | Top-5 Acc (Ta=1.0s) | 52.39 | 10 |
| Action Anticipation | EGTEA Gaze+ | Top-5 Acc (Ta=1.0s) | 55.7 | 8 |
| Action Anticipation | EPIC-Kitchens S2 unseen (test) | Top-1 Acc (Verb) | 24.79 | 7 |
| Action Anticipation | EPIC-Kitchens S1 (Seen Environments) 1.0 (test) | Top-1 Acc (Verb) | 26.53 | 7 |
| Next Action Anticipation | 50Salads (test) | Accuracy | 6.2 | 6 |