
Anticipating Visual Representations from Unlabeled Video

About

Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.
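The two-stage idea in the abstract — regress the future frame's visual representation from the current frame, then run a recognizer on the prediction — can be sketched in NumPy. This is a minimal illustration only: it uses synthetic features and a linear predictor, whereas the paper trains deep networks on real CNN features, and every dimension, variable name, and classifier below is a placeholder assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the paper's deep features (e.g. the penultimate layer
# of a CNN applied to video frames); dimensions are assumptions.
D = 64        # feature dimensionality
N = 500       # number of (current frame, future frame) pairs

# Synthetic data: the future representation is a linear function of
# the current one plus noise, so a linear model can recover it.
W_true = rng.normal(size=(D, D)) / np.sqrt(D)
phi_now = rng.normal(size=(N, D))                      # current-frame features
phi_future = phi_now @ W_true + 0.01 * rng.normal(size=(N, D))

# Train the predictor by minimizing the L2 regression loss between
# predicted and actual future representations (the paper's objective,
# there parameterized by a deep network rather than one matrix).
W = np.zeros((D, D))
lr = 0.1
for _ in range(200):
    residual = phi_now @ W - phi_future
    W -= lr * (phi_now.T @ residual) / N

pred_future = phi_now @ W
mse = float(np.mean((pred_future - phi_future) ** 2))
print(f"future-feature regression MSE: {mse:.4f}")

# Recognition on the *predicted* representation: a toy nearest-centroid
# classifier over two hypothetical action classes.
centroids = rng.normal(size=(2, D))
dists = ((pred_future[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
labels = dists.argmin(axis=1)                          # anticipated action per clip
```

Because the representation is predicted rather than observed, any off-the-shelf recognizer trained on real features can be applied unchanged at the anticipation step, which is the practical appeal of regressing features instead of pixels.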

Carl Vondrick, Hamed Pirsiavash, Antonio Torralba • 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Anticipation | Epic-Kitchen 55 (val) | - | - | 33 |
| Action Anticipation | EGTEA Gaze+ (val) | Top-5 Action Accuracy | 55.7 | 27 |
| Egocentric Action Anticipation | EPIC-KITCHENS (val) | Top-5 Action Accuracy @ 1.0s | 16.9 | 17 |
| Egocentric Action Anticipation | EPIC-KITCHENS (test) | Top-5 Action Accuracy @ 1.0s | 16.86 | 11 |
| Next Action Anticipation | Breakfast (test) | Accuracy | 8.1 | 11 |
| Action Anticipation | ActivityNet | Top-5 Acc (Ta=1.0s) | 52.39 | 10 |
| Action Anticipation | EGTEA Gaze+ | Top-5 Acc (Ta=1.0s) | 55.7 | 8 |
| Action Anticipation | EPIC-Kitchens S2 unseen (test) | Top-1 Acc (Verb) | 24.79 | 7 |
| Action Anticipation | EPIC-Kitchens S1 (Seen Environments) 1.0 (test) | Top-1 Acc (Verb) | 26.53 | 7 |
| Next Action Anticipation | 50Salads (test) | Accuracy | 6.2 | 6 |

Showing 10 of 12 rows.
