Self-Regulated Learning for Egocentric Video Activity Anticipation
About
Future activity anticipation is a challenging problem in egocentric vision. The standard paradigm, recursive sequence prediction, suffers from error accumulation. To address this problem, we propose a simple and effective Self-Regulated Learning (SRL) framework, which consecutively regulates the intermediate representation so that it (a) emphasizes the novel information in the frame at the current time-stamp relative to previously observed content, and (b) reflects its correlation with previously observed frames. The former is achieved by minimizing a contrastive loss; the latter by a dynamic reweighting mechanism that attends to informative frames in the observed content via a similarity comparison between the feature of the current frame and those of observed frames. The learned final video representation is further enhanced by multi-task learning, which performs joint feature learning on the target activity labels and the automatically detected action and object class tokens. SRL sharply outperforms existing state-of-the-art methods in most cases on two egocentric video datasets and two third-person video datasets. Its effectiveness is further verified by the finding that the action and object concepts supporting the activity semantics can be accurately identified.
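To make the dynamic reweighting idea concrete, here is a minimal, hypothetical sketch (not the paper's exact formulation): the observed frames are weighted by a softmax over their cosine similarity to the current frame's feature, and then pooled into an attended summary of the history. The function name `reweight_observed` and the feature dimensions are illustrative assumptions.

```python
import numpy as np

def reweight_observed(current, observed):
    """Illustrative sketch of similarity-based dynamic reweighting.

    current:  (D,) feature of the current frame
    observed: (T, D) features of the T previously observed frames
    Returns the per-frame attention weights and the weighted summary.
    """
    # Cosine similarity between the current feature and each observed frame.
    cur = current / np.linalg.norm(current)
    obs = observed / np.linalg.norm(observed, axis=1, keepdims=True)
    sims = obs @ cur                              # (T,)
    # Softmax turns similarities into attention weights over frames.
    weights = np.exp(sims) / np.exp(sims).sum()   # (T,), sums to 1
    # Weighted sum pools the observed content, emphasizing similar frames.
    return weights, weights @ observed            # (T,), (D,)

rng = np.random.default_rng(0)
observed = rng.standard_normal((5, 8))            # 5 observed frames, 8-dim features
current = observed[3] + 0.01 * rng.standard_normal(8)  # nearly identical to frame 3
w, summary = reweight_observed(current, observed)
print(w.argmax())  # the frame most similar to the current one gets the top weight
```

In this toy setup the current frame is constructed to be close to observed frame 3, so that frame receives the largest attention weight; in the actual framework the similarity and pooling would operate on learned intermediate representations rather than raw features.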
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Action Anticipation | EPIC-KITCHENS unseen S2 (test) | Top-1 Acc (Verb): 27.42 | 47 |
| Action Anticipation | EPIC-KITCHENS-55 (val) | -- | 33 |
| Action Anticipation | EPIC-KITCHENS seen S1 (test) | Top-1 Acc (Verb): 34.89 | 27 |
| Dense Action Anticipation | 50 Salads (50S) | Top-1 Acc (tau_o=20%, tau_a=10%): 37.9 | 8 |
| Action Anticipation | EGTEA Gaze+ (average across 3 splits) | Top-5 Action Accuracy (1.0s): 70.7 | 5 |