
Future Transformer for Long-term Action Anticipation

About

The task of predicting future actions from a video is crucial for a real-world agent interacting with others. When anticipating actions in the distant future, we humans typically consider long-term relations over the whole sequence of actions, i.e., not only observed actions in the past but also potential actions in the future. In a similar spirit, we propose an end-to-end attention model for action anticipation, dubbed Future Transformer (FUTR), that leverages global attention over all input frames and output tokens to predict a minutes-long sequence of future actions. Unlike previous autoregressive models, the proposed method learns to predict the whole sequence of future actions via parallel decoding, enabling more accurate and faster inference for long-term anticipation. We evaluate our method on two standard benchmarks for long-term action anticipation, Breakfast and 50 Salads, achieving state-of-the-art results.
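The key contrast the abstract draws is between autoregressive decoding (predicting one future action at a time, feeding each prediction back in) and parallel decoding (predicting the whole future sequence in one forward pass from a fixed set of query tokens). The toy sketch below illustrates that control-flow difference only; the `step_fn` and `batch_fn` callables are hypothetical stand-ins, not the actual FUTR network.

```python
# Toy contrast between autoregressive and parallel (FUTR-style) decoding.
# step_fn / batch_fn are hypothetical stand-ins for a learned model.

def autoregressive_decode(step_fn, start_token, num_steps):
    """Predict one future action at a time; each step sees all previous ones,
    so the num_steps model calls must run sequentially."""
    sequence = [start_token]
    for _ in range(num_steps):
        sequence.append(step_fn(sequence))
    return sequence[1:]

def parallel_decode(batch_fn, num_steps):
    """Predict the whole future sequence in a single forward pass, as FUTR
    does with a fixed set of learned action-query tokens."""
    queries = list(range(num_steps))  # stand-in for learned query tokens
    return batch_fn(queries)          # one call yields all future actions

# Dummy models: each "predicted action" is just the previous label plus one.
ar_out = autoregressive_decode(lambda seq: seq[-1] + 1, 0, 3)   # [1, 2, 3]
par_out = parallel_decode(lambda qs: [q + 1 for q in qs], 3)    # [1, 2, 3]
```

Both paths produce the same sequence here, but the parallel variant needs a single model call instead of one per step, which is where the faster inference claimed above comes from.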

Dayoung Gong, Joonseok Lee, Manjin Kim, Seong Jong Ha, Minsu Cho • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Anticipation | Breakfast | MoC Accuracy | 32.27 | 64 |
| Action Anticipation | DARai (Coarse) | MoC Accuracy | 40.71 | 64 |
| Long-term Action Anticipation | 50 Salads | MoC Accuracy | 39.55 | 56 |
| Action Anticipation | UTKinects | MoC Accuracy | 29.63 | 56 |
| Action Anticipation | NTURGBD | MoC Accuracy | 20.13 | 56 |
| Action Anticipation | DARai Fine-grained | MoC Accuracy | 0.1859 | 56 |
| Action Anticipation | Epic-Kitchen 55 (val) | Top-1 Acc | 12.3 | 33 |
| Long-term Action Anticipation | 50 Salads (test) | MoC (alpha=0.2, beta=0.1) | 39.55 | 10 |
| Long-term Action Anticipation | Breakfast (test) | MoC (alpha=0.2, beta=0.1) | 27.7 | 9 |
| Long-term Action Anticipation | 50 Salads (alpha=0.2) | Anticipation Score (alpha=0.2) @ Horizon 0.01 | 51.16 | 3 |
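Several rows above report MoC (mean-over-classes) accuracy under an alpha/beta protocol: the model observes the first alpha fraction of a video and predicts frame-wise action labels for the next beta fraction, and accuracy is averaged per class so rare actions weigh as much as frequent ones. A minimal sketch of that metric, with made-up labels (the helper names are illustrative, not from the benchmark code):

```python
def moc_accuracy(pred, gt):
    """Mean-over-Classes accuracy: frame-wise accuracy is computed separately
    for each ground-truth class, then averaged over classes."""
    per_class = []
    for c in set(gt):
        idx = [i for i, g in enumerate(gt) if g == c]
        correct = sum(pred[i] == gt[i] for i in idx)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(per_class)

def anticipation_window(frames, alpha, beta):
    """Split a video's frame labels: observe the first alpha fraction,
    evaluate predictions on the following beta fraction."""
    t = len(frames)
    obs_end = int(alpha * t)
    return frames[:obs_end], frames[obs_end:obs_end + int(beta * t)]

# Example: class 'a' is fully correct, class 'b' half correct -> MoC = 0.75,
# even though plain frame accuracy would be 2/3.
score = moc_accuracy(['a', 'a', 'b'], ['a', 'b', 'b'])
```

Averaging per class is why MoC numbers can diverge noticeably from plain frame accuracy on datasets with imbalanced action distributions.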

Other info

Code
