
TTPP: Temporal Transformer with Progressive Prediction for Efficient Action Anticipation

About

Video action anticipation aims to predict future action categories from observed frames. Current state-of-the-art approaches mainly resort to recurrent neural networks to encode history information into hidden states, and predict future actions from the hidden representations. It is well known that the recurrent pipeline is inefficient at capturing long-term information, which may limit its performance on the prediction task. To address this problem, this paper proposes a simple yet efficient Temporal Transformer with Progressive Prediction (TTPP) framework, which repurposes a Transformer-style architecture to aggregate observed features, and then leverages a light-weight network to progressively predict future features and actions. Specifically, predicted features along with predicted probabilities are accumulated into the inputs of the subsequent prediction step. We evaluate our approach on three action datasets, namely TVSeries, THUMOS-14, and TV-Human-Interaction. Additionally, we conduct a comprehensive study of several popular aggregation and prediction strategies. Extensive results show that TTPP not only outperforms state-of-the-art methods but is also more efficient.
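The progressive prediction loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the aggregator is reduced to a mean-pool stand-in for the Transformer encoder, the predictor is a single random linear map, and all names (`aggregate`, `W_feat`, `W_cls`) are hypothetical. It only shows how each step's predicted feature and class probabilities are fed back into the input of the next step.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, STEPS = 64, 10, 3          # feature dim, action classes, prediction steps

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-in for the Transformer aggregator: mean-pool the observed features.
def aggregate(observed):          # observed: (T, D)
    return observed.mean(axis=0)  # (D,)

# Light-weight predictor (random weights here, purely illustrative).
W_feat = rng.standard_normal((D + C, D)) * 0.1   # maps [feature; probs] -> next feature
W_cls  = rng.standard_normal((D, C)) * 0.1       # maps a feature -> class logits

observed = rng.standard_normal((8, D))           # 8 observed frame features
state = aggregate(observed)
probs = softmax(state @ W_cls)

future_probs = []
for _ in range(STEPS):
    # Accumulate the predicted feature and probabilities into the next input.
    inp = np.concatenate([state, probs])         # (D + C,)
    state = np.tanh(inp @ W_feat)                # predicted future feature
    probs = softmax(state @ W_cls)               # predicted action probabilities
    future_probs.append(probs)
```

Each iteration anticipates one step further into the future, so `future_probs` holds one class distribution per anticipated step.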

Wen Wang, Xiaojiang Peng, Yanzhou Su, Yu Qiao, Jian Cheng • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Action Anticipation | TVSeries (test) | mcAP 77.9 | 22 |
| Action Anticipation | THUMOS-14 (test) | mAP 40.9 | 14 |
