
MASTAF: A Model-Agnostic Spatio-Temporal Attention Fusion Network for Few-shot Video Classification

About

We propose MASTAF, a Model-Agnostic Spatio-Temporal Attention Fusion network for few-shot video classification. MASTAF takes as input a general spatio-temporal video representation, e.g., from a 2D CNN, 3D CNN, or Video Transformer. To make the most of such representations, we use self- and cross-attention models to highlight the critical spatio-temporal regions, increasing inter-class variation while decreasing intra-class variation. Last, MASTAF applies a lightweight fusion network and a nearest-neighbor classifier to classify each query video. We demonstrate that MASTAF improves the state-of-the-art performance on three few-shot video classification benchmarks (UCF101, HMDB51, and Something-Something-V2), achieving 91.6%, 69.5%, and 60.7% five-way one-shot accuracy, respectively.
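The final step described above, classifying a query against the (attention-fused) support embeddings with a nearest-neighbor rule, can be sketched as a cosine-similarity nearest-prototype classifier. This is an illustrative sketch, not the paper's implementation: the function name, array shapes, and the use of NumPy are assumptions, and the mean-pooled prototype stands in for MASTAF's fusion network output.

```python
import numpy as np

def nearest_prototype_classify(support, query):
    """Predict a class for each query embedding by cosine similarity
    to per-class prototypes (illustrative stand-in for MASTAF's
    nearest-neighbor classifier over fused features).

    support: (n_way, n_shot, dim) support-set embeddings, one class per row
    query:   (n_query, dim) query embeddings
    Returns: (n_query,) predicted class indices
    """
    # Class prototype = mean of that class's support embeddings
    # (in MASTAF this role is played by the fused representation).
    prototypes = support.mean(axis=1)                              # (n_way, dim)
    # L2-normalise so a dot product equals cosine similarity.
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    sims = q @ p.T                                                 # (n_query, n_way)
    return sims.argmax(axis=1)

# Toy 5-way 1-shot episode: orthogonal class embeddings, one noisy query.
support = np.eye(5)[:, None, :]                  # 5 classes, 1 shot, dim=5
query = np.array([[0.1, 0.9, 0.0, 0.0, 0.0]])    # closest to class 1
print(nearest_prototype_classify(support, query))  # → [1]
```

In the five-way one-shot setting reported in the benchmarks below, each prototype is built from a single support video per class, so classification reduces to finding the most similar support embedding.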

Rex Liu, Huanle Zhang, Hamed Pirsiavash, Xin Liu • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | SSv2 Few-shot | Top-1 Acc (5-way 1-shot) | 60.7 | 42 |
| Few-shot Action Recognition | HMDB | -- | -- | 21 |
| Few-shot Action Recognition | UCF | Accuracy (5-way 1-shot) | 91.6 | 9 |
