
TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition

About

In this paper we propose a novel Temporal Attentive Relation Network (TARN) for the problems of few-shot and zero-shot action recognition. At the heart of our network is a meta-learning approach that learns to compare representations of variable temporal length, that is, either two videos of different lengths (in the case of few-shot action recognition) or a video and a semantic representation such as a word vector (in the case of zero-shot action recognition). In contrast to other works on few-shot and zero-shot action recognition, we a) utilise attention mechanisms to perform temporal alignment, and b) learn a deep distance measure on the aligned representations at the video-segment level. We adopt an episode-based training scheme and train our network in an end-to-end manner. The proposed method requires neither fine-tuning in the target domain nor maintaining additional representations, as is the case with memory networks. Experimental results show that the proposed architecture outperforms the state of the art in few-shot action recognition, and achieves competitive results in zero-shot action recognition.
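The align-then-compare idea in the abstract can be sketched in a few lines of NumPy. Everything here is an illustrative assumption rather than the paper's actual layers: the dot-product attention stands in for the paper's trained attention mechanism, and the single linear scoring vector stands in for the learned deep distance.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_align(query, support):
    """Align support segments to the query's temporal length via attention.
    query: (n, d), support: (m, d) -- segment-level features of two
    videos with different temporal lengths n and m."""
    scores = query @ support.T          # (n, m) pairwise similarities
    weights = softmax(scores, axis=1)   # attention over support segments
    return weights @ support            # (n, d): one aligned vector per query segment

def segment_distance(query, aligned, w):
    """Toy stand-in for the learned deep distance: score each aligned
    segment pair, then average over time to get a relation score."""
    diff = np.abs(query - aligned)      # (n, d) per-segment comparison
    return float((diff @ w).mean())     # scalar relation score

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=(5, d))   # query video: 5 segments
s = rng.normal(size=(7, d))   # support video: 7 segments
aligned = attentive_align(q, s)
score = segment_distance(q, aligned, rng.normal(size=d))
print(aligned.shape, score)   # aligned always has the query's temporal length
```

Because the support video is re-expressed at the query's length before comparison, the same comparator also works in the zero-shot case, where the "support" is a single semantic embedding rather than a sequence of video segments.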

Mina Bishay, Georgios Zoumpourlis, Ioannis Patras • 2019

Related benchmarks

Task                           | Dataset                    | Metric            | Result | Rank
Action Recognition             | Kinetics                   | Accuracy (5-shot) | 80.66  | 47
Few-shot Action Recognition    | Kinetics (meta-test)       | Accuracy          | 78.5   | 46
Video Recognition              | Kinetics (test)            | Accuracy          | 80.7   | 42
Video Action Recognition       | Kinetics                   | Accuracy          | 78.5   | 23
Action Recognition             | UCF101 half classes (test) | Accuracy          | 19     | 18
Zero-Shot Video Classification | UCF                        | Top-1 Accuracy    | 23.2   | 16
Action Recognition             | HMDB51 half test classes   | Accuracy          | 19.5   | 11
Zero-Shot Video Classification | HMDB                       | Top-1 Accuracy    | 19.5   | 11
