
Leveraging Temporal Contextualization for Video Action Recognition

About

We propose a novel framework for video understanding, called Temporally Contextualized CLIP (TC-CLIP), which leverages essential temporal information through global interactions in the spatio-temporal domain of a video. Specifically, we introduce Temporal Contextualization (TC), a layer-wise temporal information infusion mechanism for videos that 1) extracts core information from each frame, 2) connects relevant information across frames and summarizes it into context tokens, and 3) leverages the context tokens during feature encoding. Furthermore, the Video-conditional Prompting (VP) module processes context tokens to generate informative prompts in the text modality. Extensive experiments on zero-shot, few-shot, base-to-novel, and fully-supervised action recognition validate the effectiveness of our model, and ablation studies for TC and VP support our design choices. Our project page with the source code is available at https://github.com/naver-ai/tc-clip.
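As a reading aid, below is a minimal PyTorch sketch of the three TC steps: per-frame core-token selection, cross-frame summarization into context tokens, and context-aware feature encoding. Every name and shape in it (the TemporalContextualization module, the top-k saliency scoring, num_core, num_context) is an illustrative assumption rather than the authors' implementation; refer to the linked repository for the actual code.

```python
# Toy sketch of Temporal Contextualization (TC). All module names, shapes,
# the top-k saliency heuristic, and hyperparameters are assumptions made
# for exposition, not the official TC-CLIP implementation.
import torch
import torch.nn as nn


class TemporalContextualization(nn.Module):
    """Illustrative TC layer: (1) pick informative tokens per frame,
    (2) summarize them across frames into context tokens, and
    (3) let every frame attend to those context tokens."""

    def __init__(self, dim: int, num_core: int = 4, num_context: int = 8):
        super().__init__()
        self.num_core = num_core          # core tokens kept per frame (assumed)
        self.score = nn.Linear(dim, 1)    # per-token saliency score (assumed)
        # learnable queries that summarize core tokens into context tokens
        self.context_query = nn.Parameter(torch.randn(num_context, dim))
        self.summarize = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.infuse = nn.MultiheadAttention(dim, 4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, D) patch tokens for T frames with N tokens each
        B, T, N, D = x.shape
        # 1) extract core information from each frame: keep top-k salient tokens
        scores = self.score(x).squeeze(-1)                      # (B, T, N)
        idx = scores.topk(self.num_core, dim=-1).indices        # (B, T, k)
        core = torch.gather(x, 2, idx.unsqueeze(-1).expand(-1, -1, -1, D))
        core = core.reshape(B, T * self.num_core, D)            # pool over time
        # 2) connect core tokens across frames, summarizing into context tokens
        q = self.context_query.unsqueeze(0).expand(B, -1, -1)   # (B, C, D)
        context, _ = self.summarize(q, core, core)              # (B, C, D)
        # 3) leverage context tokens during per-frame feature encoding
        frames = x.reshape(B * T, N, D)
        ctx = context.repeat_interleave(T, dim=0)               # (B*T, C, D)
        out, _ = self.infuse(frames, ctx, ctx)
        return (frames + out).reshape(B, T, N, D)               # residual fuse


if __name__ == "__main__":
    tc = TemporalContextualization(dim=64)
    clips = torch.randn(2, 8, 49, 64)   # 2 clips, 8 frames, 7x7 patch tokens
    print(tc(clips).shape)              # torch.Size([2, 8, 49, 64])
```

The residual fusion in step 3 shows one plausible reading of "layer-wise temporal information infusion": the context pathway can slot into each encoder block without disturbing the per-frame representation.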

Minji Kim, Dongyoon Han, Taekyung Kim, Bohyung Han • 2024

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Action Recognition | Something-Something v2 (test) | -- | -- | 333 |
| Action Recognition | UCF101 (test) | Accuracy | 97.3 | 307 |
| Action Recognition | HMDB51 (test) | Accuracy | 73.0 | 249 |
| Action Recognition | Kinetics-600 (test) | Top-1 Accuracy | 78.1 | 84 |
| Base-to-New Generalization | UCF101 | Base Accuracy | 95.5 | 57 |
| Action Recognition | SSv2 Few-shot | -- | -- | 42 |
| Zero-shot Action Recognition | UCF101 (test) | Accuracy | 88.9 | 33 |
| Action Recognition | HMDB-51 Few-shot | Top-1 Accuracy | 68.8 | 32 |
| Action Recognition | UCF-101 Few-shot | Top-1 Accuracy | 94.6 | 30 |
| Zero-shot Action Recognition | HMDB51 (test) | Accuracy | 57.1 | 25 |

Showing 10 of 18 rows; all accuracies in %.

Other info

Code: https://github.com/naver-ai/tc-clip
