
Enhancing Video Transformers for Action Understanding with VLM-aided Training

About

Owing to their ability to extract relevant spatio-temporal video embeddings, Vision Transformers (ViTs) are currently the best performing models in video action understanding. However, their generalization over domains or datasets is somewhat limited. In contrast, Visual Language Models (VLMs) have demonstrated exceptional generalization performance, but are currently unable to process videos. Consequently, they cannot extract spatio-temporal patterns that are crucial for action understanding. In this paper, we propose the Four-tiered Prompts (FTP) framework that takes advantage of the complementary strengths of ViTs and VLMs. We retain ViTs' strong spatio-temporal representation ability but improve the visual encodings to be more comprehensive and general by aligning them with VLM outputs. The FTP framework adds four feature processors that focus on specific aspects of human action in videos: action category, action components, action description, and context information. The VLMs are only employed during training, and inference incurs a minimal computation cost. Our approach consistently yields state-of-the-art performance. For instance, we achieve remarkable top-1 accuracy of 93.8% on Kinetics-400 and 83.4% on Something-Something V2, surpassing VideoMAEv2 by 2.8% and 2.6%, respectively.
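The abstract describes four feature processors that refine the ViT's video embedding, each aligned with a frozen VLM's output for one aspect of the action (category, components, description, context) during training only. A minimal sketch of this idea is below; all names (`FTPHead`, the aspect labels, the cosine alignment loss, and the dimensions) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FTPHead(nn.Module):
    """Sketch of an FTP-style head: four aspect-specific feature processors
    on top of a pooled ViT video embedding. Hypothetical, not the paper's code."""

    # One processor per prompt aspect named in the abstract (assumed labels)
    ASPECTS = ["action_category", "action_components", "action_description", "context"]

    def __init__(self, vit_dim=768, vlm_dim=512, num_classes=400):
        super().__init__()
        # Each processor projects the ViT feature into the VLM embedding space
        self.processors = nn.ModuleDict(
            {a: nn.Linear(vit_dim, vlm_dim) for a in self.ASPECTS}
        )
        # Classify from the concatenation of all four refined features
        self.classifier = nn.Linear(vlm_dim * len(self.ASPECTS), num_classes)

    def forward(self, vit_feat, vlm_targets=None):
        # vit_feat: (B, vit_dim) pooled spatio-temporal embedding from the ViT
        outs = [self.processors[a](vit_feat) for a in self.ASPECTS]
        logits = self.classifier(torch.cat(outs, dim=-1))
        if vlm_targets is not None:
            # Training only: align each processor with the frozen VLM's
            # embedding of the corresponding textual description (assumed
            # cosine-similarity alignment loss)
            align_loss = sum(
                1 - F.cosine_similarity(o, vlm_targets[a], dim=-1).mean()
                for o, a in zip(outs, self.ASPECTS)
            )
            return logits, align_loss
        # Inference: the VLM is not needed, so overhead is just four linear layers
        return logits
```

At inference, `vlm_targets` is omitted, which matches the claim that the VLMs are only employed during training and inference incurs a minimal computation cost.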

Hui Lu, Hu Jian, Ronald Poppe, Albert Ali Salah • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | Something-Something v2 (val) | Top-1 Accuracy | 79.8 | 535 |
| Video Action Recognition | Kinetics 400 (val) | Top-1 Acc | 94.3 | 151 |
| Action Recognition | UCF-101 | Top-1 Acc | 99.7 | 147 |
| Action Detection | AVA v2.2 (val) | mAP | 46.2 | 99 |
| Video Classification | Kinetics-600 (val) | Accuracy | 94.4 | 84 |
