ZEETAD: Adapting Pretrained Vision-Language Model for Zero-Shot End-to-End Temporal Action Detection

About

Temporal action detection (TAD) involves localizing and classifying action instances within untrimmed videos. While standard TAD follows a fully supervised, closed-set setting with large training data, recent zero-shot TAD methods showcase a promising open-set setting by leveraging large-scale contrastive vision-language (ViL) pretrained models. However, existing zero-shot TAD methods struggle to properly construct the strong relationship between the two interdependent tasks of localization and classification, and to adapt the ViL model to video understanding. In this work, we present ZEETAD, featuring two modules: dual-localization and zero-shot proposal classification. The former is a Transformer-based module that detects action events while selectively collecting crucial semantic embeddings for later recognition. The latter, a CLIP-based module, generates semantic embeddings from text and frame inputs for each temporal unit. Additionally, we enhance discriminative capability on unseen classes by minimally updating the frozen CLIP encoder with lightweight adapters. Extensive experiments on the THUMOS14 and ActivityNet-1.3 datasets demonstrate our approach's superior performance in zero-shot TAD and effective knowledge transfer from ViL models to unseen action categories.
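To make the two modules above concrete, here is a minimal sketch of the core ideas: a lightweight bottleneck adapter applied residually to a frozen encoder's output, and zero-shot proposal classification via cosine similarity between a frame embedding and class-name text embeddings. This is an illustrative reconstruction, not ZEETAD's actual implementation; all names, dimensions, and the random toy embeddings below are hypothetical.

```python
import numpy as np

def adapter(x, W_down, W_up):
    # Hypothetical lightweight bottleneck adapter: down-project, ReLU,
    # up-project, added residually so only these small weights are trained
    # while the CLIP encoder producing x stays frozen.
    h = np.maximum(x @ W_down, 0.0)
    return x + h @ W_up

def zero_shot_classify(frame_emb, text_embs):
    # Cosine similarity between one proposal's frame embedding and each
    # class-name text embedding, turned into class probabilities via softmax.
    f = frame_emb / np.linalg.norm(frame_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = t @ f
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy example with random embeddings (embedding dim D=8, 3 unseen classes,
# adapter bottleneck B=2) standing in for real CLIP outputs.
rng = np.random.default_rng(0)
D, B = 8, 2
frame = rng.normal(size=D)              # one temporal unit's frame embedding
texts = rng.normal(size=(3, D))         # text embeddings of 3 class prompts
W_down = rng.normal(size=(D, B)) * 0.1  # adapter weights (trainable)
W_up = rng.normal(size=(B, D)) * 0.1
probs = zero_shot_classify(adapter(frame, W_down, W_up), texts)
```

Because classification reduces to similarity against text embeddings, new action categories can be recognized at inference time simply by encoding their class names, which is what enables the open-set, zero-shot behavior.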

Thinh Phan, Khoa Vo, Duy Le, Gianfranco Doretto, Donald Adjeroh, Ngan Le • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Temporal Action Detection | THUMOS14 (75% Seen / 25% Unseen) | mAP@0.3 | 61.4 | 1 |
| Temporal Action Detection | THUMOS14 (50% Seen / 50% Unseen) | mAP@0.3 | 45.2 | 1 |
| Temporal Action Detection | ActivityNet v1.3 (50% Seen / 50% Unseen) | mAP@0.5 | 39.2 | 1 |
| Temporal Action Detection | ActivityNet v1.3 (75% Seen / 25% Unseen) | mAP@0.5 | 51.0 | 1 |
