
vid-TLDR: Training Free Token merging for Light-weight Video Transformer

About

Video Transformers have become the prevalent solution for various video downstream tasks with superior expressive power and flexibility. However, these video transformers suffer from heavy computational costs induced by the massive number of tokens across the entire video frames, which has been the major barrier to training the model. Further, the patches irrelevant to the main contents, e.g., backgrounds, degrade the generalization performance of models. To tackle these issues, we propose training free token merging for lightweight video Transformer (vid-TLDR) that aims to enhance the efficiency of video Transformers by merging the background tokens without additional training. For vid-TLDR, we introduce a novel approach to capture the salient regions in videos only with the attention map. Further, we introduce the saliency-aware token merging strategy by dropping the background tokens and sharpening the object scores. Our experiments show that vid-TLDR significantly mitigates the computational complexity of video Transformers while achieving competitive performance compared to the base model without vid-TLDR. Code is available at https://github.com/mlvlab/vid-TLDR.
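The abstract describes two ingredients: scoring token saliency using only the attention map, then dropping low-saliency background tokens and merging their content into the remaining tokens. The sketch below illustrates that idea in NumPy; the exact scoring and merging rules here (mean received attention with squared sharpening, cosine-nearest merging) are illustrative assumptions, not the paper's precise formulation — see the linked repository for the real implementation.

```python
import numpy as np

def attention_saliency(attn):
    """Estimate per-token saliency from a softmax attention map.

    attn: array of shape (heads, N, N). A token that receives more
    attention from other tokens is treated as more salient (foreground).
    This scoring rule is an assumption for illustration.
    """
    # Average attention each token *receives*, over heads and queries.
    scores = attn.mean(axis=0).mean(axis=0)          # shape (N,)
    # Sharpen the distribution so foreground and background separate.
    scores = scores ** 2
    return scores / scores.sum()

def drop_and_merge(tokens, scores, keep_ratio=0.5):
    """Keep the top-scoring tokens; fold each dropped (background) token
    into its most similar kept token by cosine similarity, averaging."""
    n = tokens.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    order = np.argsort(scores)[::-1]                 # high to low saliency
    keep_idx, drop_idx = order[:n_keep], order[n_keep:]
    kept = tokens[keep_idx].astype(float).copy()
    if drop_idx.size:
        a = tokens[drop_idx] / np.linalg.norm(tokens[drop_idx], axis=1, keepdims=True)
        b = kept / np.linalg.norm(kept, axis=1, keepdims=True)
        nearest = (a @ b.T).argmax(axis=1)           # nearest kept token
        counts = np.ones(n_keep)
        for d, k in zip(drop_idx, nearest):
            kept[k] += tokens[d]
            counts[k] += 1
        kept /= counts[:, None]                      # running average
    return kept
```

Because both steps use quantities the transformer already computes (attention maps and token embeddings), this kind of reduction needs no extra parameters or training, which is the efficiency argument the paper makes.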

Joonmyung Choi, Sanghyeok Lee, Jaewon Chu, Minhyuk Choi, Hyunwoo J. Kim • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Question Answering | MSRVTT-QA (test) | Accuracy | 47 | 371 |
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.723 | 360 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 58.5 | 313 |
| Video Question Answering | MSVD-QA (test) | Accuracy | 54.9 | 274 |
| Text-to-Video Retrieval | MSVD | R@1 | 57.9 | 218 |
| Text-to-Video Retrieval | ActivityNet | R@1 | 66.7 | 197 |
| Video-to-Text Retrieval | DiDeMo | R@1 | 65.2 | 108 |
| Text-to-Video Retrieval | MSRVTT | R@1 | 58.1 | 75 |
| Text-to-Video Retrieval | ActivityNet (ANET) | Recall@1 | 41.8 | 21 |
| Video-Text Retrieval | MSRVTT | GFLOPs | 44.7 | 18 |
(Showing 10 of 12 benchmark rows.)
