
Language Repository for Long Video Understanding

About

Language has become a prominent modality in computer vision with the rise of LLMs. Despite supporting long context lengths, their effectiveness in handling long-term information gradually declines with input length. This becomes critical, especially in applications such as long-form video understanding. In this paper, we introduce a Language Repository (LangRepo) for LLMs that maintains concise and structured information as an interpretable (i.e., all-textual) representation. Our repository is updated iteratively based on multi-scale video chunks. We introduce write and read operations that focus on pruning redundancies in text and extracting information at various temporal scales. The proposed framework is evaluated on zero-shot visual question-answering benchmarks including EgoSchema, NExT-QA, IntentQA and NExT-GQA, showing state-of-the-art performance at its scale. Our code is available at https://github.com/kkahatapitiya/LangRepo.
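To make the repository idea above concrete, here is a minimal, hypothetical sketch of a text-only repository with a write operation that prunes near-duplicate captions and a read operation that regroups entries at a chosen temporal scale. All class and method names, the similarity heuristic, and the chunk-window grouping are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of a text-based "language repository":
# write() prunes redundant captions, read() aggregates them per temporal scale.
# Names and logic are assumptions for exposition, not the paper's actual API.
from difflib import SequenceMatcher


class LanguageRepository:
    def __init__(self, similarity_threshold=0.8):
        self.threshold = similarity_threshold
        self.entries = []  # list of (chunk_index, caption) pairs

    def write(self, chunk_index, captions):
        """Store captions for one video chunk, skipping near-duplicates."""
        for cap in captions:
            is_redundant = any(
                SequenceMatcher(None, cap, old).ratio() >= self.threshold
                for _, old in self.entries
            )
            if not is_redundant:
                self.entries.append((chunk_index, cap))

    def read(self, scale=1):
        """Group stored captions into windows spanning `scale` chunks each."""
        windows = {}
        for idx, cap in self.entries:
            windows.setdefault(idx // scale, []).append(cap)
        return [" ".join(caps) for _, caps in sorted(windows.items())]


repo = LanguageRepository()
repo.write(0, ["a person opens a drawer", "a person opens the drawer"])
repo.write(1, ["they take out a knife"])
print(repo.read(scale=1))  # fine scale: one entry per chunk, duplicate pruned
print(repo.read(scale=2))  # coarse scale: one merged window over both chunks
```

The fine-scale read keeps per-chunk detail while the coarse-scale read yields a compact summary spanning multiple chunks, mirroring how multi-scale reads could serve questions about short actions versus long-horizon intent.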

Kumara Kahatapitiya, Kanchana Ranasinghe, Jongwoo Park, Michael S. Ryoo • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Question Answering | EgoSchema (Full) | Accuracy | 41.2 | 193 |
| Video Question Answering | NExT-QA (val) | Overall Acc | 60.9 | 176 |
| Video Question Answering | NExT-QA Multi-choice | Accuracy | 60.9 | 102 |
| Video Question Answering | EgoSchema (test) | Accuracy | 41.2 | 80 |
| Video Question Answering | EgoSchema subset | Accuracy | 66.2 | 73 |
| Video Question Answering | EgoSchema 500-question subset | Accuracy | 66.2 | 50 |
| Long-form Video Understanding | EgoSchema | Accuracy | 41.2 | 38 |
| Video Question Answering | NExT-GQA (test) | Acc@GQA | 17.1 | 29 |
| Grounded Video Question Answering | NExT-GQA | mIoU | 18.5 | 28 |
| Video Question Answering | EgoSchema 5031 videos (test) | Top-1 Accuracy | 41.2 | 26 |

Showing 10 of 16 rows.

Other info

Code

https://github.com/kkahatapitiya/LangRepo