Understanding Long Videos with Multimodal Language Models

About

Large Language Models (LLMs) have enabled recent LLM-based approaches to achieve excellent performance on long-video understanding benchmarks. We investigate how the extensive world knowledge and strong reasoning skills of the underlying LLMs influence this performance. Surprisingly, we discover that LLM-based approaches can achieve good accuracy on long-video tasks with limited video information, sometimes even with no video-specific information at all. Building on this, we explore injecting video-specific information into an LLM-based framework. We utilize off-the-shelf vision tools to extract three object-centric information modalities from videos, and then leverage natural language as a medium for fusing this information. Our resulting Multimodal Video Understanding (MVU) framework demonstrates state-of-the-art performance across multiple video understanding benchmarks. Strong performance on robotics-domain tasks further establishes its generality. Code: https://github.com/kahnchana/mvu
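The fusion step described above, using natural language as the medium for combining video-derived information, can be sketched as follows. This is a minimal illustration, not the MVU implementation: the function name, prompt wording, and the hand-written detections standing in for off-the-shelf vision-tool output are all hypothetical.

```python
# Hypothetical sketch: per-frame object-centric information is rendered
# as plain text and combined with the question into a single prompt that
# an LLM could answer. Names and data here are illustrative only.

def build_fusion_prompt(objects_per_frame, question, choices):
    """Render object-centric video information as a natural-language prompt."""
    lines = ["You are answering a question about a video."]
    for t, objects in enumerate(objects_per_frame):
        lines.append(f"Frame {t}: visible objects are {', '.join(objects)}.")
    lines.append(f"Question: {question}")
    for label, choice in zip("ABCDE", choices):
        lines.append(f"{label}. {choice}")
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

# Toy example: three frames of detections from an imagined kitchen clip.
prompt = build_fusion_prompt(
    objects_per_frame=[
        ["person", "kettle"],
        ["person", "kettle", "cup"],
        ["person", "cup"],
    ],
    question="What is the person most likely doing?",
    choices=["Making a drink", "Repairing a bicycle", "Reading a book"],
)
print(prompt)
```

In practice, the resulting prompt would be passed to an LLM for multiple-choice question answering; the key design choice is that all modalities are fused in text space rather than in a learned embedding space.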

Kanchana Ranasinghe, Xiang Li, Kumara Kahatapitiya, Michael S. Ryoo • 2024

Related benchmarks

Task                                      | Dataset                          | Metric                 | Result | Rank
Video Question Answering                  | ActivityNet-QA (test)            | Accuracy               | 42.2   | 275
Video Question Answering                  | NExT-QA (test)                   | Accuracy               | 51.2   | 204
Video Question Answering                  | EgoSchema (Full)                 | Accuracy               | 37.6   | 193
Multiple-choice Video Question Answering  | EgoSchema                        | Accuracy               | 37.6   | 61
Video Question Answering                  | LongVideoBench                   | Accuracy               | 50.4   | 34
Video Question Answering                  | EgoSchema, 5031 videos (test)    | Top-1 Accuracy         | 61.3   | 26
Video Question Answering                  | NExT-QA v1 (test)                | Overall Accuracy       | 73.3   | 24
Multiple-choice Video Question Answering  | EgoSchema subset, 500 questions  | Accuracy               | 60.3   | 10
Robot Control                             | MetaWorld                        | Door Open Success Rate | 66.7   | 6
Video Question Answering                  | EgoSchema ES-S (public subset)   | Accuracy               | 55.8   | 4
