
TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning

About

Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in short video understanding. However, understanding long-form videos remains challenging for MLLMs. This paper proposes TimeSuite, a collection of new designs to adapt existing short-form video MLLMs for long video understanding, including a simple yet efficient framework to process long video sequences, a high-quality video dataset for grounded tuning of MLLMs, and a carefully designed instruction tuning task that explicitly incorporates grounding supervision into the traditional QA format. Specifically, based on VideoChat, we propose our long-video MLLM, coined VideoChat-T, by implementing token shuffling to compress long video tokens and introducing Temporal Adaptive Position Encoding (TAPE) to enhance the temporal awareness of visual representations. Meanwhile, we introduce TimePro, a comprehensive grounding-centric instruction tuning dataset composed of 9 tasks and 349k high-quality grounded annotations. Notably, we design a new instruction tuning task type, called Temporal Grounded Caption, to produce detailed video descriptions with corresponding timestamp predictions. This explicit temporal location prediction guides the MLLM to correctly attend to the visual content when generating descriptions, and thus reduces the hallucination risk introduced by the LLM. Experimental results demonstrate that TimeSuite provides a successful solution for enhancing the long video understanding capability of short-form MLLMs, achieving improvements of 5.6% and 6.8% on the Egoschema and VideoMME benchmarks, respectively. In addition, VideoChat-T exhibits robust zero-shot temporal grounding capabilities, significantly outperforming existing state-of-the-art MLLMs. After fine-tuning, it performs on par with traditional supervised expert models.
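The token-shuffling idea above can be illustrated with a minimal NumPy sketch: tokens from groups of consecutive frames are interleaved along the channel axis and projected back to the original width, so the LLM sees a fraction of the visual tokens. The group size, shapes, and random projection below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def token_shuffle_compress(frame_tokens: np.ndarray, group: int, proj: np.ndarray) -> np.ndarray:
    """Merge every `group` consecutive frames' tokens into one frame's worth.

    frame_tokens: (T, N, C)  -- T frames, N tokens per frame, C channels
    proj:         (group * C, C)  -- stand-in for a learned linear projection
    Returns:      (T // group, N, C)
    """
    T, N, C = frame_tokens.shape
    assert T % group == 0, "frame count must be divisible by group size"
    # Group consecutive frames, then concatenate their channels token-wise
    grouped = frame_tokens.reshape(T // group, group, N, C)
    shuffled = grouped.transpose(0, 2, 1, 3).reshape(T // group, N, group * C)
    # Project back to the original channel width -> group-fold token reduction
    return shuffled @ proj

rng = np.random.default_rng(0)
tokens = rng.standard_normal((128, 196, 64))   # e.g. 128 frames, 196 tokens each
proj = rng.standard_normal((4 * 64, 64)) * 0.02
compressed = token_shuffle_compress(tokens, group=4, proj=proj)
print(compressed.shape)  # (32, 196, 64): 4x fewer visual tokens reach the LLM
```

In a real model the projection would be learned end-to-end; the point of the sketch is only the shape arithmetic, which is what makes long sequences tractable.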

Xiangyu Zeng, Kunchang Li, Chenting Wang, Xinhao Li, Tianxiang Jiang, Ziang Yan, Songze Li, Yansong Shi, Zhengrong Yue, Yi Wang, Yali Wang, Yu Qiao, Limin Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Question Answering | EgoSchema (Full) | Accuracy | 60 | 193 |
| Highlight Detection | QVHighlights (test) | HIT@1 | 55.3 | 151 |
| Temporal Video Grounding | Charades-STA (test) | Recall@IoU=0.5 | 67.1 | 117 |
| Video Grounding | Charades-STA | R@1 IoU=0.5 | 67.1 | 113 |
| Video Question Answering | EgoSchema (test) | Accuracy | 68.4 | 80 |
| Video Question Answering | VideoMME wo sub | Accuracy | 46.3 | 51 |
| Video Question Answering | MVBench (test) | Accuracy | 59.9 | 38 |
| Video Question Answering | Video-MME Long Duration 1.0 | -- | -- | 34 |
| Temporal Video Grounding | Charades-STA | Rank-1 Recall (IoU=0.5) | 67.1 | 33 |
| Temporal Grounding | Charades-STA | -- | -- | 33 |

Showing 10 of 17 rows.
