
VideoChat-M1: Collaborative Policy Planning for Video Understanding via Multi-Agent Reinforcement Learning

About

By leveraging tool-augmented Multimodal Large Language Models (MLLMs), multi-agent frameworks are driving progress in video understanding. However, most of them adopt static, non-learnable tool invocation mechanisms, which limits the discovery of the diverse clues essential for robust perception and reasoning over temporally or spatially complex videos. To address this challenge, we propose VideoChat-M1, a novel multi-agent system for video understanding. Instead of using a single or fixed policy, VideoChat-M1 adopts a distinct Collaborative Policy Planning (CPP) paradigm with multiple policy agents, which comprises three key processes. (1) Policy Generation: each agent generates a unique tool invocation policy tailored to the user's query. (2) Policy Execution: each agent sequentially invokes relevant tools to execute its policy and explore the video content. (3) Policy Communication: during the intermediate stages of policy execution, agents interact with one another to update their respective policies. Through this collaborative framework, all agents work in tandem, dynamically refining their preferred policies based on contextual insights from peers to answer the user's query effectively. Moreover, we equip our CPP paradigm with a concise Multi-Agent Reinforcement Learning (MARL) method, so the team of policy agents can be jointly optimized to enhance VideoChat-M1's performance, guided by both the final answer reward and intermediate collaborative process feedback. Extensive experiments demonstrate that VideoChat-M1 achieves state-of-the-art (SOTA) performance across eight benchmarks spanning four tasks. Notably, on LongVideoBench, our method outperforms Gemini 2.5 Pro by 3.6% and GPT-4o by 15.6%.
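The three CPP processes in the abstract can be sketched as a simple agent loop. The following is a minimal, hedged illustration only: the class and function names (`PolicyAgent`, `run_cpp`, the placeholder tool names) are assumptions for this sketch and are not the paper's actual API, and the trivial "policy" and "communication" logic stands in for the learned MLLM behavior.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Collaborative Policy Planning (CPP) loop:
# (1) each agent generates its own tool-invocation policy,
# (2) agents execute their policies step by step,
# (3) agents exchange intermediate findings between steps.
# All names here are hypothetical, not from the paper.

@dataclass
class PolicyAgent:
    name: str
    policy: list = field(default_factory=list)    # ordered tool plan
    findings: list = field(default_factory=list)  # clues gathered so far

    def generate_policy(self, query: str, tools: list) -> None:
        # (1) Policy Generation: draft a per-agent tool plan for the query.
        self.policy = list(tools)

    def execute_step(self, step: int, query: str) -> None:
        # (2) Policy Execution: invoke one tool and record its clue.
        tool = self.policy[step]
        self.findings.append(f"{tool}({query})")

    def communicate(self, peers: list) -> list:
        # (3) Policy Communication: read peers' intermediate findings;
        # a real agent would revise its remaining policy based on these.
        return [f for p in peers for f in p.findings]

def run_cpp(query: str, agents: list, tools: list, steps: int = 2) -> list:
    for a in agents:
        a.generate_policy(query, tools)
    for step in range(steps):
        for a in agents:
            a.execute_step(step, query)
        for a in agents:
            a.communicate([p for p in agents if p is not a])
    # Aggregate all agents' clues into one deduplicated answer context.
    return sorted({f for a in agents for f in a.findings})
```

In the actual system, the generation, execution, and communication steps would each be driven by a tool-augmented MLLM and jointly optimized with MARL; this skeleton only shows how the three processes interleave.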

Boyu Chen, Zikang Wang, Zhengrong Yue, Kainan Yan, Chenyun Yu, Yi Huang, Zijun Liu, Yafei Wen, Xiaoxin Chen, Yang Liu, Peng Li, Yali Wang• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Spatial Reasoning | VSI-Bench | Avg Score: 71.9 | 192 |
| Long Video Understanding | MLVU | -- | 154 |
| Long-form Video Understanding | LongVideoBench | Accuracy: 82.3 | 115 |
| Video Understanding | Video-MME (test) | Accuracy: 83.2 | 51 |
| Long Video QA | Video-MME | Average Score: 83.2 | 41 |
| Video Reasoning | Video-Holmes | Accuracy: 60.5 | 37 |
| Video Understanding | LongVideoBench (test) | Accuracy (Overall): 82.3 | 25 |
| Video Reasoning | MMMU Video | Accuracy: 80 | 16 |
| Temporal Grounding | Charades | mIoU: 67.7 | 15 |
| Video Reasoning | MMR-V | Accuracy: 60.4 | 7 |
