
Video-CoE: Reinforcing Video Event Prediction via Chain of Events

About

Despite advances in applying MLLMs to various video tasks, video event prediction (VEP) remains relatively underexplored. VEP requires a model to perform fine-grained temporal modeling of videos and to establish logical relationships between videos and future events, both of which current MLLMs still struggle with. In this work, we first present a comprehensive evaluation of leading MLLMs on the VEP task, revealing the reasons behind their inaccurate predictions, including a lack of logical reasoning ability for future-event prediction and insufficient use of visual information. To address these challenges, we propose the Chain of Events (CoE) paradigm, which constructs temporal event chains to implicitly guide the MLLM to focus on the visual content and the logical connections between videos and future events, incentivizing the model's reasoning capability through multiple training protocols. Experimental results on public benchmarks demonstrate that our method outperforms both leading open-source and commercial MLLMs, establishing a new state of the art on the VEP task. Code and models will be released soon.
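The abstract describes chain-of-events prompting only at a high level and no code has been released yet, so the sketch below is purely illustrative: it shows one plausible way to serialize observed video events into an ordered chain that a model is then asked to reason over. The function name, the (timestamp, description) event format, and the prompt wording are all assumptions, not the authors' implementation.

```python
def build_coe_prompt(events, question):
    """Serialize observed video events into a temporal event chain,
    then ask the model to predict the next event.

    `events` is a list of (timestamp_seconds, description) pairs;
    sorting by timestamp makes the chain strictly temporal.
    """
    chain = " -> ".join(f"[{t:.1f}s] {desc}" for t, desc in sorted(events))
    return (
        "Observed event chain:\n"
        f"{chain}\n"
        f"Question: {question}\n"
        "Reason over the chain step by step, then predict the next event."
    )


prompt = build_coe_prompt(
    [(8.5, "the person pours water into a cup"),
     (3.0, "a person picks up a kettle")],
    "What happens next?",
)
print(prompt)
```

The point of the explicit chain is that the model's prediction must be grounded in the ordered visual events rather than in generic priors about the scene.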

Qile Su, Jing Tang, Rui Chen, Lei Sun, Xiangxiang Chu• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Future Event Prediction | FutureBench (test) | 1-Hop Score | 80.9 | 20 |
| Video Event Prediction | AVEP (test) | Verb Score | 12.24 | 10 |
| Video Event Prediction | AVEP (val) | Verb F1-Score | 18.75 | 10 |
| Action-centric Video Event Prediction | AVEP (test) | Verb Score | 18.75 | 4 |
| Video Event Prediction | FutureBench (test) | AVG | 75 | 3 |
