Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing
About
Multi-modal large language models (MLLMs) have advanced general-purpose video understanding but struggle with long, high-resolution videos: they process every pixel equally in their vision transformers (ViTs) or LLMs despite significant spatiotemporal redundancy. We introduce AutoGaze, a lightweight module that removes redundant patches before they are processed by a ViT or an MLLM. Trained with next-token prediction and reinforcement learning, AutoGaze autoregressively selects a minimal set of multi-scale patches that can reconstruct the video within a user-specified error threshold, eliminating redundancy while preserving information. Empirically, AutoGaze reduces visual tokens by 4x-100x and accelerates ViTs and MLLMs by up to 19x, enabling MLLMs to scale to 1K-frame, 4K-resolution videos and achieving superior results on video benchmarks (e.g., 67.0% on VideoMME). Furthermore, we introduce HLVid, the first high-resolution, long-form video QA benchmark with 5-minute 4K-resolution videos, where an MLLM scaled with AutoGaze improves over the baseline by 10.1% and outperforms the previous best MLLM by 4.5%. Project page: https://autogaze.github.io/.
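To make the selection idea concrete, below is a minimal, illustrative sketch of error-bounded patch selection: patches are added one at a time until the reconstruction error of the video falls under a user-specified threshold. This is not the AutoGaze implementation; the function names (`select_patches`, `reconstruct`), the greedy search, and the mean-squared-error metric are assumptions standing in for the learned autoregressive policy that the paper trains with next-token prediction and reinforcement learning.

```python
import numpy as np

def select_patches(full_video, candidate_patches, reconstruct, error_threshold):
    """Greedy, illustrative stand-in for learned autoregressive patch selection.

    Args:
        full_video: ndarray of the original video (reconstruction target).
        candidate_patches: list of multi-scale patch descriptors (hypothetical format).
        reconstruct: hypothetical function mapping a subset of patches to a
            reconstruction with the same shape as `full_video`.
        error_threshold: user-specified reconstruction-error bound.

    Returns:
        Indices of the selected patches, in selection order.
    """
    selected, remaining = [], list(range(len(candidate_patches)))

    def recon_error(indices):
        recon = reconstruct([candidate_patches[i] for i in indices])
        return float(np.mean((recon - full_video) ** 2))

    # Keep adding patches until the video is reconstructable within the threshold.
    while remaining and recon_error(selected) > error_threshold:
        # Pick the patch whose addition most reduces reconstruction error.
        # The actual module replaces this exhaustive greedy search with a
        # learned policy, so selection stays lightweight at inference time.
        best = min(remaining, key=lambda i: recon_error(selected + [i]))
        selected.append(best)
        remaining.remove(best)

    return selected
```

A tighter threshold keeps more patches (higher fidelity, more tokens); a looser one keeps fewer, which is what enables the 4x-100x token reduction reported above.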
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long Video Understanding | MLVU | -- | -- | 154 |
| Video Understanding | MVBench (test) | -- | -- | 151 |
| Video Question Answering | NExT-QA Multi-choice | Accuracy | 82.8 | 114 |
| Video Understanding | VideoMME w/o sub | Score | 67.0 | 18 |
| Video Understanding | VideoMME w/ sub | Score | 71.8 | 12 |
| Long Video Understanding | L-VidBench (val) | Score | 61 | 12 |
| High-resolution & Long Video Understanding | HLVid (test) | Score | 52.6 | 11 |
| Long Video Understanding | EgoSchema (test) | Accuracy | 66.9 | 10 |