# Tarsier: Recipes for Training and Evaluating Large Video Description Models

## About
Generating fine-grained video descriptions is a fundamental challenge in video understanding. In this work, we introduce Tarsier, a family of large-scale video-language models designed to generate high-quality video descriptions. Tarsier employs CLIP-ViT to encode frames separately and then uses an LLM to model temporal relationships. Despite its simple architecture, we demonstrate that with a meticulously designed two-stage training procedure, the Tarsier models exhibit substantially stronger video description capabilities than any existing open-source model, showing a $+51.4\%$ advantage in human side-by-side evaluation over the strongest model. Additionally, they are comparable to state-of-the-art proprietary models, with a $+12.3\%$ advantage against GPT-4V and a $-6.7\%$ disadvantage against Gemini 1.5 Pro. When upgraded to Tarsier2, built upon SigLIP and Qwen2-7B, performance improves significantly further, with a $+4.8\%$ advantage against GPT-4o. Beyond video description, Tarsier proves to be a versatile generalist model, achieving new state-of-the-art results across nine public benchmarks, including multi-choice VQA, open-ended VQA, and zero-shot video captioning. Our second contribution is a new benchmark -- DREAM-1K (https://tarsier-vlm.github.io/) -- for evaluating video description models, consisting of a challenging new dataset featuring videos from diverse sources and of varying complexity, along with an automatic method specifically designed to assess the quality of fine-grained video descriptions. We make our models and evaluation benchmark publicly available at https://github.com/bytedance/tarsier.
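The architecture described above -- per-frame encoding with a CLIP-ViT, a projection into the LLM embedding space, and temporal modeling left entirely to the LLM -- can be sketched in miniature. The sketch below is a hypothetical toy version, not the released implementation: the tiny convolutional `FrameEncoder`, the single Transformer layer standing in for the LLM, and all dimensions are illustrative stand-ins for CLIP-ViT and Qwen2-7B.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Stand-in for CLIP-ViT: maps one frame to a sequence of patch tokens."""
    def __init__(self, dim=64):
        super().__init__()
        # 32x32 toy frames split into 8x8 patches -> 4x4 = 16 patch tokens.
        self.embed = nn.Conv2d(3, dim, kernel_size=8, stride=8)

    def forward(self, frames):            # frames: (N, 3, 32, 32)
        x = self.embed(frames)            # (N, dim, 4, 4)
        return x.flatten(2).transpose(1, 2)  # (N, 16, dim)

class TarsierSketch(nn.Module):
    """Toy Tarsier-style model: frames encoded separately, temporal
    relationships modeled only by the language model over the
    concatenated token sequence."""
    def __init__(self, dim=64, llm_dim=128):
        super().__init__()
        self.encoder = FrameEncoder(dim=dim)
        self.projector = nn.Linear(dim, llm_dim)   # vision -> LLM space
        # Stand-in for the LLM: a single Transformer layer.
        self.llm = nn.TransformerEncoderLayer(
            d_model=llm_dim, nhead=4, batch_first=True)

    def forward(self, video):             # video: (B, T, 3, 32, 32)
        b, t = video.shape[:2]
        tokens = self.encoder(video.flatten(0, 1))  # each frame encoded separately
        tokens = self.projector(tokens)             # (B*T, 16, llm_dim)
        tokens = tokens.reshape(b, -1, tokens.shape[-1])  # concat along time
        return self.llm(tokens)                     # temporal modeling in the "LLM"

video = torch.randn(2, 8, 3, 32, 32)  # 2 clips, 8 frames each
out = TarsierSketch()(video)
print(out.shape)  # (2, 128, 128): 8 frames x 16 patch tokens = 128 tokens
```

The point of the design is its simplicity: no cross-frame fusion module is needed on the vision side, because ordering the frame tokens in sequence lets the language model attend across time directly.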
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Retrieval | MSR-VTT | -- | -- | 313 |
| Video Question Answering | NExT-QA (test) | Accuracy | 71.6 | 204 |
| Video Question Answering | EgoSchema (Full) | Accuracy | 61.7 | 193 |
| Video Question Answering | NExT-QA (val) | Overall Accuracy | 79.2 | 176 |
| Text-to-Image Retrieval | COCO | -- | -- | 130 |
| Video Captioning | MSVD | CIDEr | 125.9 | 128 |
| Video Question Answering | NExT-QA | Overall Accuracy | 71.6 | 105 |
| Video Question Answering | NExT-QA Multi-choice | Accuracy | 79.2 | 102 |
| Video Question Answering | MVBench | Accuracy | 62.6 | 90 |
| Video Question Answering | EgoSchema | Accuracy | 56.0 | 88 |