Describe Anything: Detailed Localized Image and Video Captioning
About
Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), designed for detailed localized captioning (DLC). DAM preserves both local detail and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of the targeted region, and a localized vision backbone, which integrates precise localization cues with the broader scene. To tackle the scarcity of high-quality DLC data, we propose a semi-supervised learning (SSL)-based data pipeline (DLC-SDP). DLC-SDP starts from existing segmentation datasets and expands to unlabeled web images via SSL. We also introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets a new state of the art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.
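The focal prompt's region-plus-context encoding can be illustrated with a minimal sketch: crop the masked region together with a margin of surrounding context so it can be encoded at high resolution alongside the full image. This is an assumption-laden illustration, not the paper's implementation; the function name `focal_crop` and the `context` expansion factor are placeholders.

```python
import numpy as np

def focal_crop(image, mask, context=1.0):
    """Hypothetical sketch of the focal-prompt idea: crop the masked
    region plus surrounding context. `context` expands the tight
    bounding box by that fraction of its size on each side
    (name and default are assumptions, not DAM's API)."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    h, w = y1 - y0, x1 - x0
    # Expand the tight bounding box to keep local context around the region.
    y0 = max(0, int(y0 - context * h))
    y1 = min(image.shape[0], int(y1 + context * h))
    x0 = max(0, int(x0 - context * w))
    x1 = min(image.shape[1], int(x1 + context * w))
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1]

# Toy example: a 64x64 image with an 8x8 region marked in the center.
img = np.zeros((64, 64, 3), dtype=np.uint8)
msk = np.zeros((64, 64), dtype=bool)
msk[28:36, 28:36] = True
crop, crop_mask = focal_crop(img, msk, context=1.0)
print(crop.shape)  # crop is larger than the 8x8 masked box
```

In the model, such a crop (with its mask) would be fed to a high-resolution encoder while the full image provides global context; here it only demonstrates the region-plus-context geometry.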
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Reasoning | MMStar | -- | -- | 143 |
| Real-world Multimodal Reasoning | RealWorldQA | Accuracy | 54.3 | 57 |
| Region Captioning | DLC-Bench | Pos. Score | 61.8 | 23 |
| Visual Question Answering | GAR-Bench-VQA | Overall VQA Score | 38.2 | 17 |
| Region Captioning | VideoRefer-D (test) | Average Score | 3.68 | 16 |
| Localized Relational Captioning | GAR-Bench Cap | Overall Score | 13.1 | 15 |
| Detailed Localized Video Captioning | VideoRefer-BenchD Multi-Frame | Average Score | 3.34 | 10 |
| Region Captioning | Ref-L4 (test) | ROUGE-L | 37.1 | 8 |
| Category-level Image Recognition | LVIS | Similarity Score | 89 | 8 |
| Category-level Image Recognition | PACO | Similarity Score | 84.2 | 8 |