
Describe Anything: Detailed Localized Image and Video Captioning

About

Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), designed for detailed localized captioning (DLC). DAM preserves both local detail and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of targeted regions, and a localized vision backbone, which integrates precise localization with broader context. To tackle the scarcity of high-quality DLC data, we propose a semi-supervised-learning (SSL)-based data pipeline (DLC-SDP). DLC-SDP starts with existing segmentation datasets and expands to unlabeled web images using SSL. We also introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets a new state of the art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.
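The focal prompt pairs the full image with a high-resolution crop of the target region plus some surrounding context, so fine detail survives encoding. The abstract does not specify how the crop is built; below is a minimal sketch of the idea, assuming a NumPy image and a boolean region mask (the function name and `context_ratio` parameter are hypothetical, not from the paper):

```python
import numpy as np

def focal_crop(image, mask, context_ratio=0.5):
    """Crop the masked region plus a surrounding context margin.

    image: H x W x C array; mask: H x W boolean array.
    A DAM-style model would encode this crop at high resolution
    alongside the downscaled full image.
    """
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # Expand the tight box by a fractional context margin, clamped to bounds.
    dy = int((y1 - y0) * context_ratio)
    dx = int((x1 - x0) * context_ratio)
    y0, y1 = max(0, y0 - dy), min(image.shape[0], y1 + dy)
    x0, x1 = max(0, x0 - dx), min(image.shape[1], x1 + dx)
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1]

# Toy usage: a 64x64 image with a 10x10 masked square.
img = np.zeros((64, 64, 3), dtype=np.uint8)
msk = np.zeros((64, 64), dtype=bool)
msk[20:30, 20:30] = True
crop, crop_mask = focal_crop(img, msk)  # 20x20 crop around the 10x10 region
```

Passing the cropped mask along with the crop lets the vision backbone keep precise localization while the enlarged box supplies local context.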

Long Lian, Yifan Ding, Yunhao Ge, Sifei Liu, Hanzi Mao, Boyi Li, Marco Pavone, Ming-Yu Liu, Trevor Darrell, Adam Yala, Yin Cui • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Reasoning | MMStar | – | – | 143 |
| Real-world Multimodal Reasoning | RealworldQA | Accuracy | 54.3 | 57 |
| Region Captioning | DLC-Bench | Pos. Score | 61.8 | 23 |
| Visual Question Answering | GAR-Bench-VQA | Overall VQA Score | 38.2 | 17 |
| Region Captioning | VideoRefer-D (test) | Average Score | 3.68 | 16 |
| Localized Relational Captioning | GAR-Bench Cap | Overall Score | 13.1 | 15 |
| Detailed Localized Video Captioning | VideoRefer-BenchD Multi-Frame | Average Score | 3.34 | 10 |
| Region Captioning | Ref-L4 (test) | ROUGE-L | 37.1 | 8 |
| Category-level Image Recognition | LVIS | Similarity Score | 89 | 8 |
| Category-level Image Recognition | PACO | Similarity Score | 84.2 | 8 |
Showing 10 of 17 rows
