
RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning

About

Dense image captioning is critical for cross-modal alignment in vision-language pretraining and text-to-image generation, but scaling expert-quality annotations is prohibitively expensive. While synthetic captioning via strong vision-language models (VLMs) is a practical alternative, supervised distillation often yields limited output diversity and weak generalization. Reinforcement learning (RL) could overcome these limitations, but its successes have so far been concentrated in verifiable domains that rely on deterministic checkers -- a luxury not available in open-ended captioning. We address this bottleneck with RubiCap, a novel RL framework that derives fine-grained, sample-specific reward signals from LLM-written rubrics. RubiCap first assembles a diverse committee of candidate captions, then employs an LLM rubric writer to extract consensus strengths and diagnose deficiencies in the current policy. These insights are converted into explicit evaluation criteria, enabling an LLM judge to decompose holistic quality assessment and replace coarse scalar rewards with structured, multi-faceted evaluations. Across extensive benchmarks, RubiCap achieves the highest win rates on CapArena, outperforming supervised distillation, prior RL methods, human-expert annotations, and GPT-4V-augmented outputs. On CaptionQA, it demonstrates superior word efficiency: our 7B model matches Qwen2.5-VL-32B-Instruct, and our 3B model surpasses its 7B counterpart. Remarkably, using the compact RubiCap-3B as a captioner produces stronger pretrained VLMs than those trained on captions from proprietary models.
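The reward pipeline the abstract describes — rubric criteria written by an LLM, then a judge scoring each criterion separately instead of emitting one coarse scalar — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Criterion` schema, the weighting scheme, and the keyword-matching stand-in for the LLM judge are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criterion:
    """One rubric item, as might be produced by an LLM rubric writer (hypothetical schema)."""
    description: str
    weight: float  # relative importance; assumed to come from the rubric writer

def rubric_reward(
    caption: str,
    rubric: List[Criterion],
    judge: Callable[[str, str], float],  # maps (caption, criterion) -> score in [0, 1]
) -> float:
    """Aggregate per-criterion judge scores into one scalar RL reward
    (weighted mean; the actual aggregation in RubiCap may differ)."""
    total_weight = sum(c.weight for c in rubric)
    return sum(c.weight * judge(caption, c.description) for c in rubric) / total_weight

# Toy stand-in for the LLM judge: checks whether the caption mentions the
# keyword quoted inside the criterion text. The real judge is an LLM making
# a graded assessment per criterion.
def keyword_judge(caption: str, criterion: str) -> float:
    keyword = criterion.split("'")[1]
    return 1.0 if keyword in caption.lower() else 0.0

rubric = [
    Criterion("mentions the subject 'dog'", weight=2.0),
    Criterion("mentions the setting 'park'", weight=1.0),
]
reward = rubric_reward("a dog running", rubric, keyword_judge)
print(round(reward, 3))  # -> 0.667 (satisfies the 2.0-weight criterion only)
```

The point of the decomposition is that each criterion yields its own signal, so the policy gets credit for partially correct captions rather than a single pass/fail verdict.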

Tzu-Heng Huang, Sirajul Salekin, Javier Movellan, Frederic Sala, Manjot Bilkhu • 2026

Related benchmarks

Task | Dataset | Result | Rank
Science Question Answering | ScienceQA | Accuracy: 74.6 | 502
Multimodal Reasoning | MM-Vet | MM-Vet Score: 25.05 | 431
Chart Question Answering | ChartQA | Accuracy: 36.6 | 356
Mathematical Reasoning | MathVista | Accuracy: 34.8 | 257
Diagram Question Answering | AI2D | AI2D Accuracy: 49.55 | 232
Multimodal Benchmarking | MMBench English | Accuracy: 67.61 | 125
Multi-discipline Reasoning | MMMU | Accuracy: 37 | 34
Image Captioning | PixMoCap (test) | CapArena Win Rate (vs Base Model): 70.8 | 16
Image Captioning | DenseFusion (test) | CapArena Win Rate (vs Base Model): 59.2 | 11
Image Captioning | DenseFusion (50k Sampled Images) | CapArena Win Rate (vs Base Model): 64.4 | 5

(Showing 10 of 12 rows.)
