
UniTAB: Unifying Text and Box Outputs for Grounded Vision-Language Modeling

About

We propose UniTAB, which Unifies Text And Box outputs for grounded vision-language (VL) modeling. Grounded VL tasks such as grounded captioning require the model to generate a text description and align predicted words with object regions. To achieve this, models must generate the desired text and box outputs together, while also indicating the alignments between words and boxes. In contrast to existing solutions that use multiple separate modules for different outputs, UniTAB represents both text and box outputs with a shared token sequence, and introduces a special &lt;obj&gt; token to naturally indicate word-box alignments in the sequence. UniTAB can thus provide a more comprehensive and interpretable image description by freely grounding generated words to object regions. On grounded captioning, UniTAB offers a simpler solution with a single output head, and significantly outperforms the state of the art in both grounding and captioning evaluations. On general VL tasks with different desired output formats (i.e., text, box, or their combination), UniTAB with a single network achieves performance better than or comparable to task-specific state of the art. Experiments cover 7 VL benchmarks, including grounded captioning, visual grounding, image captioning, and visual question answering. Furthermore, UniTAB's unified multi-task network and task-agnostic output sequence design make the model parameter-efficient and generalizable to new tasks.
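The shared-sequence idea above can be sketched in a few lines of Python: a grounded caption becomes one token stream in which each grounded phrase is wrapped in &lt;obj&gt;...&lt;/obj&gt; markers, followed by its box encoded as discretized coordinate tokens. This is an illustrative sketch only; the bin count, the coordinate-token spelling (e.g. `<100>`), and the helper names are assumptions for demonstration, not the paper's exact vocabulary or implementation.

```python
# Illustrative sketch of a UniTAB-style unified text-and-box output sequence.
# Assumption: normalized box coordinates in [0, 1] are quantized into NUM_BINS
# discrete location tokens; <obj> / </obj> delimit a grounded phrase.

NUM_BINS = 1000  # assumed quantization granularity


def box_to_tokens(box):
    """Quantize a normalized (x1, y1, x2, y2) box into coordinate tokens."""
    return [f"<{min(int(v * NUM_BINS), NUM_BINS - 1)}>" for v in box]


def serialize(caption_parts):
    """caption_parts: list of (text, box-or-None) pairs -> one token list."""
    tokens = []
    for text, box in caption_parts:
        if box is None:
            tokens.extend(text.split())        # ungrounded words, emitted as-is
        else:
            tokens.append("<obj>")             # open a grounded phrase
            tokens.extend(text.split())
            tokens.extend(box_to_tokens(box))  # box follows its words
            tokens.append("</obj>")            # close the grounded phrase
    return tokens


def parse(tokens):
    """Recover (word-or-phrase, box) pairs; box is None for ungrounded words."""
    parts, i = [], 0
    while i < len(tokens):
        if tokens[i] == "<obj>":
            j, words, coords = i + 1, [], []
            while tokens[j] != "</obj>":
                t = tokens[j]
                if t.startswith("<") and t[1:-1].isdigit():
                    coords.append(int(t[1:-1]) / NUM_BINS)  # de-quantize
                else:
                    words.append(t)
                j += 1
            parts.append((" ".join(words), tuple(coords)))
            i = j + 1
        else:
            parts.append((tokens[i], None))
            i += 1
    return parts


# Example: "a dog" grounded to a box, serialized into a single stream.
tokens = serialize([("a dog", (0.10, 0.20, 0.50, 0.60))])
# -> ['<obj>', 'a', 'dog', '<100>', '<200>', '<500>', '<600>', '</obj>']
```

A single decoder with one output head can emit such a sequence autoregressively, which is what removes the need for separate text and box prediction modules; word-box alignment falls out of the sequence structure itself.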

Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, Lijuan Wang • 2021

Related benchmarks

| Task                               | Dataset                  | Result          | Rank |
|------------------------------------|--------------------------|-----------------|------|
| Image Captioning                   | MS COCO Karpathy (test)  | CIDEr 1.191     | 682  |
| Referring Expression Comprehension | RefCOCO+ (val)           | Accuracy 80.97  | 345  |
| Referring Expression Comprehension | RefCOCO (val)            | Accuracy 88.59  | 335  |
| Referring Expression Comprehension | RefCOCO (testA)          | Accuracy 91.06  | 333  |
| Referring Expression Comprehension | RefCOCOg (test)          | Accuracy 84.7   | 291  |
| Referring Expression Comprehension | RefCOCOg (val)           | Accuracy 84.58  | 291  |
| Referring Expression Comprehension | RefCOCO+ (testB)         | Accuracy 71.55  | 235  |
| Referring Expression Comprehension | RefCOCO+ (testA)         | Accuracy 85.36  | 207  |
| Referring Expression Comprehension | RefCOCO (testB)          | Accuracy 83.75  | 196  |
| Referring Expression Comprehension | RefCOCO+ (test-A)        | Accuracy 85.36  | 172  |

Showing 10 of 52 rows.
