
Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models

About

We introduce Groma, a Multimodal Large Language Model (MLLM) with grounded and fine-grained visual perception ability. Beyond holistic image understanding, Groma is adept at region-level tasks such as region captioning and visual grounding. Such capabilities are built upon a localized visual tokenization mechanism, where an image input is decomposed into regions of interest and subsequently encoded into region tokens. By integrating region tokens into user instructions and model responses, we seamlessly enable Groma to understand user-specified region inputs and ground its textual output to images. In addition, to enhance the grounded chat ability of Groma, we curate a visually grounded instruction dataset by leveraging the powerful GPT-4V and visual prompting techniques. Compared with MLLMs that rely on the language model or an external module for localization, Groma consistently demonstrates superior performance on standard referring and grounding benchmarks, highlighting the advantages of embedding localization into image tokenization. Project page: https://groma-mllm.github.io/.
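The core idea above (decompose the image into regions of interest, encode each as a region token, and splice those tokens into instructions and responses) can be sketched as follows. This is an illustrative mock-up, not the paper's actual API: the names `Region`, `propose_regions`, and `build_grounded_prompt`, and the `<r0>`-style token format, are all assumptions made for the example.

```python
# Hypothetical sketch of localized visual tokenization for grounding:
# each region-of-interest box gets a discrete region token, and the
# tokens are spliced into the text stream so the language model can
# both accept region inputs and ground its output to regions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) region proposal
    token: str                      # placeholder region token, e.g. "<r0>"

def propose_regions(boxes: List[Tuple[int, int, int, int]]) -> List[Region]:
    """Assign a region token to each region-of-interest box."""
    return [Region(box=b, token=f"<r{i}>") for i, b in enumerate(boxes)]

def build_grounded_prompt(template: str, regions: List[Region]) -> str:
    """Splice region tokens into a user instruction.

    `template` marks insertion points with `{r0}`, `{r1}`, ...,
    mirroring how region tokens are integrated into instructions
    and model responses.
    """
    mapping = {f"r{i}": reg.token for i, reg in enumerate(regions)}
    return template.format(**mapping)

regions = propose_regions([(10, 20, 110, 220), (150, 40, 300, 260)])
prompt = build_grounded_prompt(
    "What is the object in {r0}, and how does it relate to {r1}?", regions
)
print(prompt)
# -> What is the object in <r0>, and how does it relate to <r1>?
```

Because localization lives in the tokenization step rather than in the language model's decoded text, the same token vocabulary serves both directions: user-specified regions in the input and grounded references in the output.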

Chuofan Ma, Yi Jiang, Jiannan Wu, Zehuan Yuan, Xiaojuan Qi • 2024

Related benchmarks

Task                                  Dataset             Result           Rank
Object Detection                      COCO (val)          -                613
Referring Expression Comprehension    RefCOCO+ (val)      Accuracy 83.9    345
Referring Expression Comprehension    RefCOCO (val)       Accuracy 89.5    335
Referring Expression Comprehension    RefCOCO (testA)     Accuracy 92.1    333
Referring Expression Comprehension    RefCOCOg (test)     Accuracy 87.0    291
Referring Expression Comprehension    RefCOCOg (val)      Accuracy 86.5    291
Referring Expression Comprehension    RefCOCO+ (testB)    Accuracy 78.1    235
Referring Expression Comprehension    RefCOCO+ (testA)    Accuracy 88.9    207
Referring Expression Comprehension    RefCOCO (testB)     Accuracy 86.5    196
Referring Expression Comprehension    RefCOCO+ (test-A)   Accuracy 86.5    172

Showing 10 of 18 rows
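For context on the Result column: Referring Expression Comprehension accuracy is conventionally reported as Acc@0.5, the fraction of predictions whose box overlaps the ground-truth box with intersection-over-union of at least 0.5. A minimal sketch of that metric, assuming `(x1, y1, x2, y2)` boxes (the function names here are illustrative, not from any particular benchmark toolkit):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def rec_accuracy(preds, gts, thresh=0.5):
    """Acc@thresh: fraction of predicted boxes whose IoU with the
    ground-truth box meets the threshold."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(gts)

# One exact match and one complete miss -> 50% accuracy.
preds = [(0, 0, 10, 10), (0, 0, 10, 10)]
gts   = [(0, 0, 10, 10), (20, 20, 30, 30)]
print(rec_accuracy(preds, gts))  # -> 0.5
```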
