
GroundingGPT: Language Enhanced Multi-modal Grounding Model

About

Multi-modal large language models have demonstrated impressive performance across various tasks in different modalities. However, existing multi-modal models primarily emphasize capturing global information within each modality while neglecting the importance of perceiving local information across modalities. Consequently, these models lack the ability to effectively understand the fine-grained details of input data, limiting their performance in tasks that require a more nuanced understanding. Addressing this limitation calls for models that enable fine-grained understanding across multiple modalities, thereby broadening their applicability. In this paper, we propose GroundingGPT, a language-enhanced multi-modal grounding model. Beyond capturing global information like other multi-modal models, our proposed model excels at tasks demanding a detailed understanding of local information within the input. It demonstrates precise identification and localization of specific regions in images or moments in videos. To achieve this objective, we design a diversified dataset construction pipeline, resulting in a multi-modal, multi-granularity dataset for model training. The code, dataset, and demo of our model can be found at https://github.com/lzw-lzw/GroundingGPT.
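Grounding models of this kind typically express an image region as a bounding box in normalized coordinates, which the consumer then maps back to pixel space. As a minimal sketch of that post-processing step (the exact output format is an assumption here, not the paper's documented convention):

```python
def denormalize_box(box, width, height):
    """Convert a normalized [x1, y1, x2, y2] box (values in [0, 1])
    into integer pixel coordinates for an image of the given size."""
    x1, y1, x2, y2 = box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

# A grounding model asked "where is the dog?" might emit a normalized
# box such as [0.21, 0.33, 0.65, 0.78]; for a 640x480 image:
print(denormalize_box([0.21, 0.33, 0.65, 0.78], 640, 480))
# → (134, 158, 416, 374)
```

Temporal grounding in videos works analogously, with normalized start/end timestamps scaled by the clip duration instead of box corners scaled by image size.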

Zhaowei Li, Qi Xu, Dong Zhang, Hang Song, Yiqing Cai, Qi Qi, Ran Zhou, Junting Pan, Zefeng Li, Van Tu Vu, Zhida Huang, Tao Wang • 2024

Related benchmarks

Task                                 Dataset          Accuracy  Rank
Visual Question Answering            VizWiz           55.1      1525
Object Hallucination Evaluation      POPE             87.4      1455
Visual Question Answering            TextVQA          55.2      1285
Video Question Answering             MSRVTT-QA        51.6      491
Video Question Answering             ActivityNet-QA   44.7      376
Video Question Answering             MSVD-QA          67.8      360
Referring Expression Comprehension   RefCOCO+ (val)   81.61     354
Referring Expression Comprehension   RefCOCO (val)    88.02     344
Referring Expression Comprehension   RefCOCO (testA)  91.55     342
Referring Expression Comprehension   RefCOCOg (test)  81.99     300

Showing 10 of 51 rows.

Code

https://github.com/lzw-lzw/GroundingGPT