
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest

About

Visual instruction tuning of large language models (LLMs) on image-text pairs has achieved general-purpose vision-language abilities. However, the lack of region-text pairs limits their progress toward fine-grained multimodal understanding. In this paper, we propose spatial instruction tuning, which introduces references to regions of interest (RoIs) in the instruction. Before the sequence is sent to the LLM, each reference is replaced by RoI features and interleaved with the language embeddings. Our model, GPT4RoI, trained on 7 region-text pair datasets, offers an unprecedented interactive and conversational experience compared with previous image-level models. (1) Interaction beyond language: users can interact with the model both through language and by drawing bounding boxes, flexibly adjusting the referring granularity. (2) Versatile multimodal abilities: GPT4RoI can mine a variety of attribute information within each RoI, e.g., color, shape, material, and action, and can reason about multiple RoIs using common sense. On the Visual Commonsense Reasoning (VCR) dataset, GPT4RoI achieves a remarkable accuracy of 81.6%, surpassing all existing models by a significant margin (the runner-up reaches 75.6%) and approaching human-level performance of 85.0%. The code and model are available at https://github.com/jshilong/GPT4RoI.

Shilong Zhang, Peize Sun, Shoufa Chen, Min Xiao, Wenqi Shao, Wenwei Zhang, Yu Liu, Kai Chen, Ping Luo • 2023

Related benchmarks

| Task                          | Dataset              | Metric   | Result | Rank |
|-------------------------------|----------------------|----------|--------|------|
| Panoptic Segmentation         | Cityscapes (val)     | PQ       | 34.7   | 276  |
| Instance Segmentation         | Cityscapes (val)     | AP       | 21.93  | 239  |
| Visual Commonsense Reasoning  | VCR (val)            | --       | --     | 63   |
| Visual Commonsense Reasoning  | VCR (test)           | --       | --     | 54   |
| Panoptic Segmentation         | ADE20K 150 59 (val)  | PQ       | 36.32  | 35   |
| Instance Segmentation         | ADE20K 150 59 (val)  | AP       | 26.08  | 30   |
| Referring object classification | LVIS In-Domain     | Accuracy | 58.59  | 26   |
| Object Classification         | COCO 2017 (val)      | Accuracy | 64.01  | 23   |
| Referring object classification | LVIS (test)        | Accuracy | 58.59  | 22   |
| Region Captioning             | Visual Genome        | METEOR   | 17.6   | 18   |

(Showing 10 of 28 rows)
