
The All-Seeing Project V2: Towards General Relation Comprehension of the Open World

About

We present the All-Seeing Project V2: a new model and dataset designed for understanding object relations in images. Specifically, we propose the All-Seeing Model V2 (ASMv2), which integrates the formulation of text generation, object localization, and relation comprehension into a relation conversation (ReC) task. Leveraging this unified task, our model excels not only in perceiving and recognizing all objects within the image but also in grasping the intricate relation graph between them, diminishing the relation hallucination often encountered by Multi-modal Large Language Models (MLLMs). To facilitate training and evaluation of MLLMs in relation understanding, we created the first high-quality ReC dataset (AS-V2), which is aligned with the format of standard instruction tuning data. In addition, we design a new benchmark, termed Circular-based Relation Probing Evaluation (CRPE), for comprehensively evaluating the relation comprehension capabilities of MLLMs. Notably, our ASMv2 achieves an overall accuracy of 52.04 on this relation-aware benchmark, surpassing the 43.14 of LLaVA-1.5 by a large margin. We hope that our work can inspire more future research and contribute to the evolution towards artificial general intelligence. Our project is released at https://github.com/OpenGVLab/all-seeing.
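To illustrate the "circular" idea behind a benchmark like CRPE, the sketch below scores a multiple-choice sample as correct only if the model picks the right answer under every cyclic rotation of the options, which penalizes positional guessing. This is a minimal illustrative sketch, not the official CRPE implementation; the function names and sample layout (`question`, `options`, `answer`) are assumptions.

```python
# Hedged sketch of circular multiple-choice scoring (assumed data layout,
# not the official CRPE code): a sample counts as correct only if the
# predictor returns the right answer for every cyclic rotation of options.

def rotations(options):
    """Yield all cyclic rotations of a list of options."""
    for k in range(len(options)):
        yield options[k:] + options[:k]

def circular_correct(predict, question, options, answer):
    """True only if `predict` picks `answer` under every rotation."""
    return all(predict(question, opts) == answer for opts in rotations(options))

def circular_accuracy(predict, samples):
    """Fraction of samples answered correctly under all rotations."""
    if not samples:
        return 0.0
    hits = sum(
        circular_correct(predict, s["question"], s["options"], s["answer"])
        for s in samples
    )
    return hits / len(samples)
```

Under this scheme a predictor that always returns the first listed option scores 0 on any sample with more than one option, since some rotation places a wrong answer first.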

Weiyun Wang, Yiming Ren, Haowen Luo, Tiantong Li, Chenxiang Yan, Zhe Chen, Wenhai Wang, Qingyun Li, Lewei Lu, Xizhou Zhu, Yu Qiao, Jifeng Dai • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 | Accuracy | 81 | 1165 |
| Visual Question Answering | TextVQA | Accuracy | 60.2 | 1117 |
| Visual Question Answering | VizWiz | Accuracy | 58.1 | 1043 |
| Visual Question Answering | GQA | Accuracy | 63.9 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 86.3 | 935 |
| Multimodal Evaluation | MME | Score | 1620 | 557 |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 84.81 | 345 |
| Referring Expression Comprehension | RefCOCO (val) | Accuracy | 90.56 | 335 |
| Referring Expression Comprehension | RefCOCO (testA) | Accuracy | 94.24 | 333 |
| Referring Expression Comprehension | RefCOCOg (test) | Accuracy | 88.26 | 291 |

Showing 10 of 31 rows.
