
Compositional Chain-of-Thought Prompting for Large Multimodal Models

About

The combination of strong visual backbones and Large Language Model (LLM) reasoning has made Large Multimodal Models (LMMs) the current standard for a wide range of vision-and-language (VL) tasks. However, recent research has shown that even the most advanced LMMs still struggle to capture aspects of compositional visual reasoning, such as attributes and relationships between objects. One solution is to utilize scene graphs (SGs), a formalization of objects and their relations and attributes that has been extensively used as a bridge between the visual and textual domains. Yet, SG data requires annotations that are expensive to collect and thus not easily scalable. Moreover, fine-tuning an LMM on SG data can lead to catastrophic forgetting of the pretraining objective. To overcome this, inspired by chain-of-thought methods, we propose Compositional Chain-of-Thought (CCoT), a novel zero-shot chain-of-thought prompting method that utilizes SG representations to extract compositional knowledge from an LMM. Specifically, we first generate an SG using the LMM, and then use that SG in the prompt to produce a response. Through extensive experiments, we find that the proposed CCoT approach not only improves LMM performance on several VL compositional benchmarks but also improves the performance of several popular LMMs on general multimodal benchmarks, without the need for fine-tuning or annotated ground-truth SGs. Code: https://github.com/chancharikmitra/CCoT
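The two-step procedure described above (generate a scene graph, then answer conditioned on it) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: `query_lmm` is a hypothetical stand-in for any LMM inference call (e.g. LLaVA or GPT-4V), and the prompt wording is paraphrased rather than quoted from the paper.

```python
# Minimal sketch of zero-shot CCoT prompting: two sequential LMM calls,
# where the first call's output (a scene graph) is injected into the
# second call's prompt as intermediate compositional context.

SG_PROMPT = (
    "For the provided image and its associated question, generate a scene "
    "graph in JSON format that includes: (1) objects relevant to answering "
    "the question, (2) their attributes, and (3) relationships between "
    "them.\n\nQuestion: {question}"
)

ANSWER_PROMPT = (
    "Scene Graph:\n{scene_graph}\n\n"
    "Use the image and the scene graph above as context to answer the "
    "following question: {question}"
)

def query_lmm(image, prompt):
    """Hypothetical LMM call; replace with a real model's inference API."""
    raise NotImplementedError

def ccot_answer(image, question, lmm=query_lmm):
    # Step 1: prompt the LMM to produce a scene graph for this image/question.
    scene_graph = lmm(image, SG_PROMPT.format(question=question))
    # Step 2: feed the generated scene graph back into the prompt and
    # ask for the final answer (no fine-tuning, no ground-truth SGs).
    return lmm(image, ANSWER_PROMPT.format(scene_graph=scene_graph,
                                           question=question))
```

Note that the scene graph is generated zero-shot by the same model that answers, so the method needs no annotated SG data at inference time.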

Chancharik Mitra, Brandon Huang, Trevor Darrell, Roei Herzig • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VQA v2 | Accuracy | 80.3 | 1165
Visual Question Answering | TextVQA | Accuracy | 61.1 | 1117
Visual Question Answering | VizWiz | Accuracy | 57.5 | 1043
Visual Question Answering | GQA | Accuracy | 63.1 | 963
Multimodal Reasoning | MM-Vet | MM-Vet Score | 36.1 | 281
Visual Question Answering | POPE | Accuracy | 87.3 | 71
Multi-modal Understanding | LLaVA-Bench Wild | LLaVA^W Score | 91.2 | 52
Multimodal Reasoning | MMBench | -- | -- | 50
Visual Perception | MMVP | Accuracy | 30 | 47
Multimodal Science Question Answering | ScienceQA | Overall Average Score | 76.84 | 36

Showing 10 of 36 rows.
