
Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models

About

Solving complex visual tasks such as "Who invented the musical instrument on the right?" involves a composition of skills: understanding space, recognizing instruments, and also retrieving prior knowledge. Recent work shows promise by decomposing such tasks using a large language model (LLM) into an executable program that invokes specialized vision models. However, generated programs are error-prone: they omit necessary steps, include spurious ones, and are unable to recover when the specialized models give incorrect outputs. Moreover, they require loading multiple models, incurring high latency and computation costs. We propose Visual Program Distillation (VPD), an instruction tuning framework that produces a vision-language model (VLM) capable of solving complex visual tasks with a single forward pass. VPD distills the reasoning ability of LLMs by using them to sample multiple candidate programs, which are then executed and verified to identify a correct one. It translates each correct program into a language description of the reasoning steps, which are then distilled into a VLM. Extensive experiments show that VPD improves the VLM's ability to count, understand spatial relations, and reason compositionally. Our VPD-trained PaLI-X outperforms all prior VLMs, achieving state-of-the-art performance across complex vision tasks, including MMBench, OK-VQA, A-OKVQA, TallyQA, POPE, and Hateful Memes. An evaluation with human annotators also confirms that VPD improves model response factuality and consistency. Finally, experiments on content moderation demonstrate that VPD is also helpful for adaptation to real-world applications with limited data.
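The VPD pipeline described above (sample candidate programs with an LLM, execute them with vision tools, keep only verified-correct ones, and translate their traces into chain-of-thought training data) can be sketched roughly as follows. This is a minimal illustrative mock-up: every function and class name here (sample_candidate_programs, execute_program, translate_to_rationale, build_vpd_example, TrainingExample) is a hypothetical stand-in, not the paper's actual API, and the program execution is stubbed rather than calling real vision models.

```python
# Hypothetical sketch of the VPD data-generation loop.
# All names are illustrative stand-ins; real VPD prompts an LLM for programs
# and dispatches to specialized vision models (detectors, OCR, VQA, etc.).

from dataclasses import dataclass


@dataclass
class TrainingExample:
    question: str
    answer: str
    rationale: str  # chain-of-thought later distilled into the VLM


def sample_candidate_programs(question, k=3):
    """Stand-in for an LLM sampling k candidate executable visual programs."""
    return [
        "boxes = detect(image, 'dog'); answer = len(boxes)",
        "answer = vqa(image, question)",
    ][:k]


def execute_program(program, image):
    """Stand-in for executing a program that invokes vision tools."""
    # Toy behavior only: pretend the detector-based program returns "2".
    return "2" if "detect" in program else "3"


def translate_to_rationale(program, trace):
    """Convert a correct program and its execution trace into a
    natural-language reasoning chain."""
    return f"Step 1: run `{program}`. Step 2: the tools returned {trace}."


def build_vpd_example(image, question, label, k=3):
    """Sample programs, verify each against the ground-truth label, and turn
    the first verified-correct one into a chain-of-thought example."""
    for program in sample_candidate_programs(question, k):
        output = execute_program(program, image)
        if output == label:  # verification step: discard incorrect programs
            rationale = translate_to_rationale(program, output)
            return TrainingExample(question, label, rationale)
    return None  # no sampled program passed verification


example = build_vpd_example(image=None, question="How many dogs?", label="2")
```

The resulting (question, answer, rationale) triples would then be used for instruction tuning, so the VLM learns to reproduce the programmatic reasoning in a single forward pass.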

Yushi Hu, Otilia Stretcu, Chun-Ta Lu, Krishnamurthy Viswanathan, Kenji Hata, Enming Luo, Ranjay Krishna, Ariel Fuxman • 2023

Related benchmarks

Task                             Dataset          Result          Rank
Visual Question Answering        VQA v2           Accuracy 83.9   1165
Visual Question Answering        TextVQA          Accuracy 65.4   1117
Visual Question Answering        GQA              Accuracy 64.9   963
Object Hallucination Evaluation  POPE             Accuracy 88.8   935
Visual Question Answering        OK-VQA           Accuracy 84.7   224
Multimodal Model Evaluation      MMBench          Accuracy 76.2   180
Visual Question Answering        GQA (test-dev)   Accuracy 67.3   178
Visual Question Answering        A-OKVQA (test)   Accuracy 80.4   79
Visual Question Answering        OK-VQA (val)     Accuracy 66.8   47
Meme Classification              HatefulMemes     AUC 89.2        43

(Showing 10 of 14 rows)
