ViperGPT: Visual Inference via Python Execution for Reasoning
About
Answering visual queries is a complex task that requires both visual processing and reasoning. End-to-end models, the dominant approach for this task, do not explicitly differentiate between the two, limiting interpretability and generalization. Learning modular programs presents a promising alternative, but has proven challenging due to the difficulty of learning both the programs and modules simultaneously. We introduce ViperGPT, a framework that leverages code-generation models to compose vision-and-language models into subroutines to produce a result for any query. ViperGPT utilizes a provided API to access the available modules, and composes them by generating Python code that is later executed. This simple approach requires no further training, and achieves state-of-the-art results across various complex visual tasks.
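The pipeline described above can be sketched in miniature: a fixed API of vision modules is exposed to a code-generation model, which emits a Python program that is then executed to answer the query. The snippet below uses hypothetical stub implementations throughout — `ImagePatch`, `find`, and the stand-in `generate_program` are illustrative placeholders, not the paper's actual modules, which wrap real detection and VQA models and a real code-generation LLM.

```python
class ImagePatch:
    """Stand-in for an image region; the real version wraps vision models."""

    def __init__(self, name="full image"):
        self.name = name

    def find(self, object_name):
        # A real implementation would run an open-vocabulary detector
        # and return one patch per detected instance.
        return [ImagePatch(object_name)]

    def simple_query(self, question):
        # A real implementation would call a VQA model on this patch.
        return f"answer to '{question}' about {self.name}"


def generate_program(query):
    # Stand-in for the code-generation model: given the query and the
    # API above as a prompt, it would emit Python source. Here we
    # return a fixed program for illustration.
    return (
        "def execute_command(image):\n"
        "    muffins = image.find('muffin')\n"
        "    return str(len(muffins))\n"
    )


def run_query(query, image):
    # Execute the generated code in a namespace that exposes the API,
    # then call the entry point it defines.
    namespace = {"ImagePatch": ImagePatch}
    exec(generate_program(query), namespace)
    return namespace["execute_command"](image)


result = run_query("How many muffins are there?", ImagePatch())
print(result)
```

Because the modules are swappable behind the API, improving any single model (detector, VQA, code generator) improves the whole system without retraining.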
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Referring Expression Comprehension | RefCOCO (testA) | -- | 333 |
| Visual Question Answering | OK-VQA (test) | Accuracy: 51.9 | 296 |
| Referring Expression Comprehension | RefCOCO+ (testA) | -- | 207 |
| Video Question Answering | NExT-QA (test) | -- | 204 |
| Visual Question Answering | GQA (test-dev) | Accuracy: 48.1 | 178 |
| Video Question Answering | NExT-QA (val) | Overall Acc: 60 | 176 |
| Visual Question Answering | A-OKVQA | Acc: 49.9 | 175 |
| Visual Question Answering | GQA (test) | Accuracy: 37.9 | 119 |
| Massive Multi-discipline Multimodal Understanding | MMMU | Accuracy: 54 | 88 |
| Video Question Answering | EgoSchema (500-question subset) | Accuracy: 15.8 | 50 |