ViperGPT: Visual Inference via Python Execution for Reasoning
About
Answering visual queries is a complex task that requires both visual processing and reasoning. End-to-end models, the dominant approach for this task, do not explicitly differentiate between the two, limiting interpretability and generalization. Learning modular programs presents a promising alternative, but has proven challenging due to the difficulty of learning both the programs and modules simultaneously. We introduce ViperGPT, a framework that leverages code-generation models to compose vision-and-language models into subroutines to produce a result for any query. ViperGPT utilizes a provided API to access the available modules, and composes them by generating Python code that is later executed. This simple approach requires no further training, and achieves state-of-the-art results across various complex visual tasks.
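The pipeline described above can be sketched in a few lines. Everything below is a hypothetical stand-in, not the official ViperGPT API: `ImagePatch`, `find`, and the canned program returned by `generate_code` are illustrative stubs for the real detector modules and the code-generation model.

```python
# Minimal sketch of a ViperGPT-style pipeline (hypothetical stubs): a
# code-generation model is prompted with a module API and a query, emits a
# Python program, and that program is executed against the vision modules.

class ImagePatch:
    """Stub standing in for an image-patch wrapper around vision modules."""
    def __init__(self, objects):
        # Pretend detection results, e.g. {"muffin": 4, "kid": 2}.
        self.objects = objects

    def find(self, name):
        # A real system would call an open-vocabulary object detector here.
        return [name] * self.objects.get(name, 0)

def generate_code(query):
    # Stand-in for the code-generation model; returns a canned program
    # for illustration only (a real system queries an LLM with the API).
    return (
        "def execute_command(image):\n"
        "    muffins = image.find('muffin')\n"
        "    kids = image.find('kid')\n"
        "    return len(muffins) // len(kids)\n"
    )

def run_query(image, query):
    namespace = {}
    exec(generate_code(query), namespace)       # compile the generated program
    return namespace["execute_command"](image)  # run it against the modules

image = ImagePatch({"muffin": 4, "kid": 2})
print(run_query(image, "How many muffins can each kid have?"))  # → 2
```

No training is involved: the composition logic lives entirely in the generated program, which is why swapping in stronger modules or a stronger code generator improves results without retraining.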
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Referring Expression Comprehension | RefCOCO (testA) | -- | 342 |
| Visual Question Answering | OK-VQA (test) | Accuracy: 51.9 | 327 |
| Referring Expression Comprehension | RefCOCOg (test) | -- | 300 |
| Referring Expression Comprehension | RefCOCO+ (testB) | -- | 244 |
| Referring Expression Comprehension | RefCOCO+ (testA) | -- | 216 |
| Referring Expression Comprehension | RefCOCO (testB) | -- | 205 |
| Video Question Answering | NExT-QA (test) | -- | 204 |
| Visual Question Answering | A-OKVQA | Accuracy: 49.9 | 202 |
| Visual Question Answering | GQA (test) | Accuracy: 37.9 | 188 |
| Visual Question Answering | GQA (test-dev) | Accuracy: 48.1 | 184 |