CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
About
When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
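The diagnostic annotations are what set CLEVR apart: every question ships with a functional program spelling out the chain of reasoning steps (filter, relate, count, compare, and so on) needed to answer it. The sketch below loads those annotations and prints one question's program. It is a minimal example, assuming the standard CLEVR v1.0 download layout; the path is illustrative, and JSON key names vary slightly across releases (the generation code emits `type` where the v1.0 release uses `function`), so the sketch accepts either.

```python
import json

# Minimal sketch: inspect one CLEVR question and its functional program.
# Assumes the standard CLEVR v1.0 download layout; the path below is
# illustrative. The test split omits answers and programs, so use val.
path = "CLEVR_v1.0/questions/CLEVR_val_questions.json"

with open(path) as f:
    questions = json.load(f)["questions"]

q = questions[0]
print(q["question"])        # natural-language question
print(q["answer"])          # ground-truth answer
print(q["image_filename"])  # rendered scene the question asks about

# Each program node names a reasoning step (filter_color, relate,
# count, equal_size, ...) plus its inputs. Different releases key the
# step name as "function" or "type", so accept either.
for node in q["program"]:
    name = node.get("function") or node.get("type")
    print(name, node["inputs"], node["value_inputs"])
```

Grouping questions by their final program node (count, exist, query_color, ...) is one way to recover the per-reasoning-skill accuracy breakdowns the dataset is designed to support.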
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Visual Question Answering | CLEVR (test) | Overall Accuracy | 92.6 | 61 |
| Visual Question Answering | CLEVR 1.0 (test) | Overall Accuracy | 92.6 | 46 |
| Video Question Answering | STAR (test) | Interaction Score | 25.06 | 42 |
| Visual Question Answering | CLEVR-Humans | Accuracy | 57.6 | 24 |
| Visual Question Answering | CLEVR-Humans 1.0 (test) | Accuracy | 43.2 | 22 |
| Visual Question Answering | CLEVR-CoGenT (Condition A) | Accuracy | 96.6 | 21 |
| Visual Question Answering | CLEVR-CoGenT (Condition B) | Accuracy | 92.7 | 18 |
| Visual Question Answering | CLEVR (val) | Overall Accuracy | 92.6 | 15 |
| Visual Question Answering | CLEVR pixels (test) | Overall Accuracy | 92.6 | 7 |