DePlot: One-shot visual language reasoning by plot-to-table translation
About
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities are still quite limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key to this method is a modality conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.
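The second step of the pipeline reduces to prompting an LLM with DePlot's linearized table plus a worked exemplar. Below is a minimal sketch of that step; the table markup (rows joined by `<0x0A>`, cells by `|`) and the prompt wording are illustrative assumptions, not the paper's exact format.

```python
# Sketch of one-shot prompting over a linearized table.
# Assumed conventions: "<0x0A>" separates rows, " | " separates cells.

def linearize_table(header, rows, row_sep="<0x0A>", cell_sep=" | "):
    """Flatten a table into a single string an LLM can consume."""
    lines = [cell_sep.join(header)] + [cell_sep.join(map(str, r)) for r in rows]
    return row_sep.join(lines)

def build_prompt(table_text, question, exemplar):
    """Compose a one-shot prompt: one worked exemplar, then the new query."""
    return (
        "Read the table and answer the question.\n\n"
        f"{exemplar}\n\n"
        f"Table: {table_text}\n"
        f"Question: {question}\n"
        "Answer:"
    )

# One worked exemplar gives the LLM the task format ("one-shot").
exemplar = (
    "Table: Year | Sales<0x0A>2020 | 10<0x0A>2021 | 15\n"
    "Question: What is the total sales across both years?\n"
    "Answer: 25"
)
table = linearize_table(["Country", "GDP"], [["A", 3], ["B", 7]])
prompt = build_prompt(table, "Which country has the higher GDP?", exemplar)
print(prompt)
```

The prompt string would then be sent to any pretrained LLM; because DePlot handles the perception step, the LLM only has to reason over text.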
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Chart Question Answering | ChartQA | -- | -- | 229 |
| Chart Question Answering | ChartQA (test) | -- | -- | 129 |
| Visual Question Answering | ChartQA (test) | Accuracy | 70.5 | 58 |
| Chart Question Answering | ChartQA (val) | Relaxed Acc (avg.) | 52.9 | 25 |
| Visual Question Answering | PlotQA | Accuracy (v1) | 62.2 | 25 |
| Chart Information Extraction | ChartQA (val) | mPrecision (IoU Range) | 81.5 | 15 |
| Chart Information Extraction | PlotQA (val) | mPrecision (0.5:0.05:0.95) | 74.71 | 15 |
| Chart-to-Table | ChartQA (test) | RMS F1 | 79.64 | 12 |
| Figure Captioning | SciCap First sentence | BLEU | 15.27 | 10 |
| Chart Question Answering | ChartQA human questions subset (test) | Relaxed Accuracy | 67.6 | 9 |