GIT: A Generative Image-to-text Transformer for Vision and Language
About
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoders and decoders) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture to one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost performance. Without bells and whistles, our GIT establishes a new state of the art on 12 challenging benchmarks by a large margin. For instance, our model surpasses human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks. Code is released at https://github.com/microsoft/GenerativeImage2Text.
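The core idea above (one image encoder, one text decoder, trained only with language modeling) can be illustrated with a minimal toy sketch. This is not the released model: the vocabulary, the mean-pooling "encoder", and the dot-product-style "decoder" scores below are hypothetical stand-ins chosen to show how image features condition greedy next-token generation.

```python
# Toy vocabulary; the real model uses a BPE tokenizer over a large vocabulary.
VOCAB = ["[BOS]", "[EOS]", "a", "dog", "runs"]

def encode_image(patches):
    """Toy image encoder: mean-pool patch features into one vector.
    GIT's actual encoder is a contrastively pre-trained vision Transformer."""
    dim = len(patches[0])
    return [sum(p[i] for p in patches) / len(patches) for i in range(dim)]

def decoder_logits(image_feat, tokens):
    """Toy text decoder: scores each vocab word from the image feature,
    a stand-in for a Transformer attending to image and text tokens."""
    scores = []
    for idx, word in enumerate(VOCAB):
        s = image_feat[idx % len(image_feat)]  # condition on the image
        if word in tokens:
            s -= 10.0                          # crude repetition penalty
        if word == "[BOS]":
            s = -100.0                         # never re-emit BOS
        scores.append(s)
    return scores

def generate(patches, max_len=5):
    """Greedy language-model decoding conditioned on image features:
    image tokens are fixed context, text tokens are emitted one by one."""
    feat = encode_image(patches)
    tokens = ["[BOS]"]
    for _ in range(max_len):
        logits = decoder_logits(feat, tokens)
        next_word = VOCAB[max(range(len(VOCAB)), key=lambda i: logits[i])]
        tokens.append(next_word)
        if next_word == "[EOS]":
            break
    return tokens

caption = generate([[0.0, 0.1, 0.9, 0.8, 0.7]])
print(" ".join(caption))  # prints "[BOS] a dog runs [EOS]"
```

The same decoding loop also covers the generation-based classification and VQA schemes mentioned in the abstract: a question or a class-name prefix is simply appended after the image features, and the answer is generated as free-form text.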
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 81.7 | 1165 |
| Visual Question Answering | TextVQA | Accuracy | 59.8 | 1117 |
| Visual Question Answering | VizWiz | Accuracy | 71 | 1043 |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 145 | 682 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 81.74 | 664 |
| Video Question Answering | MSRVTT-QA | Accuracy | 45.6 | 481 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 81.92 | 466 |
| Video Question Answering | MSRVTT-QA (test) | Accuracy | 45.6 | 371 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 89.22 | 359 |
| Video Question Answering | MSVD-QA | Accuracy | 58.2 | 340 |