
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework

About

In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task/modality-specific customization. We propose OFA, a task-agnostic and modality-agnostic framework that supports task comprehensiveness. OFA unifies a diverse set of cross-modal and unimodal tasks, including image generation, visual grounding, image captioning, image classification, language modeling, etc., in a simple sequence-to-sequence learning framework. OFA follows instruction-based learning in both the pretraining and finetuning stages, requiring no extra task-specific layers for downstream tasks. In comparison with recent state-of-the-art vision & language models that rely on extremely large cross-modal datasets, OFA is pretrained on only 20M publicly available image-text pairs. Despite its simplicity and relatively small-scale training data, OFA achieves new SOTAs on a series of cross-modal tasks while attaining highly competitive performance on unimodal tasks. Our further analysis indicates that OFA can also effectively transfer to unseen tasks and unseen domains. Our code and models are publicly available at https://github.com/OFA-Sys/OFA.
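The core idea above is that every task, cross-modal or unimodal, is cast as an instruction-conditioned sequence-to-sequence problem, so one encoder-decoder handles them all with no task-specific heads. The sketch below illustrates that formulation only; it is not the official OFA API, and the `make_example` helper, the `<image>` placeholder, and the prompt wordings are illustrative assumptions (in the real model, images become patch embeddings and regions are discretized into location tokens).

```python
# Hypothetical sketch of instruction-based seq2seq unification:
# each task is reduced to a (source instruction, target sequence) pair
# that a single encoder-decoder model can consume.

def make_example(task, **kwargs):
    """Map a task to a (source instruction, target sequence) pair.

    `<image>` stands in for the visual input; targets are plain text
    (or, for grounding, a string of discretized location tokens).
    """
    if task == "caption":
        return ("<image> What does the image describe?", kwargs["caption"])
    if task == "vqa":
        return (f"<image> {kwargs['question']}", kwargs["answer"])
    if task == "grounding":
        return (f"<image> Which region does the text \"{kwargs['text']}\" describe?",
                kwargs["region_tokens"])
    if task == "classify":
        # Image classification reuses the captioning instruction; the
        # target is simply the label name as text.
        return ("<image> What does the image describe?", kwargs["label"])
    raise ValueError(f"unknown task: {task}")

# All tasks yield the same interface, so one model trains on all of them.
src, tgt = make_example("vqa", question="What color is the car?", answer="red")
print(src)  # <image> What color is the car?
print(tgt)  # red
```

Because the interface is uniform, adding a downstream task amounts to writing a new instruction template rather than attaching a new output layer.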

Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Captioning | MS COCO Karpathy (test) | CIDEr 146.7 | 682 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy 82 | 664 |
| Image Classification | ImageNet-1K | Top-1 Acc 85.6 | 524 |
| Image Classification | Flowers102 | Accuracy 96.9 | 478 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy 82 | 466 |
| Natural Language Understanding | GLUE | SST-2 96.6 | 452 |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy 85.8 | 345 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy 82 | 337 |
| Referring Expression Comprehension | RefCOCO (val) | Accuracy 90.05 | 335 |
| Referring Expression Comprehension | RefCOCO (testA) | Accuracy 83.87 | 333 |

Showing 10 of 117 rows.
