
MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning

About

Instruction tuning, a new learning paradigm that fine-tunes pre-trained language models on tasks specified through instructions, has shown promising zero-shot performance on various natural language processing tasks. However, it has yet to be explored for vision and multimodal tasks. In this work, we introduce MULTIINSTRUCT, the first multimodal instruction tuning benchmark dataset that consists of 62 diverse multimodal tasks in a unified seq-to-seq format covering 10 broad categories. The tasks are derived from 21 existing open-source datasets and each task is equipped with 5 expert-written instructions. We take OFA as the base pre-trained model for multimodal instruction tuning, and to further improve its zero-shot performance, we explore multiple transfer learning strategies to leverage the large-scale NATURAL INSTRUCTIONS dataset. Experimental results demonstrate strong zero-shot performance on various unseen multimodal tasks and the benefit of transfer learning from a text-only instruction dataset. We also design a new evaluation metric, Sensitivity, to evaluate how sensitive the model is to the variety of instructions. Our results indicate that fine-tuning the model on a diverse set of tasks and instructions leads to a reduced sensitivity to variations in instructions for each task.
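The Sensitivity metric mentioned above can be sketched as follows. This is a minimal illustration, assuming Sensitivity is computed as the coefficient of variation (standard deviation divided by mean) of a model's scores across the instruction variants of a task, averaged over tasks; the function name and the example score values are hypothetical.

```python
import statistics


def sensitivity(scores_per_task):
    """Average, over tasks, of the coefficient of variation (std / mean)
    of a model's scores across that task's instruction variants.
    Lower values mean the model is more robust to instruction wording."""
    cvs = []
    for scores in scores_per_task:
        mean = statistics.mean(scores)
        cvs.append(statistics.stdev(scores) / mean)
    return statistics.mean(cvs)


# Hypothetical accuracies on two unseen tasks, each evaluated
# with 5 different expert-written instructions.
task_a = [0.62, 0.60, 0.58, 0.61, 0.59]
task_b = [0.45, 0.50, 0.40, 0.48, 0.44]
print(sensitivity([task_a, task_b]))
```

A model whose scores barely change across instruction phrasings yields a Sensitivity near zero, which is the behavior the abstract reports for models tuned on a diverse set of tasks and instructions.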

Zhiyang Xu, Ying Shen, Lifu Huang • 2022

Related benchmarks

Task                                Dataset                                           Result                          Rank
Multimodal Evaluation               MME                                               --                              557
Multimodal Evaluation               MMBench                                           --                              118
Large Multimodal Model Evaluation   MM-Vet                                            Average Score: 17.2             58
Textual response generation         TEXTBINDEVAL                                      BLEU-2: 7.16                    7
Lexical Diversity Analysis          Multimodal Instruction-Tuning Datasets (train)    Instruct Diversity Score: 0.51  6
