
TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild

About

Large language models with instruction-following abilities have revolutionized the field of artificial intelligence. These models generalize exceptionally well to a wide range of real-world tasks through their natural language interfaces. However, their performance relies heavily on high-quality exemplar data, which is often difficult to obtain. This challenge is further exacerbated in multimodal instruction following. We introduce TextBind, an almost annotation-free framework for equipping large language models with multi-turn interleaved multimodal instruction-following capabilities. Our approach requires only image-caption pairs and generates multi-turn multimodal instruction-response conversations from a language model. To accommodate interleaved image-text inputs and outputs, we devise MIM, a language model-centric architecture that seamlessly integrates image encoder and decoder models. We release our dataset, model, and demo to foster future research in the area of multimodal instruction following.
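To make the data-generation idea concrete, below is a minimal sketch of a TextBind-style pipeline: a text-only LLM is shown a handful of image captions and asked to draft a multi-turn conversation that references images through placeholder tags, which are then swapped back to the actual images. The prompt wording, the `<img_i>` placeholder format, and the `llm_generate` helper are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of TextBind-style conversation generation from image-caption pairs.
# Assumptions: captions stand in for images during generation; any
# text-completion backend can be plugged in via `llm_generate`.
from typing import Callable, Dict, List


def build_generation_prompt(captions: List[str]) -> str:
    """Describe candidate images by their captions and ask the LLM to write a
    multi-turn user/assistant dialogue that interleaves text and <img_i> tags."""
    image_list = "\n".join(f"<img_{i}>: {c}" for i, c in enumerate(captions))
    return (
        "You can see the following images, described by their captions:\n"
        f"{image_list}\n\n"
        "Write a realistic multi-turn conversation between a user and an "
        "assistant. Either side may include an image by writing <img_i> inline. "
        "Use the images naturally and do not mention that they were given as captions."
    )


def generate_conversation(
    pairs: List[Dict[str, str]],          # each item: {"image": path, "caption": text}
    llm_generate: Callable[[str], str],   # any text-only LLM backend
) -> str:
    """Generate one interleaved multimodal conversation from image-caption pairs."""
    prompt = build_generation_prompt([p["caption"] for p in pairs])
    draft = llm_generate(prompt)
    # Replace caption placeholders with references to the real images, yielding
    # a training sample with interleaved text and images.
    for i, p in enumerate(pairs):
        draft = draft.replace(f"<img_{i}>", f'<image src="{p["image"]}">')
    return draft
```

In the actual framework, the captions fed to each prompt are sampled to be topically related, and generated conversations are filtered for quality; the sketch omits both steps for brevity.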

Huayang Li, Siheng Li, Deng Cai, Longyue Wang, Lemao Liu, Taro Watanabe, Yujiu Yang, Shuming Shi • 2023

Related benchmarks

Task                                        | Dataset                                        | Result                          | Rank
Multimodal Evaluation                       | MME                                            | -                               | 557
Multimodal Evaluation                       | MMBench                                        | -                               | 118
Large Multimodal Model Evaluation           | MM-Vet                                         | Average Score: 23.9             | 58
Textual Response Generation                 | TEXTBINDEVAL                                   | BLEU-2: 27.64                   | 7
Lexical Diversity Analysis                  | Multimodal Instruction-Tuning Datasets (train) | Instruct Diversity Score: 1.76  | 6
Image Generation                            | TEXTBINDEVAL                                   | CLIP Sim (T1): 0.64             | 5
Multi-turn Multimodal Instruction-following | TEXTBINDEVAL 1.0 (test)                        | Overall Score: 3.39             | 3

Other info

Code
